Science.gov

Sample records for qcd evolution kernels

  1. Wilson Dslash Kernel From Lattice QCD Optimization

    SciTech Connect

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we will detail our work on optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on regular Xeon architectures as well.
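    For orientation, the kernel being optimized applies the Wilson hopping term at every lattice site; in one common convention (overall factors and signs vary between codes) it reads

$$
(D\psi)(x) \;=\; \sum_{\mu=1}^{4}\Big[\,U_\mu(x)\,(1-\gamma_\mu)\,\psi(x+\hat\mu)\;+\;U_\mu^\dagger(x-\hat\mu)\,(1+\gamma_\mu)\,\psi(x-\hat\mu)\Big],
$$

    where the $U_\mu(x)$ are SU(3) gauge links and $\psi$ carries spin and colour indices at each site. The low arithmetic intensity of this eight-neighbour stencil is typically what makes memory bandwidth, vectorization and cache blocking the central optimization concerns.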

  2. QCD Evolution 2015

    NASA Astrophysics Data System (ADS)

    These are the proceedings of the QCD Evolution 2015 Workshop, which was held 26-30 May 2015 at Jefferson Lab, Newport News, Virginia, USA. The workshop is a continuation of a series of workshops held during four consecutive years: in 2011, 2012, and 2013 at Jefferson Lab, and in 2014 in Santa Fe, NM. With the rapid developments in our understanding of the evolution of parton distributions, including low-x physics, TMDs, GPDs, and higher-twist correlation functions, and with the associated progress in perturbative QCD, lattice QCD and effective field theory techniques, we looked forward with great enthusiasm to the 2015 meeting. Special attention was also paid to the participation of experimentalists, as the topics discussed are of immediate importance for the JLab 12 experimental program and a future Electron Ion Collider.

  3. Archeology and evolution of QCD

    NASA Astrophysics Data System (ADS)

    De Rújula, A.

    2017-03-01

    These are excerpts from the closing talk at the "XIIth Conference on Quark Confinement and the Hadron Spectrum", which took place last summer in Thessaloniki, an excellent place to enjoy an interest in archeology. A more complete personal view of the early days of QCD and the rest of the Standard Model is given in [1]. Here I discuss a few of the points which, to my judgement, illustrate well the QCD evolution (in time), both from a scientific and a sociological point of view.

  4. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.

    Program summary
    Program title: QCDNUM
    Version: 17.00
    Catalogue identifier: AEHV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public Licence
    No. of lines in distributed program, including test data, etc.: 45 736
    No. of bytes in distributed program, including test data, etc.: 911 569
    Distribution format: tar.gz
    Programming language: Fortran-77
    Computer: All
    Operating system: All
    RAM: Typically 3 Mbytes
    Classification: 11.5
    Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
    Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
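    For reference, the unpolarised evolution solved by such programs is the standard DGLAP system (written here in a generic form, not in QCDNUM's internal notation):

$$
\frac{\partial f_i(x,\mu^2)}{\partial \ln\mu^2}
\;=\; \sum_j \frac{\alpha_s(\mu^2)}{2\pi}\int_x^1 \frac{dz}{z}\,P_{ij}(z)\,f_j\!\left(\frac{x}{z},\mu^2\right),
$$

    where the $P_{ij}$ are the splitting functions; the "alternative sets of evolution kernels" mentioned in the abstract correspond to supplying different kernels in place of the $P_{ij}$ in this convolution.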

  5. QCD Evolution in Dense Medium

    NASA Astrophysics Data System (ADS)

    Gay Ducati, M. B.

    The dynamics of the partonic distributions is a main concern in high-energy physics, since it provides the initial conditions for heavy-ion colliders. The determination of the evolution equation which drives the partonic behavior is a subject of great interest, since it is connected to the observables. This lecture aims to present a brief review of the evolution equations that describe the partonic dynamics at high energies. First the linear evolution equations (DGLAP and BFKL) are presented. Then the formulations developed to deal with high-density effects, which give rise to the non-linear evolution equations (GLR, AGL, BK, JIMWLK), are discussed, as well as an example of the related phenomenology.
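    As an example of the non-linear equations listed above, the leading-order BK equation for the dipole scattering amplitude $N(x_{01},Y)$ reads, in coordinate space,

$$
\frac{\partial N(x_{01},Y)}{\partial Y}
= \frac{\bar\alpha_s}{2\pi}\int d^2x_2\;
\frac{x_{01}^2}{x_{02}^2\,x_{12}^2}
\Big[N(x_{02},Y)+N(x_{12},Y)-N(x_{01},Y)-N(x_{02},Y)\,N(x_{12},Y)\Big],
$$

    with $\bar\alpha_s=\alpha_s N_c/\pi$; the quadratic term is the saturation correction that tames the linear BFKL growth at high parton density.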

  6. R evolution: Improving perturbative QCD

    SciTech Connect

    Hoang, Andre H.; Jain, Ambar; Stewart, Iain W.; Scimemi, Ignazio

    2010-07-01

    Perturbative QCD results in the MS-bar scheme can be dramatically improved by switching to a scheme that accounts for the dominant power law dependence on the factorization scale in the operator product expansion. We introduce the "MSR scheme" which achieves this in a Lorentz and gauge invariant way and has a very simple relation to MS-bar. Results in MSR depend on a cutoff parameter R, in addition to the μ of MS-bar. R variations can be used to independently estimate (i) the size of power corrections, and (ii) higher-order perturbative corrections (much like μ in MS-bar). We give two examples at three-loop order, the ratio of mass splittings in the B*-B and D*-D systems, and the Ellis-Jaffe sum rule as a function of momentum transfer Q in deep inelastic scattering. Comparing to data, the perturbative MSR results work well even for Q ≈ 1 GeV, and power corrections are reduced compared to MS-bar.

  7. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based methods. This article presents a novel technique to learn the kernel parameters of a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique avoids some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection in hyperspectral images verify the effectiveness of the proposed method.
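    The KFKT-based objective function itself is not given in the abstract; purely as an illustration of the general recipe (differential evolution searching over a Gaussian kernel width against a separability score), a minimal sketch using kernel-target alignment as a stand-in objective might look like the following. The data, bounds and score are placeholders, not the criterion used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def rbf_kernel(X, gamma):
    # Gaussian (RBF) kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def neg_alignment(params, X, y):
    # Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F),
    # used here only as a simple stand-in for the KFKT-based objective.
    K = rbf_kernel(X, params[0])
    Y = np.outer(y, y)                       # labels y in {-1, +1}
    return -np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# Toy two-class data standing in for hyperspectral target/background pixels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.r_[-np.ones(50), np.ones(50)]

result = differential_evolution(neg_alignment, bounds=[(1e-3, 10.0)],
                                args=(X, y), seed=0)
print("selected kernel width (gamma):", result.x[0])
```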

  8. Hydrodynamical Evolution near the QCD Critical End Point

    NASA Astrophysics Data System (ADS)

    Nonaka, Chiho; Asakawa, Masayuki

    2003-10-01

    Recently, the possibility of the existence of a critical end point (CEP) in the QCD phase diagram has attracted a lot of attention, and several experimental signatures have been proposed for it^1. Berdnikov and Rajagopal discussed the growth of the correlation length near the critical end point in heavy-ion collisions using a schematic argument^2. However, there has been, so far, no quantitative study of the hydrodynamic evolution near the CEP. Here we quantitatively evaluate the effect of the critical end point on the observables using a hydrodynamical model. First, we construct an equation of state (EOS) that includes the critical behavior of the CEP. We assume that the singular part of the EOS near the CEP belongs to the same universality class as the 3-d Ising model. Then we match the singular part of the EOS with known QGP and hadronic EOS. We find a strong focusing effect near the critical end point in the n_B/s trajectories in the T-μ plane. This behavior is very different from that of a bag-model EOS, which is used in usual hydrodynamical models. This suggests that the effect of the CEP appears strongly in the time evolution of the system and in the experimental observables. Next we investigate the time evolution and the behavior of the correlation length near the CEP along n_B/s trajectories. In addition, we also discuss the consequences of the CEP for experimental results such as fluctuations and the kinetic freeze-out temperature. ^1 M. Stephanov, K. Rajagopal, and E. Shuryak, Phys. Rev. Lett. 81 (1998) 4816. ^2 B. Berdnikov and K. Rajagopal, Phys. Rev. D 61 (2000) 105017.

  9. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
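    The LSEK filter itself is not reproduced here; as a rough sketch of the iterative-filtering idea only (repeatedly subtracting a local low-pass average until the residual stabilizes), with a plain boxcar filter standing in for the paper's LSEK low-pass filter:

```python
import numpy as np

def moving_average(x, width):
    # Boxcar low-pass filter; the paper uses the LSEK filter here instead.
    return np.convolve(x, np.ones(width) / width, mode="same")

def extract_component(x, width, n_iter=50, tol=1e-6):
    # Iterative filtering: h <- h - lowpass(h), iterated until h stops changing.
    h = x.copy()
    for _ in range(n_iter):
        h_new = h - moving_average(h, width)
        if np.linalg.norm(h_new - h) < tol * np.linalg.norm(h):
            break
        h = h_new
    return h

# Toy signal: a fast oscillation riding on a slow one.
t = np.linspace(0.0, 1.0, 2000)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)

fast = extract_component(signal, width=25)   # first extracted component
slow = signal - fast                         # remainder carries the slow mode
print(np.std(fast), np.std(slow))
```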

  10. Evolution of Anthocyanin Biosynthesis in Maize Kernels: The Role of Regulatory and Enzymatic Loci

    PubMed Central

    Hanson, M. A.; Gaut, B. S.; Stec, A. O.; Fuerstenberg, S. I.; Goodman, M. M.; Coe, E. H.; Doebley, J. F.

    1996-01-01

    Understanding which genes contribute to evolutionary change and the nature of the alterations in them are fundamental challenges in evolution. We analyzed regulatory and enzymatic genes in the maize anthocyanin pathway as related to the evolution of anthocyanin-pigmented kernels in maize from the colorless kernels of its progenitor, teosinte. Genetic tests indicate that teosinte possesses functional alleles at all enzymatic loci. At two regulatory loci, most teosintes possess alleles that encode functional proteins, but ones that are not expressed during kernel development and are not capable of activating anthocyanin biosynthesis there. We investigated nucleotide polymorphism at one of the regulatory loci, c1. Several observations suggest that c1 has not evolved in a strictly neutral manner, including an exceptionally low level of polymorphism and a biased representation of haplotypes in maize. Curiously, sequence data show that most of our teosinte samples possess a promoter element necessary for the activation of the anthocyanin pathway during kernel development, although genetic tests indicate that teosinte c1 alleles are not active during kernel development. Our analyses suggest that the evolution of the purple kernels resulted from changes in cis-regulatory elements at the regulatory loci and not from changes in either regulatory protein function or the enzymatic loci. PMID:8807310

  11. Temperature perturbations evolution as a possible mechanism of exothermal reaction kernels formation in shock tubes

    NASA Astrophysics Data System (ADS)

    Drakon, A. V.; Kiverin, A. D.; Yakovenko, I. S.

    2016-11-01

    The basic question raised in the paper concerns the origins of exothermal reaction kernels and the mechanisms of detonation onset behind the reflected shock wave in shock-tube experiments. Using a conventional experimental technique, it is found that in a certain range of conditions behind the reflected shock a so-called “mild ignition” arises, which is characterized by detonation formation from a kernel distant from the end-wall. The results of 2-D and 3-D simulations of the flow evolution behind the incident and reflected shocks allow the following scenario of ignition kernel formation to be formulated. The initial stage, during and after the diaphragm rupture, is characterized by a set of non-steady gasdynamical processes. As a result, the flow behind the incident shock turns out to be saturated with temperature perturbations. Further evolution of these perturbations generates shear stresses in the flow, accompanied by intensification of the velocity and temperature perturbations. After reflection, the shock wave interacts with the formed kernels of higher temperature, and more pronounced kernels arise on the background of the reactivity profile determined by the moving reflected shock. The exothermal reaction starts inside such kernels and propagates into the ambient medium as a spontaneous ignition wave with a minimum initial speed equal to the reflected shock wave speed.

  12. Collinear splitting, parton evolution and the strange-quark asymmetry of the nucleon in NNLO QCD

    SciTech Connect

    Rodrigo, G.; Catani, S.; de Florian, D.; Vogelsang, W.

    2004-04-25

    We consider the collinear limit of QCD amplitudes at one-loop order, and their factorization properties directly in color space. These results apply to the multiple collinear limit of an arbitrary number of QCD partons, and are a basic ingredient in many higher-order computations. In particular, we discuss the triple collinear limit and its relation to flavor asymmetries in the QCD evolution of parton densities at three loops. As a phenomenological consequence of this new effect, and of the fact that the nucleon has non-vanishing quark valence densities, we study the perturbative generation of a strange-antistrange asymmetry s(x) − s̄(x) in the nucleon's sea.

  13. Remembrance of things past: non-equilibrium effects and the evolution of critical fluctuations near the QCD critical point

    NASA Astrophysics Data System (ADS)

    Mukherjee, Swagato; Venugopalan, Raju; Yin, Yi

    2016-12-01

    We report on recent progress in the study of the evolution of non-Gaussian cumulants of critical fluctuations. We explore the implications of non-equilibrium effects on the search for the QCD critical point.

  14. Tracking temporal evolution of nonlinear dynamics in hippocampus using time-varying volterra kernels.

    PubMed

    Chan, Rosa H M; Song, Dong; Berger, Theodore W

    2008-01-01

    Hippocampus and other parts of the cortex are not stationary, but change as a function of time and experience. The goal of this study is to apply adaptive modeling techniques to the tracking of multiple-input, multiple-output (MIMO) nonlinear dynamics underlying spike train transformations across brain subregions, e.g. CA3 and CA1 of the hippocampus. A stochastic state point process adaptive filter will be used to track the temporal evolutions of both feedforward and feedback kernels in the natural flow of multiple behavioral events.
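    For context, the feedforward part of such a model is a (discrete-time, here second-order) Volterra expansion of the output in terms of the input spike train $x$,

$$
y(t) \;=\; k_0 \;+\; \sum_{\tau=0}^{M} k_1(\tau)\,x(t-\tau)
\;+\; \sum_{\tau_1=0}^{M}\sum_{\tau_2=0}^{M} k_2(\tau_1,\tau_2)\,x(t-\tau_1)\,x(t-\tau_2),
$$

    and the adaptive point-process filter lets the kernel coefficients drift, $k_i \to k_i(t)$, so that nonstationarity of the CA3-to-CA1 transformation can be tracked over time.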

  15. Real time evolution of non-Gaussian cumulants in the QCD critical regime

    DOE PAGES

    Mukherjee, Swagato; Venugopalan, Raju; Yin, Yi

    2015-09-23

    In this study, we derive a coupled set of equations that describe the nonequilibrium evolution of cumulants of critical fluctuations for spacetime trajectories on the crossover side of the QCD phase diagram. In particular, novel expressions are obtained for the nonequilibrium evolution of non-Gaussian skewness and kurtosis cumulants. By utilizing a simple model of the spacetime evolution of a heavy-ion collision, we demonstrate that, depending on the relaxation rate of critical fluctuations, skewness and kurtosis can differ significantly in magnitude as well as in sign from equilibrium expectations. Memory effects are important and shown to persist even for trajectories that skirt the edge of the critical regime. We use phenomenologically motivated parametrizations of freeze-out curves and of the beam-energy dependence of the net baryon chemical potential to explore the implications of our model study for the critical-point search in heavy-ion collisions.
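    The coupled cumulant equations derived in the paper are not reproduced here; purely as a schematic of how memory effects of this kind arise, one can relax a single cumulant toward a time-dependent equilibrium value at a finite rate (all functions and numbers below are illustrative, not taken from the paper):

```python
import numpy as np

def evolve_cumulant(kappa_eq, gamma, tau, kappa0):
    """Relaxation-type toy model: d(kappa)/d(tau) = -gamma * (kappa - kappa_eq(tau))."""
    kappa = np.empty_like(tau)
    kappa[0] = kappa0
    for i in range(1, len(tau)):
        dt = tau[i] - tau[i - 1]
        kappa[i] = kappa[i - 1] - gamma * (kappa[i - 1] - kappa_eq(tau[i - 1])) * dt
    return kappa

# Made-up equilibrium curve that peaks while the trajectory skirts the critical regime.
kappa_eq = lambda tau: 1.0 + 5.0 * np.exp(-((tau - 5.0) / 1.0) ** 2)
tau = np.linspace(0.0, 10.0, 2000)

slow = evolve_cumulant(kappa_eq, gamma=0.3, tau=tau, kappa0=1.0)   # slow relaxation
fast = evolve_cumulant(kappa_eq, gamma=5.0, tau=tau, kappa0=1.0)   # fast relaxation
# With slow relaxation the cumulant lags behind and still "remembers" the peak long
# after the critical region has been left; with fast relaxation it tracks equilibrium.
print(slow[-1], fast[-1], kappa_eq(tau[-1]))
```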

  16. Real time evolution of non-Gaussian cumulants in the QCD critical regime

    SciTech Connect

    Mukherjee, Swagato; Venugopalan, Raju; Yin, Yi

    2015-09-23

    In this study, we derive a coupled set of equations that describe the nonequilibrium evolution of cumulants of critical fluctuations for spacetime trajectories on the crossover side of the QCD phase diagram. In particular, novel expressions are obtained for the nonequilibrium evolution of non-Gaussian skewness and kurtosis cumulants. By utilizing a simple model of the spacetime evolution of a heavy-ion collision, we demonstrate that, depending on the relaxation rate of critical fluctuations, skewness and kurtosis can differ significantly in magnitude as well as in sign from equilibrium expectations. Memory effects are important and shown to persist even for trajectories that skirt the edge of the critical regime. We use phenomenologically motivated parametrizations of freeze-out curves and of the beam-energy dependence of the net baryon chemical potential to explore the implications of our model study for the critical-point search in heavy-ion collisions.

  17. Markovian Monte Carlo program EvolFMC v.2 for solving QCD evolution equations

    NASA Astrophysics Data System (ADS)

    Jadach, S.; Płaczek, W.; Skrzypek, M.; Stokłosa, P.

    2010-02-01

    We present the program EvolFMC v.2 that solves the evolution equations in QCD for the parton momentum distributions by means of the Monte Carlo technique based on the Markovian process. The program solves the DGLAP-type evolution as well as modified-DGLAP ones. In both cases the evolution can be performed in the LO or NLO approximation. The quarks are treated as massless. The overall technical precision of the code has been established at 5×10. This way, for the first time ever, we demonstrate that with the Monte Carlo method one can solve the evolution equations with precision comparable to the other numerical methods.

    New version program summary
    Program title: EvolFMC v.2
    Catalogue identifier: AEFN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including binary test data, etc.: 66 456 (7407 lines of C++ code)
    No. of bytes in distributed program, including test data, etc.: 412 752
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC, Mac
    Operating system: Linux, Mac OS X
    RAM: Less than 256 MB
    Classification: 11.5
    External routines: ROOT (http://root.cern.ch/drupal/)
    Nature of problem: Solution of the QCD evolution equations for the parton momentum distributions of the DGLAP- and modified-DGLAP-type in the LO and NLO approximations.
    Solution method: Monte Carlo simulation of the Markovian process of a multiple emission of partons.
    Restrictions: Limited to the case of massless partons. Implemented in the LO and NLO approximations only. Weighted events only.
    Unusual features: Modified-DGLAP evolutions included up to the NLO level.
    Additional comments: Technical precision established at 5×10.
    Running time: For 10^6 events at 100 GeV: DGLAP NLO: 27 s; C-type modified DGLAP NLO: 150 s (MacBook Pro with Mac OS X v.10
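    The distributed program is written in C++ and is not reproduced here; purely to illustrate the Markovian idea (draw successive emission scales from a Sudakov-type no-emission probability and update the momentum fraction at each branching), a toy sketch with a deliberately simplified, constant branching density might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def markov_evolution(x0, t0, t_max, c=0.2):
    """Toy Markovian evolution of one parton.

    The no-emission (Sudakov-type) probability between scales t and t' is taken to
    be exp(-c * ln(t'/t)) with a constant toy density c; real DGLAP splitting
    functions, running alpha_s and flavour structure are deliberately left out.
    """
    x, t = x0, t0
    while True:
        t = t * np.exp(-np.log(rng.random()) / c)   # next emission scale
        if t > t_max:
            return x                                # evolution ends at t_max
        z = rng.uniform(0.1, 1.0)                   # toy momentum-fraction split
        x *= z

samples = [markov_evolution(x0=1.0, t0=1.0, t_max=100.0) for _ in range(10000)]
print("mean momentum fraction at t_max:", np.mean(samples))
```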

  18. Integrated model of multiple kernel learning and differential evolution for EUR/USD trading.

    PubMed

    Deng, Shangkun; Sakurai, Akito

    2014-01-01

    Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair, based on the relative strength index (RSI) combined with the MKL prediction. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources, and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence. Next, a combined trading signal is optimized by DE using the inputs from the prediction model and the technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.

  19. Understanding the large-distance behavior of transverse-momentum-dependent parton densities and the Collins-Soper evolution kernel

    DOE PAGES

    Collins, John; Rogers, Ted

    2015-04-01

    There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that, when taken to lower Q, is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.
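    Schematically, in Collins-style TMD factorization the object at issue is the kernel $\tilde K(b_T;\mu)$ of the Collins-Soper equation for the coordinate-space TMD distribution,

$$
\frac{\partial \ln \tilde f(x,b_T;\mu,\zeta)}{\partial \ln\sqrt{\zeta}} \;=\; \tilde K(b_T;\mu),
$$

    and the controversy concerns how $\tilde K$ behaves at large $b_T$, where perturbation theory is unreliable; the function A(bT) advocated in the abstract is a scheme- and scale-independent handle for comparing parameterizations of precisely this region.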

  20. Understanding the large-distance behavior of transverse-momentum-dependent parton densities and the Collins-Soper evolution kernel

    SciTech Connect

    Collins, John; Rogers, Ted

    2015-04-01

    There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that when taken to lower Q is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.

  1. Temporal Evolution and Spatial Distribution of White-light Flare Kernels in a Solar Flare

    NASA Astrophysics Data System (ADS)

    Kawate, T.; Ishii, T. T.; Nakatani, Y.; Ichimoto, K.; Asai, A.; Morita, S.; Masuda, S.

    2016-12-01

    On 2011 September 6, we observed an X2.1-class flare in continuum and Hα with a frame rate of about 30 Hz. After processing images of the event by using a speckle-masking image reconstruction, we identified white-light (WL) flare ribbons on opposite sides of the magnetic neutral line. We derive the light curve decay times of the WL flare kernels at each resolution element by assuming that the kernels consist of one or two components that decay exponentially, starting from the peak time. As a result, 42% of the pixels have two decay-time components with average decay times of 15.6 and 587 s, whereas the average decay time is 254 s for WL kernels with only one decay-time component. The peak intensities of the shorter decay-time component exhibit good spatial correlation with the WL intensity, whereas the peak intensities of the long decay-time components tend to be larger in the early phase of the flare at the inner part of the flare ribbons, close to the magnetic neutral line. The average intensity of the longer decay-time components is 1.78 times higher than that of the shorter decay-time components. If the shorter decay time is determined by either the chromospheric cooling time or the nonthermal ionization timescale and the longer decay time is attributed to the coronal cooling time, this result suggests that WL sources from both regions appear in 42% of the WL kernels and that WL emission of the coronal origin is sometimes stronger than that of chromospheric origin.
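    As a hedged sketch of the light-curve analysis described above (fitting each pixel's post-peak curve with one or two exponentially decaying components), using scipy's curve_fit on synthetic placeholder data with the decay times quoted in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_exp(t, a1, tau1):
    return a1 * np.exp(-t / tau1)

def two_exp(t, a1, tau1, a2, tau2):
    # Two decay-time components, as found for 42% of the WL kernels.
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic post-peak light curve at ~30 Hz with short and long components.
t = np.arange(0.0, 1800.0, 1.0 / 30.0)
rng = np.random.default_rng(2)
flux = two_exp(t, 1.0, 15.6, 0.6, 587.0) + 0.01 * rng.normal(size=t.size)

p1, _ = curve_fit(one_exp, t, flux, p0=[1.0, 100.0])
p2, _ = curve_fit(two_exp, t, flux, p0=[1.0, 10.0, 0.5, 500.0])
print("single-component decay time [s]:", p1[1])
print("two-component decay times  [s]:", p2[1], p2[3])
```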

  2. Non-perturbative scale evolution of four-fermion operators in two-flavour QCD

    NASA Astrophysics Data System (ADS)

    Herdoiza, Gregorio

    2006-12-01

    We apply finite-size recursion techniques based on the Schrödinger functional formalism to determine the renormalization group running of four-fermion operators which appear in the ΔS = 2 effective weak Hamiltonian of the Standard Model. Our calculations are done using O(a) improved Wilson fermions with Nf = 2 dynamical flavours. Preliminary results are presented for the four-fermion operator which determines the B_K parameter in tmQCD.

  3. Small-x Evolution of Structure Functions in the Next-to-Leading Order

    SciTech Connect

    Chirilli, Giovanni A.

    2009-12-17

    The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the nonlinear Balitsky-Kovchegov (BK) equation. The NLO corrections define the scale of the running-coupling constant in the BK equation, and in QCD its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the 'composite dipole operator' with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation. In QCD, the NLO kernel for the composite operators resolves into a sum of the conformal part and the running-coupling part.

  4. Small-x Evolution of Structure Functions in the Next-to-Leading Order

    SciTech Connect

    Giovanni Antonio Chirilli

    2009-12-01

    The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the nonlinear Balitsky-Kovchegov (BK) equation. The NLO corrections define the scale of the running coupling constant in the BK equation, and in QCD its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the "composite dipole operator" with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation. In QCD, the NLO kernel for the composite operators resolves into a sum of the conformal part and the running-coupling part.

  5. Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2016-01-01

    We study the transverse-momentum-dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering processes. All the relevant coefficients are calculated up to the next-to-leading-logarithmic-order accuracy. By applying the TMD evolution at the approximate next-to-leading-logarithmic order in the Collins-Soper-Sterman formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back dihadron productions in e+e- annihilations measured by BELLE and BABAR collaborations and semi-inclusive hadron production in deep inelastic scattering data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation, and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. We make detailed predictions for future experiments and discuss their impact.

  6. Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution

    SciTech Connect

    Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2016-01-13

    In this paper, we study the transverse momentum dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering (SIDIS) processes. All the relevant coefficients are calculated up to the next-to-leading logarithmic (NLL) order accuracy. By applying the TMD evolution at the approximate NLL order in the Collins-Soper-Sterman (CSS) formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back di-hadron productions in e+e- annihilations measured by BELLE and BABAR Collaborations and SIDIS data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results, and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. Finally, we give predictions and discuss the impact of future experiments.

  7. Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution

    DOE PAGES

    Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; ...

    2016-01-13

    In this paper, we study the transverse momentum dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering (SIDIS) processes. All the relevant coefficients are calculated up to the next-to-leading logarithmic (NLL) order accuracy. By applying the TMD evolution at the approximate NLL order in the Collins-Soper-Sterman (CSS) formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back di-hadron productions in e+e- annihilations measured by BELLE and BABAR Collaborations and SIDIS data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results, and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. Finally, we give predictions and discuss the impact of future experiments.

  8. Small-x evolution of structure functions in the next-to-leading order

    SciTech Connect

    Giovanni A. Chirilli

    2010-01-01

    The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear Balitsky-Kovchegov (BK) equation. In QCD the NLO kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory, then we define the "composite dipole operator", and the resulting Mobius invariant kernel for this operator agrees with the forward NLO BFKL calculation.

  9. The Chroma Software System for Lattice QCD

    SciTech Connect

    Robert Edwards; Balint Joo

    2004-06-01

    We describe aspects of the Chroma software system for lattice QCD calculations. Chroma is an open source C++ based software system developed using the software infrastructure of the US SciDAC initiative. Chroma interfaces with output from the BAGEL assembly generator for optimized lattice fermion kernels on some architectures. It can be run on workstations, clusters and the QCDOC supercomputer.

  10. QCD at nonzero chemical potential: Recent progress on the lattice

    SciTech Connect

    Aarts, Gert; Jäger, Benjamin; Attanasio, Felipe; Seiler, Erhard; Sexty, Dénes; Stamatescu, Ion-Olimpiu

    2016-01-22

    We summarise recent progress in simulating QCD at nonzero baryon density using complex Langevin dynamics. After a brief outline of the main idea, we discuss gauge cooling as a means to control the evolution. Subsequently we present a status report for heavy dense QCD and its phase structure, full QCD with staggered quarks, and full QCD with Wilson quarks, both directly and using the hopping parameter expansion to all orders.

  11. Weighted Bergman kernels and virtual Bergman kernels

    NASA Astrophysics Data System (ADS)

    Roos, Guy

    2005-12-01

    We introduce the notion of "virtual Bergman kernel" and apply it to the computation of the Bergman kernel of "domains inflated by Hermitian balls", in particular when the base domain is a bounded symmetric domain.

  12. Recent progress on the understanding of the medium-induced jet evolution and energy loss in pQCD

    NASA Astrophysics Data System (ADS)

    Apolinário, Liliana

    2017-03-01

    Motivated by the striking modifications of jets observed both at RHIC and the LHC, significant progress towards the understanding of jet dynamics within the QGP has occurred over the last few years. In this talk, I review the recent theoretical developments in the study of medium-induced jet evolution and energy loss within a perturbative framework. The main mechanisms of energy loss and broadening will first be addressed, with a focus on leading-particle calculations beyond the eikonal approximation. Then, I will provide an overview of the modifications of the interference pattern between the different parton emitters that build up the parton shower when propagating through an extended coloured medium. I will show that the interplay between color coherence/decoherence that arises from such effects is the main mechanism for the modification of the jet core angular structure. Finally, I discuss the possibility of a probabilistic picture of the parton shower evolution in the limit of a very dense or infinite medium.

  13. Random walk through recent CDF QCD results

    SciTech Connect

    C. Mesropian

    2003-04-09

    We present recent results on jet fragmentation, jet evolution in jet and minimum bias events, and underlying event studies. The results presented in this talk address significant questions relevant to QCD and, in particular, to jet studies. One topic discussed is jet fragmentation and the possibility of describing it down to very small momentum scales in terms of pQCD. Another topic is the studies of underlying event energy originating from fragmentation of partons not associated with the hard scattering.

  14. Conformal symmetry of the Lange-Neubert evolution equation

    NASA Astrophysics Data System (ADS)

    Braun, V. M.; Manashov, A. N.

    2014-04-01

    The Lange-Neubert evolution equation describes the scale dependence of the wave function of a meson built of an infinitely heavy quark and light antiquark at light-like separations, which is the hydrogen atom problem of QCD. It has numerous applications to the studies of B-meson decays. We show that the kernel of this equation can be written in a remarkably compact form, as a logarithm of the generator of special conformal transformation in the light-ray direction. This representation allows one to study solutions of this equation in a very simple and mathematically consistent manner. Generalizing this result, we show that all heavy-light evolution kernels that appear in the renormalization of higher-twist B-meson distribution amplitudes can be written in the same form.

  15. Modeling QCD for Hadron Physics

    SciTech Connect

    Tandy, P. C.

    2011-10-24

    We review the approach to modeling soft hadron physics observables based on the Dyson-Schwinger equations of QCD. The focus is on light quark mesons and in particular the pseudoscalar and vector ground states, their decays and electromagnetic couplings. We detail the wide variety of observables that can be correlated by a ladder-rainbow kernel with one infrared parameter fixed to the chiral quark condensate. A recently proposed novel perspective in which the quark condensate is contained within hadrons and not the vacuum is mentioned. The valence quark parton distributions, in the pion and kaon, as measured in the Drell Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.

  16. Semisupervised kernel matrix learning by kernel propagation.

    PubMed

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples on which just a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves some space for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML into a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the comprehensive performance in SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) through solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.

  17. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.

  18. Nuclear reactions from lattice QCD

    DOE PAGES

    Briceño, Raúl A.; Davoudi, Zohreh; Luu, Thomas C.

    2015-01-13

    In this study, one of the overarching goals of nuclear physics is to rigorously compute properties of hadronic systems directly from the fundamental theory of strong interactions, Quantum Chromodynamics (QCD). In particular, the hope is to perform reliable calculations of nuclear reactions which will impact our understanding of environments that occur during big bang nucleosynthesis, the evolution of stars and supernovae, and within nuclear reactors and high energy/density facilities. Such calculations, being truly ab initio, would include all two-nucleon and three-nucleon (and higher) interactions in a consistent manner. Currently, lattice QCD provides the only reliable option for performing calculations of some of the low-energy hadronic observables. With the aim of bridging the gap between lattice QCD and nuclear many-body physics, the Institute for Nuclear Theory held a workshop on Nuclear Reactions from Lattice QCD in March 2013. In this review article, we report on the topics discussed in this workshop and the path planned to move forward in the upcoming years.

  19. Nuclear reactions from lattice QCD

    SciTech Connect

    Briceño, Raúl A.; Davoudi, Zohreh; Luu, Thomas C.

    2015-01-13

    In this study, one of the overarching goals of nuclear physics is to rigorously compute properties of hadronic systems directly from the fundamental theory of strong interactions, Quantum Chromodynamics (QCD). In particular, the hope is to perform reliable calculations of nuclear reactions which will impact our understanding of environments that occur during big bang nucleosynthesis, the evolution of stars and supernovae, and within nuclear reactors and high energy/density facilities. Such calculations, being truly ab initio, would include all two-nucleon and three-nucleon (and higher) interactions in a consistent manner. Currently, lattice QCD provides the only reliable option for performing calculations of some of the low-energy hadronic observables. With the aim of bridging the gap between lattice QCD and nuclear many-body physics, the Institute for Nuclear Theory held a workshop on Nuclear Reactions from Lattice QCD in March 2013. In this review article, we report on the topics discussed in this workshop and the path planned to move forward in the upcoming years.

  20. Small-x Evolution in the Next-to-Leading Order

    SciTech Connect

    Ian Balitsky

    2009-10-01

    The high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear BK equation. The NLO corrections define the scale of the running-coupling constant in the BK equation and in QCD, its kernel has both conformal and non-conformal parts. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N=4 SYM theory, then we define the 'composite dipole operator' with the rapidity cutoff preserving conformal invariance, and the resulting Möbius invariant kernel for this operator agrees with the forward NLO BFKL calculation.

  1. Renormalization-group evolution of the B-meson light-cone distribution amplitude.

    PubMed

    Lange, Björn O; Neubert, Matthias

    2003-09-05

    An integro-differential equation governing the evolution of the leading-order B-meson light-cone distribution amplitude is derived. The anomalous dimension in this equation contains a logarithm of the renormalization scale, whose coefficient is identified with the cusp anomalous dimension of Wilson loops. The exact solution of the evolution equation is obtained, from which the asymptotic behavior of the distribution amplitude is derived. These results can be used to resum Sudakov logarithms entering the hard-scattering kernels in QCD factorization theorems for exclusive B decays.
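    Schematically, the structure described here is a renormalization-group equation of the form

$$
\mu\,\frac{d}{d\mu}\,\phi_+(\omega,\mu) \;=\;
-\Big[\Gamma_{\rm cusp}(\alpha_s)\,\ln\frac{\mu}{\omega} + \gamma(\alpha_s)\Big]\,\phi_+(\omega,\mu)
\;+\;\int_0^\infty d\omega'\,\gamma_+(\omega,\omega';\alpha_s)\,\phi_+(\omega',\mu),
$$

    with the logarithm of the renormalization scale multiplied by the cusp anomalous dimension $\Gamma_{\rm cusp}$, a local anomalous dimension $\gamma$, and a non-local kernel $\gamma_+$ mixing different light-cone momenta $\omega$; the explicit kernel is given in the paper and is not reproduced here.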

  2. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
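    As a minimal sketch of the KECA ranking step described above, assuming the usual Renyi-entropy criterion (sorting kernel eigenpairs by lambda_i * (1^T e_i)^2 rather than by variance); the OKECA extras, i.e. the ICA-style rotation and the maximum-likelihood length-scale selection, are not included:

```python
import numpy as np

def keca_features(X, gamma, n_components=2):
    # Gaussian kernel matrix.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    # Eigendecomposition of the symmetric kernel matrix (ascending eigenvalues).
    lam, E = np.linalg.eigh(K)

    # Entropy contribution of each eigenpair: lambda_i * (1^T e_i)^2,
    # i.e. the terms of the kernel-based Renyi entropy estimate.
    ones = np.ones(K.shape[0])
    contrib = lam * (E.T @ ones) ** 2

    # KECA keeps the eigenpairs with the largest entropy contribution;
    # kernel PCA would instead keep those with the largest eigenvalues.
    idx = np.argsort(contrib)[::-1][:n_components]
    return E[:, idx] * np.sqrt(np.abs(lam[idx]))

X = np.random.default_rng(3).normal(size=(200, 4))
print(keca_features(X, gamma=0.5).shape)   # (200, 2)
```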

  3. QCD dynamics in mesons at soft and hard scales

    SciTech Connect

    Nguyen, T.; Souchlas, N. A.; Tandy, P. C.

    2010-07-27

    Using a ladder-rainbow kernel previously established for the soft scale of light quark hadrons, we explore, within a Dyson-Schwinger approach, phenomena that mix soft and hard scales of QCD. The difference between vector and axial vector current correlators is examined to estimate the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD. The valence quark distributions, in the pion and kaon, defined in deep inelastic scattering, and measured in the Drell Yan process, are investigated with the same ladder-rainbow truncation of the Dyson-Schwinger and Bethe-Salpeter equations.

  4. Scale of dark QCD

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Schwaller, Pedro

    2014-03-01

    Most of the mass of ordinary matter has its origin from quantum chromodynamics (QCD). A similar strong dynamics, dark QCD, could exist to explain the mass origin of dark matter. Using infrared fixed points of the two gauge couplings, we provide a dynamical mechanism that relates the dark QCD confinement scale to our QCD scale, and hence provides an explanation for comparable dark baryon and proton masses. Together with a mechanism that generates equal amounts of dark baryon and ordinary baryon asymmetries in the early Universe, the similarity of dark matter and ordinary matter energy densities can be naturally explained. For a large class of gauge group representations, the particles charged under both QCD and dark QCD, necessary ingredients for generating the infrared fixed points, are found to have masses at 1-2 TeV, which sets the scale for dark matter direct detection and novel collider signatures involving visible and dark jets.

  5. QCD In Extreme Conditions

    NASA Astrophysics Data System (ADS)

    Wilczek, Frank

    Introduction
    Symmetry and the Phenomena of QCD -- Apparent and Actual Symmetries; Asymptotic Freedom; Confinement; Chiral Symmetry Breaking; Chiral Anomalies and Instantons
    High Temperature QCD: Asymptotic Properties -- Significance of High Temperature QCD; Numerical Indications for Quasi-Free Behavior; Ideas About Quark-Gluon Plasma; Screening Versus Confinement; Models of Chiral Symmetry Breaking; More Refined Numerical Experiments
    High-Temperature QCD: Phase Transitions -- Yoga of Phase Transitions and Order Parameters; Application to Glue Theories; Application to Chiral Transitions; Close Up on Two Flavors; A Genuine Critical Point! (?)
    High-Density QCD: Methods -- Hopes, Doubts, and Fruition; Another Renormalization Group; Pairing Theory; Taming the Magnetic Singularity
    High-Density QCD: Color-Flavor Locking and Quark-Hadron Continuity -- Gauge Symmetry (Non)Breaking; Symmetry Accounting; Elementary Excitations; A Modified Photon; Quark-Hadron Continuity; Remembrance of Things Past; More Quarks; Fewer Quarks and Reality

  6. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: "Current status of user-level sparse BLAS"; "Current status of the sparse BLAS toolkit"; and "Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit".

  7. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  8. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  9. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
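    The KAPA recursions themselves are not reproduced here; as a reference point, the kernel least-mean-square (KLMS) update that KAPA builds on can be sketched in a few lines (the Gaussian kernel, step size and toy data are arbitrary choices):

```python
import numpy as np

def gauss(x, c, gamma=1.0):
    return np.exp(-gamma * np.sum((x - c) ** 2))

def klms(X, d, eta=0.2, gamma=1.0):
    """Kernel LMS: every sample becomes a center with coefficient eta * error."""
    centers, alphas, errors = [], [], []
    for x, target in zip(X, d):
        y = sum(a * gauss(x, c, gamma) for a, c in zip(alphas, centers))
        e = target - y                  # a-priori prediction error
        centers.append(x)
        alphas.append(eta * e)
        errors.append(e)
    return centers, alphas, errors

# Toy nonlinear regression: d_n = sin(x_n) + noise.
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(500, 1))
d = np.sin(X[:, 0]) + 0.05 * rng.normal(size=500)

_, _, errors = klms(X, d)
print("MSE, first 50 samples:", np.mean(np.square(errors[:50])))
print("MSE, last 50 samples: ", np.mean(np.square(errors[-50:])))
```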

  10. QCD for Postgraduates (3/5)

    ScienceCinema

    None

    2016-07-12

    Modern QCD - Lecture 3 We will introduce processes with initial-state hadrons and discuss parton distributions, sum rules, as well as the need for a factorization scale once radiative corrections are taken into account. We will then discuss the DGLAP equation, the evolution of parton densities, as well as ways in which parton densities are extracted from data.

  11. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
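
    The weighted-ensemble idea at the core of the method is easy to sketch: embed the history with delay coordinates, weight past analogs by a similarity kernel evaluated against the current initial data, and average their time-shifted successors. The sketch below uses a plain Gaussian kernel on delay-embedded scalar observations; the dynamics-adapted kernels of the paper replace this with flow-dependent similarity measures, and the bandwidth and embedding length here are illustrative choices.

```python
import numpy as np

def delay_embed(series, q):
    """Takens-style delay embedding: row t holds (x_{t+q-1}, ..., x_t)."""
    n = len(series) - q + 1
    return np.stack([series[t:t + q][::-1] for t in range(n)])

def analog_forecast(history, query, lead, q=5, epsilon=0.5):
    """Kernel-weighted ensemble analog forecast 'lead' steps ahead of 'query'."""
    emb = delay_embed(history, q)
    usable = emb[: len(emb) - lead]                  # analogs whose successors exist
    dists2 = np.sum((usable - query) ** 2, axis=1)
    w = np.exp(-dists2 / epsilon)                    # Gaussian similarity kernel
    w /= w.sum()
    successors = history[q - 1 + lead : q - 1 + lead + len(usable)]
    return float(np.dot(w, successors))

# Toy usage: forecast a noisy periodic signal 10 steps ahead.
rng = np.random.default_rng(1)
x = np.sin(0.2 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)
query = delay_embed(x, 5)[-11]                       # embedded state 10 steps before the end
print("forecast:", analog_forecast(x[:-10], query, lead=10), " truth:", x[-1])
```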

  12. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)

  13. Continuous Advances in QCD 2008

    NASA Astrophysics Data System (ADS)

    Peloso, Marco M.

    2008-12-01

    1. High-order calculations in QCD and in general gauge theories. NLO evolution of color dipoles / I. Balitsky. Recent perturbative results on heavy quark decays / J. H. Piclum, M. Dowling, A. Pak. Leading and non-leading singularities in gauge theory hard scattering / G. Sterman. The space-cone gauge, Lorentz invariance and on-shell recursion for one-loop Yang-Mills amplitudes / D. Vaman, Y.-P. Yao -- 2. Heavy flavor physics. Exotic cc¯ mesons / E. Braaten. Search for new physics in B[symbol]-mixing / A. J. Lenz. Implications of D[symbol]-D[symbol] mixing for new physics / A. A. Petrov. Precise determinations of the charm quark mass / M. Steinhauser -- 3. Quark-gluon dynamics at high density and/or high temperature. Crystalline condensate in the chiral Gross-Neveu model / G. V. Dunne, G. Basar. The strong coupling constant at low and high energies / J. H. Kühn. Quarkyonic matter and the phase diagram of QCD / L. McLerran. Statistical QCD with non-positive measure / J. C. Osborn, K. Splittorff, J. J. M. Verbaarschot. From equilibrium to transport properties of strongly correlated Fermi liquids / T. Schäfer. Lessons from random matrix theory for QCD at finite density / K. Splittorff, J. J. M. Verbaarschot -- 4. Methods and models of holographic correspondence. Soft-wall dynamics in AdS/QCD / B. Batell. Holographic QCD / N. Evans, E. Threlfall. QCD glueball sum rules and vacuum topology / H. Forkel. The pion form factor in AdS/QCD / H. J. Kwee, R. F. Lebed. The fast life of holographic mesons / R. C. Myers, A. Sinha. Properties of Baryons from D-branes and instantons / S. Sugimoto. The master space of N = 1 quiver gauge theories: counting BPS operators / A. Zaffaroni -- 5. Topological field configurations. Skyrmions in theories with massless adjoint quarks / R. Auzzi. Domain walls, localization and confinement: what binds strings inside walls / S. Bolognesi. Static interactions of non-abelian vortices / M. Eto. Vortices which do not abelianize dynamically: semi

  14. Multiple collaborative kernel tracking.

    PubMed

    Fan, Zhimin; Yang, Ming; Wu, Ying

    2007-07-01

    Those motion parameters that cannot be recovered from image measurements are unobservable in the visual dynamic system. This paper studies this important issue of singularity in the context of kernel-based tracking and presents a novel approach that is based on a motion field representation which employs redundant but sparsely correlated local motion parameters instead of compact but uncorrelated global ones. This approach makes it easy to design fully observable kernel-based motion estimators. This paper shows that these high-dimensional motion fields can be estimated efficiently by the collaboration among a set of simpler local kernel-based motion estimators, which makes the new approach very practical.

  15. QCD results at CDF

    SciTech Connect

    Norniella, Olga; /Barcelona, IFAE

    2005-01-01

    Recent QCD measurements from the CDF collaboration at the Tevatron are presented, together with future prospects as the luminosity increases. The measured inclusive jet cross section is compared to pQCD NLO predictions. Precise measurements on jet shapes and hadronic energy flows are compared to different phenomenological models that describe gluon emissions and the underlying event in hadron-hadron interactions.

  16. Wilson loops and QCD/string scattering amplitudes

    SciTech Connect

    Makeenko, Yuri; Olesen, Poul

    2009-07-15

    We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills theory case.

  17. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  18. Precision QCD measurements in DIS at HERA

    NASA Astrophysics Data System (ADS)

    Britzger, Daniel

    2016-08-01

    New and recent results on QCD measurements from the H1 and ZEUS experiments at the HERA ep collider are reviewed. The final results on the combined deep-inelastic neutral and charged current cross-sections are presented and their role in the extractions of parton distribution functions (PDFs) is studied. The PDF fits give insight into the compatibility of QCD evolution and heavy flavor schemes with the data as a function of kinematic variables such as the scale Q2. Measurements of jet production cross-sections in ep collisions provide direct probes of QCD, and extractions of the strong coupling constant are performed. Charm and beauty cross-section measurements are used for the determination of the heavy quark masses. Their role in PDF fits is investigated. In the regime of diffractive DIS and photoproduction, dijet and prompt photon production cross-sections provide insights into the process of factorization and the nature of the diffractive exchange.

  19. Parametric kernel-driven active contours for image segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Qiongzhi; Fang, Jiangxiong

    2012-10-01

    We investigated a parametric kernel-driven active contour (PKAC) model, which implicitly combines kernel mapping with a piecewise-constant description of the image data via a kernel function. The proposed model consists of a curve evolution functional with three terms: global kernel-driven and local kernel-driven terms, which evaluate the deviation of the mapped image data within each region from the piecewise constant model, and a regularization term expressed as the length of the evolution curves. Through the local kernel-driven term, the proposed model can effectively segment images with intensity inhomogeneity by incorporating local image information. By balancing the weight between the global kernel-driven term and the local kernel-driven term, the proposed model can segment images with either intensity homogeneity or intensity inhomogeneity. To ensure the smoothness of the level set function and reduce the computational cost, the distance regularizing term is applied to penalize the deviation of the level set function and eliminate the requirement of re-initialization. Compared with the local image fitting model and local binary fitting model, experimental results show the advantages of the proposed method in terms of computational efficiency and accuracy.

  20. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  1. Lattice QCD in rotating frames.

    PubMed

    Yamamoto, Arata; Hirono, Yuji

    2013-08-23

    We formulate lattice QCD in rotating frames to study the physics of QCD matter under rotation. We construct the lattice QCD action with the rotational metric and apply it to the Monte Carlo simulation. As the first application, we calculate the angular momenta of gluons and quarks in the rotating QCD vacuum. This new framework is useful to analyze various rotation-related phenomena in QCD.

  2. Lattice QCD for nuclei

    NASA Astrophysics Data System (ADS)

    Beane, Silas

    2016-09-01

    Over the last several decades, theoretical nuclear physics has been evolving from a very-successful phenomenology of the properties of nuclei, to a first-principles derivation of the properties of visible matter in the Universe from the known underlying theories of Quantum Chromodynamics (QCD) and Electrodynamics. Many nuclear properties have now been calculated using lattice QCD, a method for treating QCD numerically with large computers. In this talk, some of the most recent results in this frontier area of nuclear theory will be reviewed.

  3. Light-Front Holographic QCD

    SciTech Connect

    Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U.

    2012-02-16

    -front QCD Hamiltonian 'Light-Front Holography'. Light-Front Holography is in fact one of the most remarkable features of the AdS/CFT correspondence. The Hamiltonian equation of motion in the light-front (LF) is frame independent and has a structure similar to eigenmode equations in AdS space. This makes a direct connection of QCD with AdS/CFT methods possible. Remarkably, the AdS equations correspond to the kinetic energy terms of the partons inside a hadron, whereas the interaction terms build confinement and correspond to the truncation of AdS space in an effective dual gravity approximation. One can also study the gauge/gravity duality starting from the bound-state structure of hadrons in QCD quantized in the light-front. The LF Lorentz-invariant Hamiltonian equation for the relativistic bound-state system is $P_\mu P^\mu|\psi(P)\rangle = (P^+P^- - P_\perp^2)|\psi(P)\rangle = M^2|\psi(P)\rangle$, $P^\pm = P^0 \pm P^3$, where the LF time evolution operator $P^-$ is determined canonically from the QCD Lagrangian. To a first semiclassical approximation, where quantum loops and quark masses are not included, this leads to a LF Hamiltonian equation which describes the bound-state dynamics of light hadrons in terms of an invariant impact variable $\zeta$ which measures the separation of the partons within the hadron at equal light-front time $\tau = x^0 + x^3$. This allows us to identify the holographic variable z in AdS space with an impact variable $\zeta$. The resulting Lorentz-invariant Schroedinger equation for general spin incorporates color confinement and is systematically improvable. Light-front holographic methods were originally introduced by matching the electromagnetic current matrix elements in AdS space with the corresponding expression using LF theory in physical space time. It was also shown that one obtains identical holographic mapping using the matrix elements of the energy-momentum tensor by perturbing

  4. Resonances in QCD

    SciTech Connect

    Lutz, Matthias F. M.; Lange, Jens Sören; Pennington, Michael; Bettoni, Diego; Brambilla, Nora; Crede, Volker; Eidelman, Simon; Gillitzer, Albrecht; Gradl, Wolfgang; Lang, Christian B.; Metag, Volker; Nakano, Takashi; Nieves, Juan; Neubert, Sebastian; Oka, Makoto; Olsen, Stephen L.; Pappagallo, Marco; Paul, Stephan; Pelizäus, Marc; Pilloni, Alessandro; Prencipe, Elisabetta; Ritman, Jim; Ryan, Sinead; Thoma, Ulrike; Uwer, Ulrich; Weise, Wolfram

    2016-04-01

    We report on the EMMI Rapid Reaction Task Force meeting 'Resonances in QCD', which took place at GSI October 12-14, 2015 (Fig. 1). A group of 26 people met to discuss the physics of resonances in QCD. The aim of the meeting was defined by the following three key questions: what is needed to understand the physics of resonances in QCD? Where does QCD lead us to expect resonances with exotic quantum numbers? And what experimental efforts are required to arrive at a coherent picture? For light mesons and baryons only those with up, down and strange quark content were considered. For heavy-light and heavy-heavy meson systems, those with charm quarks were the focus. This document summarizes the discussions by the participants, which in turn led to the coherent conclusions we present here.

  5. Soft QCD at Tevatron

    SciTech Connect

    Rangel, Murilo; /Orsay, LAL

    2010-06-01

    Experimental studies of soft Quantum Chromodynamics (QCD) at Tevatron are reported in this note. Results on inclusive inelastic interactions, underlying events, double parton interaction and exclusive diffractive production and their implications to the Large Hadron Collider (LHC) physics are discussed.

  6. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, the knowledge itself is only half the value; the other half comes from knowing how to use it in the development of tools. The kernel contains vast amounts of useful information that can be exploited. This paper discusses the intricacies of developing utilities that use kernel information, presents algorithms, logic, and code for accessing it, and provides code segments that demonstrate how to locate and read kernel structures. Types of applications that can make use of kernel information are also discussed.

  7. The QCD running coupling

    NASA Astrophysics Data System (ADS)

    Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.

    2016-09-01

    We review the present theoretical and empirical knowledge for αs, the fundamental coupling underlying the interactions of quarks and gluons in Quantum Chromodynamics (QCD). The dependence of αs(Q2) on momentum transfer Q encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We review constraints on αs(Q2) at high Q2, as predicted by perturbative QCD, and its analytic behavior at small Q2, based on models of nonperturbative dynamics. In the introductory part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss the behavior of αs(Q2) in the high momentum transfer domain of QCD. We review how αs is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization-scale ambiguity. We also report recent significant measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality", which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of theoretical conventions such as the renormalization scheme. In the last part of the review, we discuss the challenge of understanding the analytic behavior of αs(Q2) in the low momentum transfer domain. We survey various theoretical models for the nonperturbative strongly coupled regime, such as the light-front holographic approach to QCD. This new framework predicts the form of the quark-confinement potential underlying hadron spectroscopy and
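
    As a worked reminder of why the coupling runs: at one loop the renormalization group equation dα_s/d ln Q² = -b₀ α_s² integrates to the standard expression below, so α_s falls logarithmically at large Q² (asymptotic freedom) and grows toward small Q², which is where the nonperturbative issues surveyed in this review begin.

```latex
% One-loop running of the QCD coupling with n_f active quark flavors
\alpha_s(Q^2) = \frac{\alpha_s(\mu^2)}{1 + b_0\,\alpha_s(\mu^2)\,\ln(Q^2/\mu^2)},
\qquad b_0 = \frac{33 - 2 n_f}{12\pi}
```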

  8. QCD (&) event generators

    SciTech Connect

    Skands, Peter Z.; /Fermilab

    2005-07-01

    Recent developments in QCD phenomenology have spurred on several improved approaches to Monte Carlo event generation, relative to the post-LEP state of the art. In this brief review, the emphasis is placed on approaches for (1) consistently merging fixed-order matrix element calculations with parton shower descriptions of QCD radiation, (2) improving the parton shower algorithms themselves, and (3) improving the description of the underlying event in hadron collisions.

  9. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in examination of process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.
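
    The paper's technique is reading structures directly out of kernel memory, which is inherently platform-specific. As a hedged, modern point of comparison only (not the approach described above), on Linux the same per-process accounting that top needs is exported by the kernel through the /proc pseudo-filesystem:

```python
import os

def process_snapshot():
    """Rough Linux analogue of top's data gathering: parse /proc/<pid>/stat.
    Note: the naive split() misparses command names containing spaces."""
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/stat") as f:
                fields = f.read().split()
        except OSError:
            continue  # the process exited while we were scanning
        comm = fields[1].strip("()")
        utime, stime = int(fields[13]), int(fields[14])  # CPU ticks in user/kernel mode
        rss_pages = int(fields[23])                      # resident set size, in pages
        procs.append((pid, comm, utime + stime, rss_pages))
    return sorted(procs, key=lambda p: p[2], reverse=True)

for pid, comm, ticks, rss in process_snapshot()[:10]:
    print(f"{pid:>7} {comm:<20} cpu_ticks={ticks:<10} rss_pages={rss}")
```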

  10. FOREWORD: Extreme QCD 2012 (xQCD)

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Bazavov, Alexei; Liu, Keh-Fei

    2013-04-01

    The Extreme QCD 2012 conference, held at the George Washington University in August 2012, celebrated the 10th event in the series. It has been held annually since 2003 at different locations: San Carlos (2011), Bad Honnef (2010), Seoul (2009), Raleigh (2008), Rome (2007), Brookhaven (2006), Swansea (2005), Argonne (2004), and Nara (2003). As usual, it was a very productive and inspiring meeting that brought together experts in the field of finite-temperature QCD, both theoretical and experimental. On the experimental side, we heard about recent results from major experiments, such as PHENIX and STAR at Brookhaven National Laboratory, ALICE and CMS at CERN, and also about the constraints on the QCD phase diagram coming from astronomical observations of one of the largest laboratories one can imagine, neutron stars. The theoretical contributions covered a wide range of topics, including QCD thermodynamics at zero and finite chemical potential, new ideas to overcome the sign problem in the latter case, fluctuations of conserved charges and how they allow one to connect calculations in lattice QCD with experimentally measured quantities, finite-temperature behavior of theories with many flavors of fermions, properties and the fate of heavy quarkonium states in the quark-gluon plasma, and many others. The participants took the time to write up and revise their contributions and submit them for publication in these proceedings. Thanks to their efforts, we have now a good record of the ideas presented and discussed during the workshop. We hope that this will serve both as a reminder and as a reference for the participants and for other researchers interested in the physics of nuclear matter at high temperatures and density. To preserve the atmosphere of the event the contributions are ordered in the same way as the talks at the conference. We are honored to have helped organize the 10th meeting in this series, a milestone that reflects the lasting interest in this

  11. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  12. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  13. Ultrahigh energy neutrinos and nonlinear QCD dynamics

    SciTech Connect

    Machado, Magno V.T.

    2004-09-01

    The ultrahigh energy neutrino-nucleon cross sections are computed taking into account different phenomenological implementations of the nonlinear QCD dynamics. Based on the color dipole framework, the results for the saturation model supplemented by the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution as well as for the Balitskii-Fadin-Kuraev-Lipatov (BFKL) formalism in the geometric scaling regime are presented. They are contrasted with recent calculations using next-to-leading order DGLAP and unified BFKL-DGLAP formalisms.
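
    As a concrete example of the color-dipole saturation models referred to here, the widely used Golec-Biernat-Wüsthoff parametrization of the dipole cross section takes the form below (r is the dipole size, x the Bjorken variable; σ₀, x₀ and λ are fitted to DIS data, and the variants used in this work supplement such a model with DGLAP evolution):

```latex
% GBW saturation model for the dipole-proton cross section
\sigma_{\rm dip}(x, r) = \sigma_0 \left[ 1 - \exp\!\left( -\frac{r^2 Q_s^2(x)}{4} \right) \right],
\qquad Q_s^2(x) = Q_0^2 \left( \frac{x_0}{x} \right)^{\lambda}
```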

  14. QCD at D0 and CDF

    SciTech Connect

    Blazey, G.C.

    1995-05-01

    Selected recent Quantum Chromodynamics (QCD) results from the D0 and CDF experiments at the Fermilab Tevatron are presented and discussed. The inclusive jet and inclusive triple differential dijet cross sections are compared to next-to-leading order QCD calculations. The sensitivity of the dijet cross section to parton distribution functions (for hadron momentum fractions of approximately 0.01 to 0.4) will constrain the gluon distribution of the proton. Two analyses of dijet production at large rapidity separation are presented. The first analysis tests the contributions of higher order processes to dijet production and can be considered a test of BFKL or GLAP parton evolution. The second analysis yields a strong rapidity gap signal consistent with colorless exchange between the scattered partons. The prompt photon inclusive cross section is consistent with next-to-leading order QCD only at the highest transverse momenta. The discrepancy at lower momenta may be indicative of higher order processes imparting a transverse momentum, or "k_T", to the partonic interaction. The first measurement of the strong coupling constant from the Tevatron is also presented. The coupling constant can be determined from the ratio of W + 1 jet to W + 0 jet cross sections and a next-to-leading order QCD calculation.

  15. Electroweak symmetry breaking via QCD.

    PubMed

    Kubo, Jisuke; Lim, Kher Sham; Lindner, Manfred

    2014-08-29

    We propose a new mechanism to generate the electroweak scale within the framework of QCD, which is extended to include conformally invariant scalar degrees of freedom belonging to a larger irreducible representation of SU(3)_c. The electroweak symmetry breaking is triggered dynamically via the Higgs portal by the condensation of the colored scalar field around 1 TeV. The mass of the colored boson is restricted to be 350 GeV ≲ m_S ≲ 3 TeV, with the upper bound obtained from perturbative renormalization group evolution. This implies that the colored boson can be produced at the LHC. If the colored boson is electrically charged, the branching fraction of the Higgs boson decaying into two photons can slightly increase, and moreover, it can be produced at future linear colliders. Our idea of nonperturbative electroweak scale generation can serve as a new starting point for more realistic model building in solving the hierarchy problem.

  16. The QCD running coupling

    DOE PAGES

    Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.

    2016-05-09

    Here, we review present knowledge on $\alpha_s$, the Quantum Chromodynamics (QCD) running coupling. The dependence of $\alpha_s(Q^2)$ on momentum transfer $Q$ encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We will survey our present theoretical and empirical knowledge of $\alpha_s(Q^2)$, including constraints at high $Q^2$ predicted by perturbative QCD, and constraints at small $Q^2$ based on models of nonperturbative dynamics. In the first, introductory, part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss $\alpha_s(Q^2)$ in the high momentum transfer domain of QCD. We review how $\alpha_s$ is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization scale ambiguity. We also report recent important experimental measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality" which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of the gauge and renormalization scheme. In the last part of the review, we discuss $\alpha_s(Q^2)$ in the low momentum transfer domain, where there has been no consensus on how to define $\alpha_s(Q^2)$ or its analytic behavior. We will discuss the various approaches used for low energy calculations. Among them, we will discuss the light-front holographic approach to QCD in the strongly coupled

  17. The QCD running coupling

    SciTech Connect

    Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.

    2016-05-09

    Here, we review present knowledge on $\alpha_s$, the Quantum Chromodynamics (QCD) running coupling. The dependence of $\alpha_s(Q^2)$ on momentum transfer $Q$ encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We will survey our present theoretical and empirical knowledge of $\alpha_s(Q^2)$, including constraints at high $Q^2$ predicted by perturbative QCD, and constraints at small $Q^2$ based on models of nonperturbative dynamics. In the first, introductory, part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss $\alpha_s(Q^2)$ in the high momentum transfer domain of QCD. We review how $\alpha_s$ is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization scale ambiguity. We also report recent important experimental measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality" which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of the gauge and renormalization scheme. In the last part of the review, we discuss $\alpha_s(Q^2)$ in the low momentum transfer domain, where there has been no consensus on how to define $\alpha_s(Q^2)$ or its analytic behavior. We will discuss the various approaches used for low energy calculations. Among them, we will discuss the light-front holographic approach to QCD in the strongly coupled regime and its prediction

  18. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2015-12-22

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  19. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  20. Kernel Optimization in Discriminant Analysis

    PubMed Central

    You, Di; Hamsici, Onur C.; Martinez, Aleix M.

    2011-01-01

    Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results using a large number of databases and classifiers demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072
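
    A hedged stand-in for the kernel-parameter search problem described here: the paper derives a dedicated criterion targeting linear class separation in the mapped space, while the sketch below simply cross-validates the RBF width of a kernel map followed by a linear discriminant, using scikit-learn (the dataset and grid are illustrative).

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Nonlinearly separable toy data.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

# RBF kernel map followed by a linear discriminant; grid-search the kernel
# width so that the classes become (approximately) linearly separable.
model = Pipeline([("kpca", KernelPCA(kernel="rbf", n_components=10)),
                  ("lda", LinearDiscriminantAnalysis())])
search = GridSearchCV(model, param_grid={"kpca__gamma": np.logspace(-2, 2, 9)}, cv=5)
search.fit(X, y)
print("best gamma:", search.best_params_["kpca__gamma"],
      "cv accuracy:", round(search.best_score_, 3))
```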

  1. Novel QCD Phenomena

    SciTech Connect

    Brodsky, Stanley J.; /SLAC

    2007-07-06

    I discuss a number of novel topics in QCD, including the use of the AdS/CFT correspondence between Anti-de Sitter space and conformal gauge theories to obtain an analytically tractable approximation to QCD in the regime where the QCD coupling is large and constant. In particular, there is an exact correspondence between the fifth-dimension coordinate z of AdS space and a specific impact variable {zeta} which measures the separation of the quark constituents within the hadron in ordinary space-time. This connection allows one to compute the analytic form of the frame-independent light-front wavefunctions of mesons and baryons, the fundamental entities which encode hadron properties and allow the computation of exclusive scattering amplitudes. I also discuss a number of novel phenomenological features of QCD. Initial- and final-state interactions from gluon-exchange, normally neglected in the parton model, have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, the breakdown of the Lam Tung relation in Drell-Yan reactions, and nuclear shadowing and non-universal antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss tests of hidden color in nuclear wavefunctions, the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency, and anomalous heavy quark effects. The presence of direct higher-twist processes where a proton is produced in the hard subprocess can explain the large proton-to-pion ratio seen in high centrality heavy ion collisions.

  2. Small-x evolution in the next-to-leading order

    SciTech Connect

    Giovanni Antonio Chirilli

    2009-12-01

    After a brief introduction to Deep Inelastic Scattering in the Bjorken limit and in the Regge limit, we discuss the operator product expansion in terms of non-local string operators and in terms of Wilson lines. We show how the high-energy behavior of amplitudes in gauge theories can be reformulated in terms of the evolution of Wilson-line operators. In the leading order this evolution is governed by the non-linear Balitsky-Kovchegov (BK) equation. In order to see whether this equation is relevant for existing or future deep inelastic scattering (DIS) accelerators (such as the Electron Ion Collider (EIC) or the Large Hadron electron Collider (LHeC)), one needs to know the next-to-leading order (NLO) corrections. In addition, the NLO corrections define the scale of the running coupling constant in the BK equation and therefore determine the magnitude of the leading-order cross sections. In Quantum Chromodynamics (QCD), the next-to-leading order BK equation has both conformal and non-conformal parts. The NLO kernel for the composite operators resolves into a sum of the conformal part and the running-coupling part. The resulting NLO kernel of the BK equation in QCD is presented.
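
    For orientation, the leading-order BK equation referred to above can be written for the dipole scattering amplitude N(x, y; Y) in the standard form below; the NLO corrections discussed in this work modify this kernel by running-coupling and conformal contributions.

```latex
% Leading-order Balitsky-Kovchegov evolution of the dipole amplitude in rapidity Y
\frac{\partial N(x,y;Y)}{\partial Y}
  = \frac{\alpha_s N_c}{2\pi^2} \int d^2z\,
    \frac{(x-y)^2}{(x-z)^2 (z-y)^2}
    \Big[ N(x,z;Y) + N(z,y;Y) - N(x,y;Y) - N(x,z;Y)\,N(z,y;Y) \Big]
```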

  3. Baryons and QCD

    SciTech Connect

    Nathan Isgur

    1997-03-01

    The author presents an idiosyncratic view of baryons which calls for a marriage between quark-based and hadronic models of QCD. He advocates a treatment based on valence quark plus glue dominance of hadron structure, with the sea of q pairs (in the form of virtual hadron pairs) as important corrections.

  4. QCD: Quantum Chromodynamics

    ScienceCinema

    Lincoln, Don

    2016-07-12

    The strongest force in the universe is the strong nuclear force and it governs the behavior of quarks and gluons inside protons and neutrons. The name of the theory that governs this force is quantum chromodynamics, or QCD. In this video, Fermilab’s Dr. Don Lincoln explains the intricacies of this dominant component of the Standard Model.

  5. Hadronic laws from QCD

    NASA Astrophysics Data System (ADS)

    Cahill, R. T.

    1992-06-01

    A review is given of progress in deriving the effective action for hadronic physics, $S[\pi, \rho, \omega, \ldots, \bar{N}, N, \ldots]$, from the fundamental defining action of QCD, $S[\bar{q}, q, A^a_\mu]$. This is a problem in quantum field theory and the most success so far has been achieved using functional integral calculus (FIC) techniques. This formulates the problem as an exercise in changing the variables of integration in the functional integrals, from those of the quark and gluon fields to those of the (bare) meson and baryon fields. The appropriate variables are determined by the dynamics of QCD, and the final hadronic variables (essentially the 'normal modes' of QCD) are local fields describing the 'centre-of-mass' motion of extended bound states of quarks. The quarks are extensively dressed by the gluons, and the detailed aspects of the hidden chiral symmetry emerge naturally from the formalism. Particular attention is given to covariant integral equations which determine bare nucleon structure (i.e. in the quenched approximation). These equations, which arise from the closed double-helix diagrams of the FIC analysis, describe the baryons in terms of quark-diquark structure, in the form of Faddeev equations. This hadronisation of QCD also generates the dressing of these baryons by the pions, and the non-local πNN coupling.

  6. REGGE TRAJECTORIES IN QCD

    SciTech Connect

    Radyushkin, Anatoly V.; Efremov, Anatoly Vasilievich; Ginzburg, Ilya F.

    2013-04-01

    We discuss some problems concerning the application of perturbative QCD to high-energy soft processes. We show that summing the contributions of the lowest-twist operators in the non-singlet $t$-channel leads to a Regge-like amplitude. The singlet case is also discussed.

  7. QCD: Quantum Chromodynamics

    SciTech Connect

    Lincoln, Don

    2016-06-17

    The strongest force in the universe is the strong nuclear force and it governs the behavior of quarks and gluons inside protons and neutrons. The name of the theory that governs this force is quantum chromodynamics, or QCD. In this video, Fermilab’s Dr. Don Lincoln explains the intricacies of this dominant component of the Standard Model.

  8. QCD and Hadron Physics

    SciTech Connect

    Brodsky, Stanley J.; Deshpande, Abhay L.; Gao, Haiyan; McKeown, Robert D.; Meyer, Curtis A.; Meziani, Zein-Eddine; Milner, Richard G.; Qiu, Jianwei; Richards, David G.; Roberts, Craig D.

    2015-02-26

    This White Paper presents the recommendations and scientific conclusions from the Town Meeting on QCD and Hadronic Physics that took place in the period 13-15 September 2014 at Temple University as part of the NSAC 2014 Long Range Planning process. The meeting was held in coordination with the Town Meeting on Phases of QCD and included a full day of joint plenary sessions of the two meetings. The goals of the meeting were to report and highlight progress in hadron physics in the seven years since the 2007 Long Range Plan (LRP07), and present a vision for the future by identifying the key questions and plausible paths to solutions which should define the next decade. The introductory summary details the recommendations and their supporting rationales, as determined at the Town Meeting on QCD and Hadron Physics, and the endorsements that were voted upon. The larger document is organized as follows. Section 2 highlights major progress since the 2007 LRP. It is followed, in Section 3, by a brief overview of the physics program planned for the immediate future. Finally, Section 4 provides an overview of the physics motivations and goals associated with the next QCD frontier: the Electron-Ion-Collider.

  9. QCD results from CDF

    SciTech Connect

    Plunkett, R.; The CDF Collaboration

    1991-10-01

    Results are presented for hadronic jet and direct photon production at √s = 1800 GeV. The data are compared with next-to-leading order QCD calculations. A new limit on the scale of possible composite structure of the quarks is also reported. 12 refs., 4 figs.

  10. QCD physics at CDF

    SciTech Connect

    Devlin, T.; CDF Collaboration

    1996-10-01

    The CDF collaboration is engaged in a broad program of QCD measurements at the Fermilab Tevatron Collider. I will discuss inclusive jet production at center-of-mass energies of 1800 GeV and 630 GeV, properties of events with very high total transverse energy and dijet angular distributions.

  11. Progress in lattice QCD

    SciTech Connect

    Andreas S. Kronfeld

    2002-09-30

    After reviewing some of the mathematical foundations and numerical difficulties facing lattice QCD, I review the status of several calculations relevant to experimental high-energy physics. The topics considered are moments of structure functions, which may prove relevant to search for new phenomena at the LHC, and several aspects of flavor physics, which are relevant to understanding CP and flavor violation.

  12. Phenomenology Using Lattice QCD

    NASA Astrophysics Data System (ADS)

    Gupta, R.

    2005-08-01

    This talk provides a brief summary of the status of lattice QCD calculations of the light quark masses and the kaon bag parameter B_K. Precise estimates of these four fundamental parameters of the standard model, i.e., m_u, m_d, m_s and the CP-violating parameter η, help constrain grand unified models and could provide a window to new physics.

  13. Phenomenology Using Lattice QCD

    NASA Astrophysics Data System (ADS)

    Gupta, R.

    This talk provides a brief summary of the status of lattice QCD calculations of the light quark masses and the kaon bag parameter B_K. Precise estimates of these four fundamental parameters of the standard model, i.e., m_u, m_d, m_s and the CP-violating parameter η, help constrain grand unified models and could provide a window to new physics.

  14. Kernel machine SNP-set testing under multiple candidate kernels.

    PubMed

    Wu, Michael C; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M; Harmon, Quaker E; Lin, Xinyi; Engel, Stephanie M; Molldrem, Jeffrey J; Armistead, Paul M

    2013-04-01

    Joint testing for the cumulative effect of multiple single-nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM)-testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori because this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest P-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power vs. using the best candidate kernel.
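
    A minimal sketch of the composite-kernel idea (not the perturbation procedures developed in the paper): form several candidate genotype kernels, combine them with fixed weights, and evaluate the usual variance-component score statistic Q = (y − ȳ)ᵀ K (y − ȳ); assessing significance would still require the appropriate null distribution. All names and the toy data below are illustrative.

```python
import numpy as np

def linear_kernel(G):
    """Linear genotype kernel K = G G^T."""
    return G @ G.T

def ibs_kernel(G):
    """Identity-by-state kernel for genotypes coded 0/1/2."""
    n, p = G.shape
    diff = np.abs(G[:, None, :] - G[None, :, :])     # pairwise allele differences
    return (2 * p - diff.sum(axis=2)) / (2 * p)

def composite_score_statistic(G, y, weights=(0.5, 0.5)):
    """Score-type statistic under an unweighted composite of two candidate kernels."""
    K = weights[0] * linear_kernel(G) + weights[1] * ibs_kernel(G)
    r = y - y.mean()                                 # residuals under an intercept-only null
    return float(r @ K @ r)

# Toy usage with simulated genotypes and a continuous trait.
rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(100, 8)).astype(float)
y = 0.3 * G[:, 0] + rng.standard_normal(100)
print("composite-kernel score statistic Q =", composite_score_statistic(G, y))
```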

  15. Baryons in holographic QCD

    NASA Astrophysics Data System (ADS)

    Nawa, Kanabu; Suganuma, Hideo; Kojo, Toru

    2007-04-01

    We study baryons in holographic QCD with D4/D8/D8¯ multi-D-brane system. In holographic QCD, the baryon appears as a topologically nontrivial chiral soliton in a four-dimensional effective theory of mesons. We call this topological soliton brane-induced Skyrmion. Some review of D4/D8/D8¯ holographic QCD is presented from the viewpoints of recent hadron physics and QCD phenomenologies. A four-dimensional effective theory with pions and ρ mesons is uniquely derived from the non-Abelian Dirac-Born-Infeld (DBI) action of D8 brane with D4 supergravity background at the leading order of large Nc, without small amplitude expansion of meson fields to discuss chiral solitons. For the hedgehog configuration of pion and ρ-meson fields, we derive the energy functional and the Euler-Lagrange equation of brane-induced Skyrmion from the meson effective action induced by holographic QCD. Performing the numerical calculation, we obtain the soliton solution and figure out the pion profile F(r) and the ρ-meson profile G˜(r) of the brane-induced Skyrmion with its total energy, energy density distribution, and root-mean-square radius. These results are compared with the experimental quantities of baryons and also with the profiles of standard Skyrmion without ρ mesons. We analyze interaction terms of pions and ρ mesons in brane-induced Skyrmion, and find a significant ρ-meson component appearing in the core region of a baryon.

  16. Novel QCD Phenomenology

    SciTech Connect

    Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins

    2011-08-12

    I review a number of topics where conventional wisdom in hadron physics has been challenged. For example, hadrons can be produced at large transverse momentum directly within a hard higher-twist QCD subprocess, rather than from jet fragmentation. Such 'direct' processes can explain the deviations from perturbative QCD predictions in measurements of inclusive hadron cross sections at fixed x{sub T} = 2p{sub T}/{radical}s, as well as the 'baryon anomaly', the anomalously large proton-to-pion ratio seen in high centrality heavy ion collisions. Initial-state and final-state interactions of the struck quark, the soft-gluon rescattering associated with its Wilson line, lead to Bjorken-scaling single-spin asymmetries, diffractive deep inelastic scattering, the breakdown of the Lam-Tung relation in Drell-Yan reactions, as well as nuclear shadowing and antishadowing. The Gribov-Glauber theory predicts that antishadowing of nuclear structure functions is not universal, but instead depends on the flavor quantum numbers of each quark and antiquark, thus explaining the anomalous nuclear dependence measured in deep-inelastic neutrino scattering. Since shadowing and antishadowing arise from the physics of leading-twist diffractive deep inelastic scattering, one cannot attribute such phenomena to the structure of the nucleus itself. It is thus important to distinguish 'static' structure functions, the probability distributions computed from the square of the target light-front wavefunctions, versus 'dynamical' structure functions which include the effects of the final-state rescattering of the struck quark. The importance of the J = 0 photon-quark QCD contact interaction in deeply virtual Compton scattering is also emphasized. The scheme-independent BLM method for setting the renormalization scale is discussed. Eliminating the renormalization scale ambiguity greatly improves the precision of QCD predictions and increases the sensitivity of searches for new physics at the LHC

  17. Novel QCD Phenomenology

    NASA Astrophysics Data System (ADS)

    Brodsky, Stanley J.

    2011-04-01

    I review a number of topics where conventional wisdom in hadron physics has been challenged. For example, hadrons can be produced at large transverse momentum directly within a hard QCD subprocess, rather than from jet fragmentation. Such "direct" higher-twist processes can explain the deviations from perturbative QCD predictions in measurements of inclusive hadron cross sections at fixed x_T = 2p_T/√s, as well as the "baryon anomaly", the anomalously large proton-to-pion ratio seen in high centrality heavy ion collisions. Initial-state and final-state interactions of the struck quark, the soft-gluon rescattering associated with its Wilson line, lead to Bjorken-scaling single-spin asymmetries, diffractive deep inelastic scattering, the breakdown of the Lam-Tung relation in Drell-Yan reactions, as well as nuclear shadowing and antishadowing. The Gribov-Glauber theory predicts that antishadowing of nuclear structure functions is not universal, but instead depends on the flavor quantum numbers of each quark and antiquark, thus explaining the anomalous nuclear dependence measured in deep-inelastic neutrino scattering. Since shadowing and antishadowing arise from the physics of leading-twist diffractive deep inelastic scattering, one cannot attribute such phenomena to the structure of the nucleus itself. It is thus important to distinguish "static" structure functions, the probability distributions computed from the square of the target light-front wavefunctions, versus "dynamical" structure functions which include the effects of the final-state rescattering of the struck quark. The importance of the J = 0 photon-quark QCD contact interaction in deeply virtual Compton scattering is also emphasized. The scheme-independent BLM method for setting the renormalization scale is discussed. The elimination of the renormalization scale ambiguity would greatly improve the precision of QCD predictions and increase the sensitivity of searches for new physics at the LHC. Other novel

  18. Baryons in holographic QCD

    SciTech Connect

    Nawa, Kanabu; Suganuma, Hideo; Kojo, Toru

    2007-04-15

    We study baryons in holographic QCD with D4/D8/D8 multi-D-brane system. In holographic QCD, the baryon appears as a topologically nontrivial chiral soliton in a four-dimensional effective theory of mesons. We call this topological soliton brane-induced Skyrmion. Some review of D4/D8/D8 holographic QCD is presented from the viewpoints of recent hadron physics and QCD phenomenologies. A four-dimensional effective theory with pions and {rho} mesons is uniquely derived from the non-Abelian Dirac-Born-Infeld (DBI) action of D8 brane with D4 supergravity background at the leading order of large N{sub c}, without small amplitude expansion of meson fields to discuss chiral solitons. For the hedgehog configuration of pion and {rho}-meson fields, we derive the energy functional and the Euler-Lagrange equation of brane-induced Skyrmion from the meson effective action induced by holographic QCD. Performing the numerical calculation, we obtain the soliton solution and figure out the pion profile F(r) and the {rho}-meson profile G-tilde(r) of the brane-induced Skyrmion with its total energy, energy density distribution, and root-mean-square radius. These results are compared with the experimental quantities of baryons and also with the profiles of standard Skyrmion without {rho} mesons. We analyze interaction terms of pions and {rho} mesons in brane-induced Skyrmion, and find a significant {rho}-meson component appearing in the core region of a baryon.

  19. MAGNETIC FIELDS FROM QCD PHASE TRANSITIONS

    SciTech Connect

    Tevzadze, Alexander G.; Kisslinger, Leonard; Kahniashvili, Tina; Brandenburg, Axel

    2012-11-01

    We study the evolution of QCD phase transition-generated magnetic fields (MFs) in freely decaying MHD turbulence of the expanding universe. We consider an MF generation model that starts from basic non-perturbative QCD theory and predicts stochastic MFs with an amplitude of the order of 0.02 {mu}G and small magnetic helicity. We employ direct numerical simulations to model the MHD turbulence decay and identify two different regimes: a 'weakly helical' turbulence regime, when magnetic helicity increases during decay, and 'fully helical' turbulence, when maximal magnetic helicity is reached and an inverse cascade develops. The results of our analysis show that in the most optimistic scenario the magnetic correlation length in the comoving frame can reach 10 kpc with the amplitude of the effective MF being 0.007 nG. We demonstrate that the considered model of magnetogenesis can provide the seed MF for galaxies and clusters.

  20. Non-perturbative QCD Modeling and Meson Physics

    SciTech Connect

    Nguyen, T.; Souchlas, N. A.; Tandy, P. C.

    2009-04-20

    Using a ladder-rainbow kernel previously established for light quark hadron physics, we explore the extension to masses and electroweak decay constants of ground state pseudoscalar and vector quarkonia and heavy-light mesons in the c- and b-quark regions. We make a systematic study of the effectiveness of a constituent mass concept as a replacement for a heavy quark dressed propagator for such states. The difference between vector and axial vector current correlators is explored within the same model to provide an estimate of the four quark chiral condensate and the leading distance scale for the onset of non-perturbative phenomena in QCD.

  1. Quark-gluon vertex model and lattice-QCD data

    SciTech Connect

    Bhagwat, M.S.; Tandy, P.C.

    2004-11-01

    A model for the dressed-quark-gluon vertex, at zero gluon momentum, is formed from a nonperturbative extension of the two Feynman diagrams that contribute at one loop in perturbation theory. The required input is an existing ladder-rainbow model Bethe-Salpeter kernel from an approach based on the Dyson-Schwinger equations; no new parameters are introduced. The model includes an Ansatz for the triple-gluon vertex. Two of the three vertex amplitudes from the model provide a pointwise description of the recent quenched-lattice-QCD data. An estimate of the effects of quenching is made.

  2. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    7 CFR § 51.1415 (Standards for Grades of Pecans in the Shell, Definitions): Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  3. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    7 CFR § 981.408 (Administrative Rules and Regulations): Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored...

  4. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    7 CFR § 981.8 (Regulating Handling, Definitions): Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel,...

  5. The quark propagator in QCD and G2 QCD

    NASA Astrophysics Data System (ADS)

    Contant, Romain; Huber, Markus Q.

    2017-03-01

    QCD-like theories provide testing grounds for truncations of functional equations at non-zero density, since comparisons with lattice results are possible due to the absence of the sign problem. As a first step towards such a comparison, we determine for QCD and G2 QCD the chiral and confinement/deconfinement transitions from the quark propagator Dyson-Schwinger equation at zero chemical potential by calculating the chiral and dual chiral condensates, respectively.

  6. Kernel phase and kernel amplitude in Fizeau imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-12-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
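
    As a rough illustration of the matrix-based idea described above (a sketch, not the paper's pipeline): to first order, pupil-plane phase errors map linearly onto the measured Fourier phases, and kernel phases are the projections of those Fourier phases onto the left null space of that transfer matrix, so instrumental phase errors cancel. The 3-aperture transfer matrix and all names below are a hypothetical toy.

      import numpy as np

      def kernel_phase_operator(A, tol=1e-10):
          """Rows K spanning the left null space of the phase-transfer matrix A
          (K @ A = 0).  A maps pupil-plane phase errors to uv-plane (Fourier)
          phases to first order, so K @ (uv phases) is insensitive to them."""
          U, s, _ = np.linalg.svd(A, full_matrices=True)
          rank = int(np.sum(s > tol * s.max()))
          return U[:, rank:].T

      # Hypothetical 3-aperture toy: baseline phases are differences of aperture
      # phases, so A is the familiar closure-phase geometry and K recovers the
      # single closure phase.
      A = np.array([[ 1.0, -1.0,  0.0],
                    [ 0.0,  1.0, -1.0],
                    [-1.0,  0.0,  1.0]])
      K = kernel_phase_operator(A)
      rng = np.random.default_rng(0)
      phi_pupil = rng.standard_normal(3)            # instrumental pupil-plane phases
      print(np.allclose(K @ (A @ phi_pupil), 0.0))  # kernel phases self-calibrate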

  7. Hybrid baryons in QCD

    SciTech Connect

    Dudek, Jozef J.; Edwards, Robert G.

    2012-03-21

    In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses finding a set of positive parity hybrid baryons with quantum numbers $N_{1/2^+},\,N_{1/2^+},\,N_{3/2^+},\,N_{3/2^+},\,N_{5/2^+}$ and $\Delta_{1/2^+},\,\Delta_{3/2^+}$ at an energy scale above the first band of `conventional' excited positive parity baryons. This pattern of states is compatible with a color octet gluonic excitation having $J^{P}=1^{+}$ as previously reported in the hybrid meson sector and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.

  8. Hadron Resonances from QCD

    NASA Astrophysics Data System (ADS)

    Dudek, Jozef J.

    2016-03-01

    I describe how hadron-hadron scattering amplitudes are related to the eigenstates of QCD in a finite cubic volume. The discrete spectrum of such eigenstates can be determined from correlation functions computed using lattice QCD, and the corresponding scattering amplitudes extracted. I review results from the Hadron Spectrum Collaboration who have used these finite volume methods to study ππ elastic scattering, including the ρ resonance, as well as coupled-channel πK, ηK scattering. The very recent extension to the case where an external current acts is also presented, considering the reaction πγ* → ππ, from which the unstable ρ → πγ transition form factor is extracted. Ongoing calculations are advertised and the outlook for finite volume approaches is presented.

  9. QCD tests at CDF

    SciTech Connect

    Kovacs, E.; CDF Collaboration

    1996-02-01

    We present results for the inclusive jet cross section and the dijet mass distribution. The inclusive cross section and dijet mass both exhibit significant deviations from the predictions of NLO QCD for jets with E_T > 200 GeV, or dijet masses > 400 GeV/c^2. We show that it is possible, within a global QCD analysis that includes the CDF inclusive jet data, to modify the gluon distribution at high x. The resulting increase in the jet cross-section predictions is 25-35%. Owing to the presence of k_T smearing effects, the direct photon data does not provide as strong a constraint on the gluon distribution as previously thought. A comparison of the CDF and UA2 jet data, which have a common range in x, is plagued by theoretical and experimental uncertainties, and cannot at present confirm the CDF excess or the modified gluon distribution.

  10. Charmonium from Lattice QCD

    SciTech Connect

    Jozef Dudek

    2007-08-05

    Charmonium is an attractive system for the application of lattice QCD methods. While the sub-threshold spectrum has been considered in some detail in previous works, it is only very recently that excited and higher-spin states and further properties such as radiative transitions and two-photon decays have come to be calculated. I report on this recent progress with reference to work done at Jefferson Lab.

  11. QCD tests with CDF

    SciTech Connect

    Flaugher, B.

    1992-09-01

    Measurement of scaling violations, the inclusive photon and diphoton cross sections as well as the photon-jet and jet-jet angular distributions are discussed and compared to leading order and next-to-leading order QCD. A study of four-jet events is described, with a limit on the cross section for double parton scattering. The multiplicity of jets in W boson events is compared to theoretical predictions.

  12. Hadronic Resonances from Lattice QCD

    SciTech Connect

    John Bulava; Robert Edwards; George Fleming; K. Jimmy Juge; Adam C. Lichtl; Nilmani Mathur; Colin Morningstar; David Richards; Stephen J. Wallace

    2007-06-16

    The determination of the pattern of hadronic resonances as predicted by Quantum Chromodynamics requires the use of non-perturbative techniques. Lattice QCD has emerged as the dominant tool for such calculations, and has produced many QCD predictions which can be directly compared to experiment. The concepts underlying lattice QCD are outlined, methods for calculating excited states are discussed, and results from an exploratory Nucleon and Delta baryon spectrum study are presented.

  13. Hadronic Resonances from Lattice QCD

    SciTech Connect

    Lichtl, Adam C.; Bulava, John; Morningstar, Colin; Edwards, Robert; Mathur, Nilmani; Richards, David; Fleming, George; Juge, K. Jimmy; Wallace, Stephen J.

    2007-10-26

    The determination of the pattern of hadronic resonances as predicted by Quantum Chromodynamics requires the use of non-perturbative techniques. Lattice QCD has emerged as the dominant tool for such calculations, and has produced many QCD predictions which can be directly compared to experiment. The concepts underlying lattice QCD are outlined, methods for calculating excited states are discussed, and results from an exploratory Nucleon and Delta baryon spectrum study are presented.

  14. QCD results from the Tevatron

    SciTech Connect

    C. Mesropian

    2002-07-12

    The Tevatron hadron collider provides the unique opportunity to study Quantum Chromodynamics, QCD, at the highest energies. The results summarized in this talk, although representing different experimental objects, such as hadronic jets and electromagnetic clusters, serve to determine the fundamental input ingredients of QCD as well as to search for new physics. The authors present results from QCD studies at the Tevatron from Run 1 data, including jet and direct photon production, and a measurement of the strong coupling constant.

  15. CGC/saturation approach: A new impact-parameter-dependent model in the next-to-leading order of perturbative QCD

    NASA Astrophysics Data System (ADS)

    Contreras, Carlos; Levin, Eugene; Meneses, Rodrigo; Potashnikova, Irina

    2016-12-01

    This paper is the first attempt to build a color glass condensate/saturation model based on the next-to-leading-order (NLO) corrections to linear and nonlinear evolution in QCD. We assume that the renormalization scale is the saturation momentum and find that the scattering amplitude has geometric scaling behavior deep in the saturation domain with the explicit formula of this behavior at large τ = r²Q_s². We build a model that includes this behavior, as well as the known ingredients: (i) the behavior of the scattering amplitude in the vicinity of the saturation momentum, using the NLO Balitsky-Fadin-Kuraev-Lipatov kernel, (ii) the pre-asymptotic behavior of ln(Q_s²(Y)), as a function of Y, and (iii) the impact parameter behavior of the saturation momentum, which has exponential behavior ∝ exp(-m b) at large b. We demonstrate that the model is able to describe the experimental data for the deep inelastic structure function. Despite this, our model has difficulties that are related to the small value of the QCD coupling at Q_s(Y_0) and the large values of the saturation momentum, which indicate the theoretical inconsistency of our description.
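
    A toy numerical sketch of the geometric-scaling ingredient mentioned above, using a simple GBW-like amplitude and an exponential impact-parameter profile; the functional forms and all parameter values (Q0_SQ, LAMBDA, M_SLOPE) are illustrative placeholders, not the fitted NLO model of the paper.

      import numpy as np

      # Illustrative (placeholder) parameters -- not the fitted NLO values.
      Q0_SQ   = 1.0    # GeV^2, saturation scale at Y = Y0
      LAMBDA  = 0.2    # growth rate of ln Qs^2 with rapidity
      M_SLOPE = 0.7    # GeV, exponential impact-parameter slope ~ exp(-m b)

      def Qs_squared(Y, b):
          """Toy saturation momentum: exponential in Y, exp(-m b) in impact parameter."""
          return Q0_SQ * np.exp(LAMBDA * Y) * np.exp(-M_SLOPE * b)

      def dipole_amplitude(r, Y, b):
          """GBW-like geometric-scaling amplitude N = 1 - exp(-tau/4), tau = r^2 Qs^2."""
          tau = r**2 * Qs_squared(Y, b)
          return 1.0 - np.exp(-tau / 4.0)

      # Geometric scaling: the amplitude depends on r and Y only through tau,
      # so these two (r, Y) pairs with equal tau give the same N.
      for r, Y in [(2.0, 4.0), (1.0, 4.0 + np.log(4.0) / LAMBDA)]:
          print(round(dipole_amplitude(r, Y, b=1.0), 6))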

  16. Recent QCD results from CDF

    SciTech Connect

    I. Gorelov

    2001-12-28

    Experimental results on QCD measurements obtained in recent analyses and based on data collected with the CDF Detector from the Run 1b Tevatron running cycle are presented. The scope of the talk includes major QCD topics: a measurement of the strong coupling constant α_s, extracted from inclusive jet spectra, and the underlying event energy contribution to a jet cone. Another experimental object of QCD interest, prompt photon production, is also discussed, and the updated measurements by CDF of the inclusive photon cross section at 630 GeV and 1800 GeV, and the comparison with NLO QCD predictions, are presented.

  17. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  18. Evolution.

    ERIC Educational Resources Information Center

    Mayr, Ernst

    1978-01-01

    Traces the history of evolution theory from Lamarck and Darwin to the present. Discusses natural selection in detail. Suggests that, besides biological evolution, there is also a cultural evolution which is more rapid than the former. (MA)

  19. Nanosized Pd37(CO)28{P(p-Tolyl)3}12 containing geometrically unprecedented central 23-atom interpenetrating tri-icosahedral palladium kernel of double icosahedral units: its postulated metal-core evolution and resulting stereochemical implications.

    PubMed

    Mednikov, Evgueni G; Dahl, Lawrence F

    2008-11-05

    Pd37(CO)28{P(p-Tolyl)3}12 (1) was obtained in approximately 50% yield by the short-time thermolysis of Pd10(CO)12{P(p-Tolyl)3}6 in THF solution followed by crystallization via layering with hexane under N2. The low-temperature (100 K) CCD X-ray diffraction study of 1 revealed an unusual non-spheroidal Pd37-atom polyhedron, which may be readily envisioned to originate via the initial formation of a heretofore non-isolated central Pd23 kernel composed of three interpenetrating trigonal-planar double icosahedra (DI) that are oriented along the three bonding edges of its interior Pd3 triangle. This central Pd23 kernel is augmented by face condensations with two additional phosphorus-free and 12 tri(p-C6H4Me)phosphine-ligated Pd atoms, which lower the pseudo-symmetry of the resulting 37-atom metal core from D(3h) to C2. The 12 P atoms and 28 bridging CO connectivities preserve the pseudo-C2 symmetry. The central Pd23 kernel in 1 provides the only crystallographic example of the 23-atom member of the double icosahedral family of "twinned" interpenetrating icosahedra (II), which includes the 19-atom two II (1 DI), the 23-atom three II (3 DI), the 26-atom four II (6 DI), and the 29-atom five II (9 DI). The n-atoms of these DI models coincide exactly with prominent atom-peak maxima of 19, 23, 26, and 29, respectively, in the mass spectrum of charged argon clusters formed in a low-temperature free-jet expansion. The only previous crystallographically proven 26- and 29-atom DI members are the central pseudo-T(d) tetrahedral Pd26 kernel (4 II, 6 DI) in the PMe3-ligated Pd29Ni3(CO)22(PMe3)13 (2) and the central pseudo-D(3h) trigonal-bipyramidal Pd29 kernel (5 II, 9 DI) in the PMe3-ligated Pd35(CO)23(PMe3)15 (3). Two highly important major stereochemical implications are noted: (1) The formation of geometrically identical idealized architectures for these three II palladium kernels with corresponding DI models constructed for the charged argon clusters provides compelling

  20. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  1. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013
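
    A much-simplified sketch of kernel-based associative recall, included only to fix ideas: stored patterns are weighted by a Gaussian (RBF) kernel similarity to a noisy cue. It omits the on-line attractor updates, consolidation, and variable input dimensionality that characterize the model above; all names and parameter values are illustrative.

      import numpy as np

      def kernel_recall(query, patterns, sigma=0.5):
          """Toy kernel associative memory: weight each stored pattern by a
          Gaussian (RBF) kernel similarity to the query and return the
          normalized combination."""
          d2 = np.sum((patterns - query) ** 2, axis=1)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          w = w / w.sum()
          return w @ patterns

      rng = np.random.default_rng(1)
      patterns = rng.choice([-1.0, 1.0], size=(5, 16))     # five stored binary patterns
      noisy = patterns[2] + 0.3 * rng.standard_normal(16)  # corrupted cue
      print(np.allclose(np.sign(kernel_recall(noisy, patterns)), patterns[2]))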

  2. QCD coupling constants and VDM

    SciTech Connect

    Erkol, G.; Ozpineci, A.; Zamiralov, V. S.

    2012-10-23

    QCD sum rules for coupling constants of vector mesons with baryons are constructed. The corresponding QCD sum rules for electric charges and magnetic moments are also derived and, with the use of the vector-meson-dominance model, related to the coupling constants. The role of VDM as a criterion for the reciprocal validity of the sum rules is considered.

  3. Recent Developments in Perturbative QCD

    SciTech Connect

    Dixon, Lance J.; /SLAC

    2005-07-11

    I review recent progress in perturbative QCD on two fronts: extending next-to-next-to-leading order QCD corrections to a broader range of collider processes, and applying twistor-space methods (and related spinoffs) to computations of multi-parton scattering amplitudes.

  4. QCD: Questions, challenges, and dilemmas

    SciTech Connect

    Bjorken, J.

    1996-11-01

    An introduction to some outstanding issues in QCD is presented, with emphasis on work by Diakonov and co-workers on the influence of the instanton vacuum on low-energy QCD observables. This includes the calculation of input valence-parton distributions for deep-inelastic scattering. 35 refs., 3 figs.

  5. QCD, with strings attached

    NASA Astrophysics Data System (ADS)

    Güijosa, Alberto

    2016-10-01

    In the nearly 20 years that have elapsed since its discovery, the gauge-gravity correspondence has become established as an efficient tool to explore the physics of a large class of strongly-coupled field theories. A brief overview is given here of its formulation and a few of its applications, emphasizing attempts to emulate aspects of the strong-coupling regime of quantum chromodynamics (QCD). To the extent possible, the presentation is self-contained, and does not presuppose knowledge of string theory.

  6. QCD and strings

    SciTech Connect

    Sakai, Tadakatsu; Sugimoto, Shigeki

    2005-12-02

    We propose a holographic dual of QCD with massless flavors on the basis of a D4/D8-brane configuration within a probe approximation. We are led to a five-dimensional Yang-Mills theory on a curved space-time along with a Chern-Simons five-form on it, both of which provide us with a unifying framework to study the massless pion and an infinite number of massive vector mesons. We make sample computations of the physical quantities that involve the mesons and compare them with the experimental data. It is found that most of the results of this model are compatible with the experiments.

  7. Nuclear reactions from lattice QCD

    NASA Astrophysics Data System (ADS)

    Briceño, Raúl A.; Davoudi, Zohreh; Luu, Thomas C.

    2015-02-01

    One of the overarching goals of nuclear physics is to rigorously compute properties of hadronic systems directly from the fundamental theory of strong interactions, quantum chromodynamics (QCD). In particular, the hope is to perform reliable calculations of nuclear reactions which will impact our understanding of environments that occur during big bang nucleosynthesis, the evolution of stars and supernovae, and within nuclear reactors and high energy/density facilities. Such calculations, being truly ab initio, would include all two-nucleon and three-nucleon (and higher) interactions in a consistent manner. Currently, lattice quantum chromodynamics (LQCD) provides the only reliable option for performing calculations of some of the low-energy hadronic observables. With the aim of bridging the gap between LQCD and nuclear many-body physics, the Institute for Nuclear Theory held a workshop on Nuclear Reactions from LQCD in March 2013. In this review article, we report on the topics discussed in this workshop and the path planned to move forward in the upcoming years.

  8. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  9. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  10. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle...

  11. High energy asymptotics of scattering processes in QCD

    NASA Astrophysics Data System (ADS)

    Enberg, R.; Golec-Biernat, K.; Munier, S.

    2005-10-01

    High energy scattering in the QCD parton model was recently shown to be a reaction-diffusion process and, thus, to lie in the universality class of the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation. We recall that the latter appears naturally in the context of the parton model. We provide a thorough numerical analysis of the mean-field approximation, given in QCD by the Balitsky-Kovchegov equation. In the framework of a simple stochastic toy model that captures the relevant features of QCD, we discuss and illustrate the universal properties of such stochastic models. We investigate, in particular, the validity of the mean-field approximation and how it is broken by fluctuations. We find that the mean-field approximation is a good approximation in the initial stages of the evolution in rapidity.
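
    For orientation, a minimal finite-difference integration of the deterministic FKPP equation ∂_t u = D ∂_x² u + u(1 - u), whose traveling-wave front is the mean-field analogue referred to above; the grid, diffusion constant, and step sizes are arbitrary illustrative choices, and the stochastic (fluctuating) version studied in the paper is not implemented here.

      import numpy as np

      # Explicit finite-difference integration of du/dt = D d2u/dx2 + u(1 - u).
      D, dx, dt, nx, nt = 1.0, 0.5, 0.05, 400, 600      # dt < dx^2 / (2 D): stable
      x = np.arange(nx) * dx
      u = np.where(x < 10.0, 1.0, 0.0)                  # steep initial front

      for _ in range(nt):
          lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
          u = u + dt * (D * lap + u * (1.0 - u))
          u[0], u[-1] = 1.0, 0.0                        # pin the boundary values

      # For steep initial data the front (where u = 1/2) approaches the
      # asymptotic FKPP speed 2*sqrt(D), so it ends up near x ~ 10 + 2*nt*dt.
      print(x[np.argmin(np.abs(u - 0.5))])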

  12. QCD for Postgraduates (1/5)

    ScienceCinema

    None

    2016-07-12

    Modern QCD - Lecture 1 Starting from the QCD Lagrangian we will revisit some basic QCD concepts and derive fundamental properties like gauge invariance and isospin symmetry and will discuss the Feynman rules of the theory. We will then focus on the gauge group of QCD and derive the Casimirs C_F and C_A and some useful color identities.

  13. Dynamics for QCD on an Infinite Lattice

    NASA Astrophysics Data System (ADS)

    Grundling, Hendrik; Rudolph, Gerd

    2017-02-01

    We prove the existence of the dynamics automorphism group for Hamiltonian QCD on an infinite lattice in R^3, and this is done in a C*-algebraic context. The existence of ground states is also obtained. Starting with the finite lattice model for Hamiltonian QCD developed by Kijowski, Rudolph (cf. J Math Phys 43:1796-1808 [15], J Math Phys 46:032303 [16]), we state its field algebra and a natural representation. We then generalize this representation to the infinite lattice, and construct a Hilbert space which has represented on it all the local algebras (i.e., kinematics algebras associated with finite connected sublattices) equipped with the correct graded commutation relations. On a suitably large C*-algebra acting on this Hilbert space, and containing all the local algebras, we prove that there is a one parameter automorphism group, which is the pointwise norm limit of the local time evolutions along a sequence of finite sublattices, increasing to the full lattice. This is our global time evolution. We then take as our field algebra the C*-algebra generated by all the orbits of the local algebras w.r.t. the global time evolution. Thus the time evolution creates the field algebra. The time evolution is strongly continuous on this choice of field algebra, though not on the original larger C*-algebra. We define the gauge transformations, explain how to enforce the Gauss law constraint, show that the dynamics automorphism group descends to the algebra of physical observables and prove that gauge invariant ground states exist.

  14. QCD and Supernovas

    NASA Astrophysics Data System (ADS)

    Barnes, T.

    2005-12-01

    In this contribution we briefly summarize aspects of the physics of QCD which are relevant to the supernova problem. The topic of greatest importance is the equation of state (EOS) of nuclear and strongly-interacting matter, which is required to describe the physics of the proto-neutron star (PNS) and the neutron star remnant (NSR) formed during a supernova event. Evaluation of the EOS in the regime of relevance for these systems, especially the NSR, requires detailed knowledge of the spectrum and strong interactions of hadrons of the accessible hadronic species, as well as other possible phases of strongly interacting matter, such as the quark-gluon plasma (QGP). The forces between pairs of baryons (both nonstrange and strange) are especially important in determining the EOS at NSR densities. Predictions for these forces are unfortunately rather model dependent where not constrained by data, and there are several suggestions for the QCD mechanism underlying these short-range hadronic interactions. The models most often employed for determining these strong interactions are broadly of two types, 1) meson exchange models (usually assumed in the existing neutron star and supernova literature), and 2) quark-gluon models (mainly encountered in the hadron, nuclear and heavy-ion literature). Here we will discuss the assumptions made in these models, and discuss how they are applied to the determination of hadronic forces that are relevant to the supernova problem.

  15. Hybrid baryons in QCD

    DOE PAGES

    Dudek, Jozef J.; Edwards, Robert G.

    2012-03-21

    In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses finding a set of positive parity hybrid baryons with quantum numbers $N_{1/2^+},\,N_{1/2^+},\,N_{3/2^+},\,N_{3/2^+},\,N_{5/2^+}$ and $\Delta_{1/2^+},\,\Delta_{3/2^+}$ at an energy scale above the first band of `conventional' excited positive parity baryons. This pattern of states is compatible with a color octet gluonic excitation having $J^{P}=1^{+}$ as previously reported in the hybrid meson sector and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.

  16. A Test of Ice Self-Collection Kernels Using Aircraft Data.

    NASA Astrophysics Data System (ADS)

    Field, P. R.; Heymsfield, A. J.; Bansemer, A.

    2006-02-01

    Aircraft observations from the Cirrus Regional Study of Tropical Anvils and Cirrus Layers (CRYSTAL) Florida Area Cirrus Experiment (FACE) campaign obtained in the anvil of a large convective storm from 26 July 2002 are presented. During this flight a Lagrangian spiral descent was made, allowing the evolution of the ice particle size distribution to be followed. Relative humidities during 1 km of the descent (from -11° to -3°C) were within 4% of ice saturation. It was assumed that the ice particle size distribution was evolving through the process of aggregation alone. Three idealized ice-ice collection kernels were used in a model of ice aggregation and compared to the observed ice particle size distribution evolution: a geometric sweep-out kernel, a Golovin (sum of particle masses) kernel, and a modified-Golovin kernel (sum of particle masses raised to a power). The Golovin kernel performed worst. The sweep-out kernel produced good agreement with the observations when a constant aggregation efficiency of 0.09 was used. The modified-Golovin kernel performed the best and implied that the aggregation efficiency of sub-300-μm particles was greater than unity when compared with a geometric sweep-out kernel.
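
    Schematic forms of the three idealized collection kernels compared above; the prefactors, units, and example inputs are placeholders, and only the constant aggregation efficiency of 0.09 for the sweep-out kernel is taken from the abstract.

      import numpy as np

      def golovin_kernel(m1, m2, b=1.0):
          """Golovin kernel: proportional to the sum of the particle masses."""
          return b * (m1 + m2)

      def modified_golovin_kernel(m1, m2, b=1.0, p=0.8):
          """Modified Golovin kernel: sum of the masses raised to a power p."""
          return b * (m1 + m2) ** p

      def sweepout_kernel(d1, d2, v1, v2, e_agg=0.09):
          """Geometric sweep-out kernel, E * pi/4 * (D1 + D2)^2 * |v1 - v2|,
          with the constant aggregation efficiency 0.09 quoted above."""
          return e_agg * np.pi / 4.0 * (d1 + d2) ** 2 * abs(v1 - v2)

      # Example: two crystals of 1 mm and 2 mm falling at 0.8 and 1.2 m/s.
      print(sweepout_kernel(1e-3, 2e-3, 0.8, 1.2))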


  17. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    NASA Astrophysics Data System (ADS)

    Frontiere, Nicholas; Raskin, Cody D.; Owen, J. Michael

    2017-03-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that utilizes a first-order consistent reproducing kernel, a smoothing function that exactly interpolates linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact that RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, linear momentum, and energy are all rigorously conserved without any assumption about kernel symmetries, while additionally maintaining approximate angular momentum conservation. Our approach starts from a rigorously consistent interpolation theory, where we derive the evolution equations to enforce the appropriate conservation properties, at the sacrifice of full consistency in the momentum equation. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods (such as preserving Galilean invariance and manifest conservation of mass, momentum, and energy) while improving on many of the shortcomings of SPH, particularly the overly aggressive artificial viscosity and zeroth-order inaccuracy. We compare CRKSPH to two different modern SPH formulations (pressure based SPH and compatibly differenced SPH), demonstrating the advantages of our new formulation when modeling fluid mixing, strong shock, and adiabatic phenomena.
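
    A one-dimensional sketch of the first-order-consistent reproducing-kernel correction that underlies the scheme described above: the base kernel is multiplied by a linear correction whose coefficients are fixed by the zeroth- and first-moment conditions, so linear fields are interpolated exactly. The Gaussian base kernel, particle layout, and volumes below are illustrative choices, not the CRKSPH discretization itself.

      import numpy as np

      def gaussian_W(r, h):
          """Base SPH-like kernel (1D Gaussian); purely illustrative."""
          return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

      def rk_interpolate(x, xj, fj, Vj, h):
          """First-order-consistent reproducing-kernel interpolation at x:
          W^R_j = (A + B*(xj - x)) * W(x - xj), with A and B fixed by the
          constant- and linear-reproducing moment conditions."""
          W = gaussian_W(x - xj, h)
          dxj = xj - x
          m0, m1, m2 = np.sum(Vj * W), np.sum(Vj * dxj * W), np.sum(Vj * dxj**2 * W)
          det = m0 * m2 - m1**2
          A, B = m2 / det, -m1 / det
          return np.sum(Vj * fj * (A + B * dxj) * W)

      # Irregularly spaced particles: a linear field is reproduced to round-off,
      # which a plain (uncorrected) SPH summation would not achieve in general.
      rng = np.random.default_rng(3)
      xj = np.sort(rng.random(50)) * 10.0
      Vj = np.gradient(xj)                      # rough per-particle volumes
      fj = 3.0 * xj + 1.0                       # linear test field
      print(rk_interpolate(5.0, xj, fj, Vj, h=0.8), 3.0 * 5.0 + 1.0)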

  18. Theta angle in holographic QCD

    NASA Astrophysics Data System (ADS)

    Järvinen, Matti

    2017-03-01

    V-QCD is a class of effective holographic models for QCD which fully includes the backreaction of quarks on gluon dynamics. The physics of the θ-angle and the axial anomaly can be consistently included in these models. We analyze their phase diagrams over ranges of values of the quark mass, N_f/N_c, and θ, computing observables such as the topological susceptibility and the meson masses. At small quark mass, where effective chiral Lagrangians are reliable, they agree with the predictions of V-QCD.

  19. Lattice QCD and Nuclear Physics

    SciTech Connect

    Konstantinos Orginos

    2007-03-01

    A steady stream of developments in Lattice QCD has made it possible today to begin to address the question of how nuclear physics emerges from the underlying theory of strong interactions. A central role in this understanding is played by both the effective field theory description of nuclear forces and the ability to perform accurate non-perturbative calculations in low-energy QCD. Here I present some recent results that attempt to extract important low energy constants of the effective field theory of nuclear forces from lattice QCD.

  20. NLO Hierarchy of Wilson Lines Evolution

    SciTech Connect

    Balitsky, Ian

    2015-03-01

    The high-energy behavior of QCD amplitudes can be described in terms of the rapidity evolution of Wilson lines. I present the hierarchy of evolution equations for Wilson lines in the next-to-leading order.

  1. The Symmetries of QCD

    ScienceCinema

    Sekhar Chivukula

    2016-07-12

    The symmetries of a quantum field theory can be realized in a variety of ways. Symmetries can be realized explicitly or approximately, they can be spontaneously broken, or, via an anomaly, quantum effects can dynamically eliminate a symmetry that was present at the classical level. Quantum Chromodynamics (QCD), the modern theory of the strong interactions, exemplifies each of these possibilities. The interplay of these effects determines the spectrum of particles that we observe and, ultimately, accounts for 99% of the mass of ordinary matter.

  2. Dyson-Schwinger equations : density, temperature and continuum strong QCD.

    SciTech Connect

    Roberts, C. D.; Schmidt, S. M.; Physics

    2000-01-01

    Continuum strong QCD is the application of models and continuum quantum field theory to the study of phenomena in hadronic physics, which includes, e.g., the spectrum of QCD bound states and their interactions, and the transition to, and properties of, a quark gluon plasma. We provide a contemporary perspective, couched primarily in terms of the Dyson-Schwinger equations but also making comparisons with other approaches and models. Our discourse provides a practitioners' guide to features of the Dyson-Schwinger equations [such as confinement and dynamical chiral symmetry breaking] and canvasses phenomenological applications to light meson and baryon properties in cold, sparse QCD. These provide the foundation for an extension to hot, dense QCD, which is probed via the introduction of the intensive thermodynamic variables: chemical potential and temperature. We describe order parameters whose evolution signals deconfinement and chiral symmetry restoration, and chronicle their use in demarcating the quark gluon plasma phase boundary and characterizing the plasma's properties. Hadron traits change in an equilibrated plasma. We exemplify this and discuss putative signals of the effects. Finally, since plasma formation is not an equilibrium process, we discuss recent developments in kinetic theory and its application to describing the evolution from a relativistic heavy ion collision to an equilibrated quark gluon plasma.

  3. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  4. LATTICE QCD AT FINITE TEMPERATURE.

    SciTech Connect

    PETRECZKY, P.

    2005-03-12

    I review recent progress in lattice QCD at finite temperature. Results on the transition temperature will be summarized. Recent progress in understanding in-medium modifications of interquark forces and quarkonia spectral functions at finite temperatures is discussed.

  5. Aligning Biomolecular Networks Using Modular Graph Kernels

    NASA Astrophysics Data System (ADS)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
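
    To make the "graph kernels as scoring functions" step concrete, here is a deliberately simple vertex-label histogram kernel between two hypothetical subnetworks; it is a stand-in illustration, not one of the kernels implemented in BiNA, and the protein labels are invented.

      from collections import Counter

      def label_histogram_kernel(g1_labels, g2_labels):
          """Very simple graph kernel: dot product of vertex-label histograms,
          used here only to illustrate kernels as subnetwork similarity scores."""
          h1, h2 = Counter(g1_labels.values()), Counter(g2_labels.values())
          return sum(h1[k] * h2[k] for k in h1.keys() & h2.keys())

      # Hypothetical subnetworks with protein "labels" (e.g. functional classes)
      subnet_a = {"p1": "kinase", "p2": "kinase", "p3": "phosphatase"}
      subnet_b = {"q1": "kinase", "q2": "phosphatase", "q3": "phosphatase"}
      print(label_histogram_kernel(subnet_a, subnet_b))   # 2*1 + 1*2 = 4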

  6. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, the license for educational use, and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is carried out, resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on the RTOS.

  7. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
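
    As a point of reference for the approach above, a plain Gaussian kernel density estimate in data space, which the Mercer-kernel/feature-space construction generalizes; the bandwidth and the toy data are illustrative only.

      import numpy as np

      def kde_gaussian(x_eval, data, bandwidth):
          """Plain Gaussian kernel density estimate in data space -- a baseline
          that the Mercer-kernel / feature-space construction generalizes."""
          x_eval = np.atleast_1d(x_eval).astype(float)[:, None]     # (n_eval, 1)
          z = (x_eval - data[None, :]) / bandwidth                  # (n_eval, n_data)
          return (np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)).mean(axis=1) / bandwidth

      # Two-component toy data; the estimate is largest near the component means.
      rng = np.random.default_rng(0)
      data = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(1.5, 1.0, 500)])
      print(kde_gaussian([-2.0, 0.0, 1.5], data, bandwidth=0.3))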

  8. QCD factorization for hadronic B decays: Proofs and higher-order corrections

    NASA Astrophysics Data System (ADS)

    Pecjak, Benjamin Dale

    Several issues related to the QCD factorization approach to exclusive hadronic B decays are discussed. This includes a proof of factorization in B → K*γ using the soft-collinear effective theory, and an examination of higher-order corrections to QCD factorization for two-body decays into heavy-light states, such as B → Dπ, and light-light final states, such as B → Kπ, ππ. The proof of factorization in B → K*γ is arguably the most complicated QCD factorization formula proven so far. It is shown that reparameterization invariance in the intermediate effective theory restricts the appearance of transverse momentum components and 3-particle Fock states to operators that can be absorbed into the QCD form factor. This proof also includes an extension of SCET to deal with two collinear directions. The examination of higher-order corrections to QCD factorization has implications for using this technique to extract CP-violating weak phases from data taken at the B factories. The renormalon calculus is used to calculate the β₀α_s² contributions to the hard scattering kernels, and also to analyze the strength of power corrections due to soft gluon exchange. It is shown that while power corrections are generally small, the higher-order perturbative contributions to the hard scattering kernels have much larger imaginary parts than those at next-to-leading order (NLO). This significantly enhances some CP asymmetries compared to the NLO results, which is an effect that would survive a two-loop calculation unless there were large multi-loop corrections not related to the β₀α_s² terms of the perturbative expansion.

  9. The Emergence of Hadrons from QCD Color

    NASA Astrophysics Data System (ADS)

    Brooks, William; Color Dynamics in Cold Matter (CDCM) Collaboration

    2015-10-01

    The formation of hadrons from energetic quarks, the dynamical enforcement of QCD confinement, is not well understood at a fundamental level. In Deep Inelastic Scattering, modifications of the distributions of identified hadrons emerging from nuclei of different sizes reveal a rich variety of spatial and temporal characteristics of the hadronization process, including its dependence on spin, flavor, energy, and hadron mass and structure. The EIC will feature a wide range of kinematics, allowing a complete investigation of medium-induced gluon bremsstrahlung by the propagating quarks, leading to partonic energy loss. This fundamental process, which is also at the heart of jet quenching in heavy ion collisions, can be studied for light and heavy quarks at the EIC through observables quantifying hadron ``attenuation'' for a variety of hadron species. Transverse momentum broadening of hadrons, which is sensitive to the nuclear gluonic field, will also be accessible, and can be used to test our understanding from pQCD of how this quantity evolves with pathlength, as well as its connection to partonic energy loss. The evolution of the forming hadrons in the medium will shed new light on the dynamical origins of the forces between hadrons, and thus ultimately on the nuclear force. Supported by the Comision Nacional de Investigacion Cientifica y Tecnologica (CONICYT) of Chile.

  10. On the convergence of the inverse diffraction transform kernel using Cesàro summability

    NASA Astrophysics Data System (ADS)

    Pallotta, M.

    1995-12-01

    In diffraction tomography, optical information processing, and, more generally, Fourier optics, the diffraction transform solves both the direct and the inverse boundary value propagation problem for the Helmholtz equation. Its kernel is itself an integral. It is the representation of the evolution operator associated with translations of a constrained Cartesian coordinate. This fact threatens the inverse scattering problem with a divergence if the transform kernel is understood as a Cauchy integral. The kernel is, however, everywhere convergent if its integral representation is interpreted as a summable integral.
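
    The summability idea invoked above can be illustrated on the simplest divergent example: the (C,1) Cesàro means, i.e. running averages of the partial sums, assign the value 1/2 to Grandi's series even though its partial sums have no Cauchy limit. The snippet below is only this generic illustration, not the diffraction-kernel integral itself.

      import numpy as np

      def cesaro_means(terms):
          """(C,1) Cesaro means: running averages of the partial sums."""
          partial = np.cumsum(terms)
          return np.cumsum(partial) / np.arange(1, len(terms) + 1)

      # Grandi's series 1 - 1 + 1 - 1 + ... has no Cauchy limit of partial sums,
      # but its (C,1) means converge to 1/2 -- the same regularization idea that
      # renders the inverse diffraction kernel convergent as a summable integral.
      terms = (-1.0) ** np.arange(1000)
      print(cesaro_means(terms)[-1])     # ~0.5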

  11. QCD at collider energies

    NASA Astrophysics Data System (ADS)

    Nicolaidis, A.; Bordes, G.

    1986-05-01

    We examine available experimental distributions of transverse energy and transverse momentum, obtained at the CERN pp̄ collider, in the context of quantum chromodynamics. We consider the following. (i) The hadronic transverse energy released during W± production. This hadronic transverse energy is made out of two components: a soft component which we parametrize using minimum-bias events and a semihard component which we calculate from QCD. (ii) The transverse momentum of the produced W±. If the transverse momentum (or the transverse energy) results from a single gluon jet we use the formalism of Dokshitzer, Dyakonov, and Troyan, while if it results from multiple-gluon emission we use the formalism of Parisi and Petronzio. (iii) The relative transverse momentum of jets. While for W± production quarks play an essential role, jet production at moderate p_T and present energies is dominated by gluon-gluon scattering and therefore we can study the Sudakov form factor of the gluon. We suggest also how through a Hankel transform of experimental data we can have direct access to the Sudakov form factors of quarks and gluons.
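
    The Hankel-transform step suggested above can be sketched generically as the order-zero Bessel transform F(q_T) = ∫₀^∞ db b J₀(q_T b) f(b); the quadrature below is a straightforward, unoptimized illustration checked on a Gaussian b-space profile, not an analysis of collider data.

      import numpy as np
      from scipy.integrate import quad
      from scipy.special import j0

      def hankel0(f, qT, b_max=50.0):
          """Order-zero Hankel transform F(qT) = int_0^inf db b J0(qT b) f(b),
          evaluated by plain quadrature (not tuned for strongly oscillatory
          tails that arise in realistic resummation)."""
          val, _ = quad(lambda b: b * j0(qT * b) * f(b), 0.0, b_max, limit=200)
          return val

      # Check on a Gaussian b-space profile, whose transform is again a Gaussian.
      for q in (0.5, 1.0, 2.0):
          print(hankel0(lambda b: np.exp(-0.5 * b**2), q), np.exp(-0.5 * q**2))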

  12. Induced QCD I: theory

    NASA Astrophysics Data System (ADS)

    Brandt, Bastian B.; Lohmayer, Robert; Wettig, Tilo

    2016-11-01

    We explore an alternative discretization of continuum SU(N_c) Yang-Mills theory on a Euclidean spacetime lattice, originally introduced by Budzcies and Zirnbauer. In this discretization the self-interactions of the gauge field are induced by a path integral over N_b auxiliary boson fields, which are coupled linearly to the gauge field. The main progress compared to earlier approaches is that N_b can be as small as N_c. In the present paper we (i) extend the proof that the continuum limit of the new discretization reproduces Yang-Mills theory in two dimensions from gauge group U(N_c) to SU(N_c), (ii) derive refined bounds on N_b for non-integer values, and (iii) perform a perturbative calculation to match the bare parameter of the induced gauge theory to the standard lattice coupling. In follow-up papers we will present numerical evidence in support of the conjecture that the induced gauge theory reproduces Yang-Mills theory also in three and four dimensions, and explore the possibility to integrate out the gauge fields to arrive at a dual formulation of lattice QCD.

  13. Chiral limit of QCD

    SciTech Connect

    Gupta, R.

    1994-12-31

    This talk contains an analysis of quenched chiral perturbation theory and its consequences. The chiral behavior of a number of quantities such as the pion mass m_π², the Bernard-Golterman ratios R and {sub X}, the masses of nucleons, and the kaon B-parameter are examined to see if the singular terms induced by the additional Goldstone boson, η′, are visible in present data. The overall conclusion (different from that presented at the lattice meeting) of this analysis is that even though there are some caveats attached to the indications of the extra terms induced by η′ loops, the standard expressions break down when extrapolating the quenched data with m_q < m_s/2 to physical light quarks. I then show that due to the single and double poles in the quenched η′, the axial charge of the proton cannot be calculated using the Adler-Bell-Jackiw anomaly condition. I conclude with a review of the status of the calculation of light quark masses from lattice QCD.

  14. Recent QCD results from the Tevatron

    SciTech Connect

    Pickarz, Henryk; CDF and D0 Collaborations

    1997-02-01

    Recent QCD results from the CDF and D0 detectors at the Tevatron proton-antiproton collider are presented. An outlook for future QCD tests at the Tevatron collider is also briefly discussed. 27 refs., 11 figs.

  15. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  16. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  17. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  18. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  19. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  20. Travel-Time and Amplitude Sensitivity Kernels

    DTIC Science & Technology

    2011-09-01

    amplitude sensitivity kernels shown in the lower panels concentrate about the corresponding eigenrays. Each 3D kernel exhibits a broad negative...in 2 and 3 dimensions have similar shapes to corresponding travel-time sensitivity kernels (TSKs), centered about the respective eigenrays

  1. Adaptive wiener image restoration kernel

    SciTech Connect

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
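
    A generic frequency-domain sketch of the Wiener restoration step described above, with a constant noise-to-signal ratio standing in for the adaptive estimate of the patent; the Gaussian OTF and all parameter values are illustrative.

      import numpy as np

      def wiener_restore(image, otf, nsr=1e-2):
          """Generic frequency-domain Wiener restoration: the restoration kernel is
          G = conj(H) / (|H|^2 + NSR).  A constant noise-to-signal ratio stands in
          for the adaptive, locally estimated one described above."""
          G = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
          return np.real(np.fft.ifft2(G * np.fft.fft2(image)))

      # Toy check: blur a random "scene" with a Gaussian OTF, then restore it.
      rng = np.random.default_rng(0)
      scene = rng.random((64, 64))
      f2 = np.fft.fftfreq(64)[:, None] ** 2 + np.fft.fftfreq(64)[None, :] ** 2
      otf = np.exp(-f2 / (2 * 0.15 ** 2))                # illustrative Gaussian OTF
      blurred = np.real(np.fft.ifft2(otf * np.fft.fft2(scene)))
      print(np.mean((blurred - scene) ** 2),             # restoration lowers the
            np.mean((wiener_restore(blurred, otf) - scene) ** 2))   # squared error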

  2. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  3. Twisted mass QCD for weak matrix elements

    NASA Astrophysics Data System (ADS)

    Pena, Carlos

    2006-12-01

    I report on the application of tmQCD techniques to the computation of hadronic matrix elements of four-fermion operators. Emphasis is put on the computation of B_K in quenched QCD performed by the ALPHA Collaboration. The extension of tmQCD strategies to the study of neutral B-meson mixing is briefly discussed. Finally, some remarks are made concerning proposals to apply tmQCD to the computation of K → ππ amplitudes.

  4. Threefold Complementary Approach to Holographic QCD

    SciTech Connect

    Brodsky, Stanley J.; de Teramond, Guy F.; Dosch, Hans Gunter

    2013-12-27

    A complementary approach, derived from (a) higher-dimensional anti-de Sitter (AdS) space, (b) light-front quantization and (c) the invariance properties of the full conformal group in one dimension leads to a nonperturbative relativistic light-front wave equation which incorporates essential spectroscopic and dynamical features of hadron physics. The fundamental conformal symmetry of the classical QCD Lagrangian in the limit of massless quarks is encoded in the resulting effective theory. The mass scale for confinement emerges from the isomorphism between the conformal group and SO(2,1). This scale appears in the light-front Hamiltonian by mapping to the evolution operator in the formalism of de Alfaro, Fubini and Furlan, which retains the conformal invariance of the action. Remarkably, the specific form of the confinement interaction and the corresponding modification of AdS space are uniquely determined in this procedure.

  5. Quantum chromodynamics (QCD) and collider physics

    SciTech Connect

    Ellis, R.K.; Stirling, W.J.

    1990-08-14

    This report discusses: fundamentals of perturbative QCD; QCD in e⁺e⁻ → hadrons; deep inelastic scattering and parton distributions; the QCD parton model in hadron-hadron collisions; large p_T jet production in hadron-hadron collisions; the production of vector bosons in hadronic collisions; and the production of heavy quarks.

  6. LATTICE QCD THERMODYNAMICS WITH WILSON QUARKS.

    SciTech Connect

    EJIRI,S.

    2007-11-20

    We review studies of QCD thermodynamics by lattice QCD simulations with dynamical Wilson quarks. After explaining the basic properties of QCD with Wilson quarks at finite temperature including the phase structure and the scaling properties around the chiral phase transition, we discuss the critical temperature, the equation of state and heavy-quark free energies.

  7. Lattice QCD input for axion cosmology

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Buchoff, Michael I.; Rinaldi, Enrico

    2015-08-01

    One intriguing beyond-the-Standard-Model particle is the QCD axion, which could simultaneously provide a solution to the Strong CP Problem and account for some, if not all, of the dark matter density in the Universe. This particle is a pseudo-Nambu-Goldstone boson of the conjectured Peccei-Quinn symmetry of the Standard Model. Its mass and interactions are suppressed by a heavy symmetry-breaking scale, f_a, the value of which is roughly greater than 10⁹ GeV (or, conversely, the axion mass, m_a, is roughly less than 10⁴ μeV). The density of axions in the Universe, which cannot exceed the relic dark matter density and is a quantity of great interest in axion experiments like ADMX, is a result of the early Universe interplay between cosmological evolution and the axion mass as a function of temperature. The latter quantity is proportional to the second derivative of the temperature-dependent QCD free energy with respect to the CP-violating phase, θ. However, this quantity is generically nonperturbative, and previous calculations have only employed instanton models at the high temperatures of interest (roughly 1 GeV). In this and future works, we aim to calculate the temperature-dependent axion mass at small θ from first-principle lattice calculations, with controlled statistical and systematic errors. Once calculated, this temperature-dependent axion mass is input for the classical evolution equations of the axion density of the Universe, which is required to be less than or equal to the dark matter density. Due to a variety of lattice systematic effects at the very high temperatures required, we perform a calculation of the leading small-θ cumulant of the theta vacua on large volume lattices for SU(3) Yang-Mills with high statistics as a first proof of concept, before attempting a full QCD calculation in the future. From these pure glue results, the misalignment mechanism yields the axion mass bound m_a ≥ (14.6 ± 0.1) μeV when Peccei-Quinn breaking occurs
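
    For orientation, the standard small-θ relations behind the observable discussed above (textbook definitions, not results from this record): the curvature of the θ-dependent free energy is the topological susceptibility, which sets the temperature-dependent axion mass entering the misalignment equation of motion,

      \chi_{\rm top}(T) = \left.\frac{\partial^{2} F(\theta,T)}{\partial\theta^{2}}\right|_{\theta=0},
      \qquad
      m_a^{2}(T)\, f_a^{2} = \chi_{\rm top}(T),
      \qquad
      \ddot{\theta} + 3H\,\dot{\theta} + m_a^{2}(T)\,\sin\theta = 0 .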

  8. QCD inequalities for hadron interactions.

    PubMed

    Detmold, William

    2015-06-05

    We derive generalizations of the Weingarten-Witten QCD mass inequalities for particular multihadron systems. For systems of any number of identical pseudoscalar mesons of maximal isospin, these inequalities prove that near-threshold interactions between the constituent mesons must be repulsive and that no bound states can form in these channels. Similar constraints in less symmetric systems are also extracted. These results are compatible with experimental results (where known) and recent lattice QCD calculations, and also lead to a more stringent bound on the nucleon mass than previously derived, m_N ≥ (3/2) m_π.

  9. Hadron scattering, resonances, and QCD

    SciTech Connect

    Briceno, Raul

    2016-12-01

    The non-perturbative nature of quantum chromodynamics (QCD) has historically left a gap in our understanding of the connection between the fundamental theory of the strong interactions and the rich structure of experimentally observed phenomena. For the simplest properties of stable hadrons, this is now circumvented with the use of lattice QCD (LQCD). In this talk I discuss a path towards a rigorous determination of few-hadron observables from LQCD. I illustrate the power of the methodology by presenting recently determined scattering amplitudes in the light-meson sector and their resonance content.

  10. The supercritical pomeron in QCD.

    SciTech Connect

    White, A. R.

    1998-06-29

    Deep-inelastic diffractive scaling violations have provided fundamental insight into the QCD pomeron, suggesting a single gluon inner structure rather than that of a perturbative two-gluon bound state. This talk outlines a derivation of a high-energy, transverse momentum cut-off, confining solution of QCD. The pomeron, in first approximation, is a single reggeized gluon plus a "wee parton" component that compensates for the color and particle properties of the gluon. This solution corresponds to a supercritical phase of Reggeon Field Theory.

  11. Neutron star structure from QCD

    NASA Astrophysics Data System (ADS)

    Fraga, Eduardo S.; Kurkela, Aleksi; Vuorinen, Aleksi

    2016-03-01

    In this review article, we argue that our current understanding of the thermodynamic properties of cold QCD matter, originating from first principles calculations at high and low densities, can be used to efficiently constrain the macroscopic properties of neutron stars. In particular, we demonstrate that combining state-of-the-art results from Chiral Effective Theory and perturbative QCD with the current bounds on neutron star masses, the Equation of State of neutron star matter can be obtained to an accuracy better than 30% at all densities.

  12. Lattice QCD: Status and Prospect

    SciTech Connect

    Ukawa, Akira

    2006-02-08

    A brief review is given of the current status and near-future prospects of lattice QCD studies of the Standard Model. After summarizing a bit of history, we describe current attempts toward the inclusion of dynamical up, down and strange quarks. Recent results on the light hadron mass spectrum as well as those on heavy quark quantities are described. Recent work on lattice pentaquark searches is summarized. We touch upon the PACS-CS Project for building our next machine for lattice QCD, and conclude with a summary of the computer situation and the physics possibilities over the next several years.

  13. Recent QCD results from CDF

    SciTech Connect

    Yun, J.C.

    1990-10-10

    In this paper we report recent QCD analyses with the new data taken from the CDF detector. CDF recorded an integrated luminosity of 4.4 nb{sup {minus}1} during the 1988--1989 run at a center-of-mass system (CMS) energy of 1.8 TeV. The major topics of this report are inclusive jet, dijet, trijet and direct photon analyses. These measurements are compared to QCD predictions. For the inclusive jet and dijet analyses, tests of quark compositeness are emphasized. 11 refs., 6 figs.

  14. Glueball decay in holographic QCD

    SciTech Connect

    Hashimoto, Koji; Tan, C.-I; Terashima, Seiji

    2008-04-15

    Using holographic QCD based on D4-branes and D8-anti-D8-branes, we have computed couplings of glueballs to light mesons. We describe glueball decay by explicitly calculating its decay widths and branching ratios. Interestingly, while glueballs remain less well understood both theoretically and experimentally, our results are found to be consistent with the experimental data for the scalar glueball candidate f{sub 0}(1500). More generally, holographic QCD predicts that decay of any glueball to 4{pi}{sup 0} is suppressed, and that mixing of the lightest glueball with qq mesons is small.

  15. J.J. Sakurai Prize for Theoretical Particle Physics: 40 Years of Lattice QCD

    NASA Astrophysics Data System (ADS)

    Lepage, Peter

    2016-03-01

    Lattice QCD was invented in 1973-74 by Ken Wilson, who passed away in 2013. This talk will describe the evolution of lattice QCD through the past 40 years with particular emphasis on its first years, and on the past decade, when lattice QCD simulations finally came of age. Thanks to theoretical breakthroughs in the late 1990s and early 2000s, lattice QCD simulations now produce the most accurate theoretical calculations in the history of strong-interaction physics. They play an essential role in high-precision experimental studies of physics within and beyond the Standard Model of Particle Physics. The talk will include a non-technical review of the conceptual ideas behind this revolutionary development in (highly) nonlinear quantum physics, together with a survey of its current impact on theoretical and experimental particle physics, and prospects for the future. Work supported by the National Science Foundation.

  16. Phenomenological consequences of enhanced bulk viscosity near the QCD critical point

    NASA Astrophysics Data System (ADS)

    Monnai, Akihiko; Mukherjee, Swagato; Yin, Yi

    2017-03-01

    In the proximity of the QCD critical point the bulk viscosity of quark-gluon matter is expected to be proportional to nearly the third power of the critical correlation length, and to become significantly enhanced. This work is the first attempt to study the phenomenological consequences of enhanced bulk viscosity near the QCD critical point. For this purpose, we implement the expected critical behavior of the bulk viscosity within a non-boost-invariant, longitudinally expanding 1+1 dimensional causal relativistic hydrodynamical evolution at nonzero baryon density. We demonstrate that the critically enhanced bulk viscosity induces a substantial nonequilibrium pressure, effectively softening the equation of state, and leads to sizable effects in the flow velocity and single-particle distributions at freeze-out. The observable effects that may arise due to the enhanced bulk viscosity in the vicinity of the QCD critical point can be used as complementary information to facilitate searches for the QCD critical point.

  17. A Framework for Lattice QCD Calculations on GPUs

    SciTech Connect

    Winter, Frank; Clark, M A; Edwards, Robert G; Joo, Balint

    2014-08-01

    Computing platforms equipped with accelerators like GPUs have proven to provide great computational power. However, exploiting such platforms for existing scientific applications is not a trivial task. Current GPU programming frameworks such as CUDA C/C++ require low-level programming from the developer in order to achieve high performance code. As a result, porting of applications to GPUs is typically limited to time-dominant algorithms and routines, leaving the remainder unaccelerated, which raises a serious Amdahl's law issue. The lattice QCD application Chroma allows a different porting strategy to be explored. The layered structure of the software architecture logically separates the data-parallel from the application layer. The QCD Data-Parallel software layer provides data types and expressions with stencil-like operations suitable for lattice field theory and Chroma implements algorithms in terms of this high-level interface. Thus by porting the low-level layer one can effectively move the whole application in one swing to a different platform. The QDP-JIT/PTX library, the reimplementation of the low-level layer, provides a framework for lattice QCD calculations for the CUDA architecture. The complete software interface is supported and thus applications can be run unaltered on GPU-based parallel computers. This reimplementation was possible due to the availability of a JIT compiler (part of the NVIDIA Linux kernel driver) which translates an assembly-like language (PTX) to GPU code. The expression template technique is used to build PTX code generators and a software cache manages the GPU memory. This reimplementation allows us to deploy an efficient implementation of the full gauge-generation program with dynamical fermions on large-scale GPU-based machines such as Titan and Blue Waters, which accelerates the algorithm by more than an order of magnitude.

  18. Two-color QCD at high density

    SciTech Connect

    Boz, Tamer; Skullerud, Jon-Ivar; Giudice, Pietro; Hands, Simon; Williams, Anthony G.

    2016-01-22

    QCD at high chemical potential has interesting properties such as deconfinement of quarks. Two-color QCD, which enables numerical simulations on the lattice, constitutes a laboratory to study QCD at high chemical potential. Among the interesting properties of two-color QCD at high density is the diquark condensation, for which we present recent results obtained on a finer lattice compared to previous studies. The quark propagator in two-color QCD at non-zero chemical potential is referred to as the Gor’kov propagator. We express the Gor’kov propagator in terms of form factors and present recent lattice simulation results.

  19. Controlling quark mass determinations non-perturbatively in three-flavour QCD

    NASA Astrophysics Data System (ADS)

    Campos, Isabel; Fritzsch, Patrick; Pena, Carlos; Preti, David; Ramos, Alberto; Vladikas, Anastassios

    2017-03-01

    The determination of quark masses from lattice QCD simulations requires a non-perturbative renormalization procedure and subsequent scale evolution to high energies, where a conversion to the commonly used MS-bar scheme can be safely established. We present our results for the non-perturbative running of renormalized quark masses in Nf = 3 QCD between the electroweak and a hadronic energy scale, where lattice simulations are at our disposal. Recent theoretical advances, in combination with well-established techniques, allow us to follow the scale evolution with very high statistical accuracy and full control of systematic effects.

  20. Recent progress in lattice QCD

    SciTech Connect

    Sharpe, S.R.

    1992-12-01

    A brief overview of the status of lattice QCD is given, with emphasis on topics relevant to phenomenology. The calculation of the light quark spectrum, the lattice prediction of {alpha} {sub {ovr MS}} (M {sub Z}), and the calculation of f{sub B} are discussed. 3 figs., 3 tabs., 40 refs.

  1. Lattice QCD in Background Fields

    SciTech Connect

    William Detmold, Brian Tiburzi, Andre Walker-Loud

    2009-06-01

    Electromagnetic properties of hadrons can be computed by lattice simulations of QCD in background fields. We demonstrate new techniques for the investigation of charged hadron properties in electric fields. Our current calculations employ large electric fields, motivating us to analyze chiral dynamics in strong QED backgrounds, and subsequently uncover surprising non-perturbative effects present at finite volume.

  2. Meson Resonances from Lattice QCD

    SciTech Connect

    Edwards, Robert G.

    2016-06-01

    There have been recent, significant advances in the determination of the meson spectrum of QCD. Current efforts have focused on the development and application of finite-volume formalisms that allow for the determination of scattering amplitudes as well as resonance behavior in coupled channel systems. I will review some of these recent developments, and demonstrate the viability of the method in meson systems.

  3. Basics of QCD perturbation theory

    SciTech Connect

    Soper, D.E.

    1997-06-01

    This is an introduction to the use of QCD perturbation theory, emphasizing generic features of the theory that enable one to separate short-time and long-time effects. The author also covers some important classes of applications: electron-positron annihilation to hadrons, deeply inelastic scattering, and hard processes in hadron-hadron collisions. 31 refs., 38 figs.

  4. QCD Phase Transitions, Volume 15

    SciTech Connect

    Schaefer, T.; Shuryak, E.

    1999-03-20

    The title of the workshop, ''The QCD Phase Transitions'', in fact happened to be too narrow for its real content. It would be more accurate to say that it was devoted to different phases of QCD and QCD-related gauge theories, with strong emphasis on discussion of the underlying non-perturbative mechanisms which manifest themselves as all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its studies do not get attention proportional to the intellectual challenge they deserve, and that the theorists working on it remain very fragmented. The efforts to create a Theory of Everything including Quantum Gravity have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there has also been tremendous progress and even some shift of attention toward emphasis on the unity of non-perturbative phenomena. For example, we have seen some efforts to connect the lessons from recent progress in supersymmetric theories with those in QCD, as derived from phenomenology and the lattice. Another example is the Maldacena conjecture and related developments, which connect three things: string theory, supergravity and the (N=4) supersymmetric gauge theory. Although the progress mentioned is remarkable by itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of the spectacular non-perturbative physics.

  5. Heavy quark production and QCD

    SciTech Connect

    Purohit, M.V.

    1988-12-01

    Recent results on charm and beauty production in fixed target experiments are reviewed. Particular emphasis is placed on the recent results, on the trend favored by the data, on comparisons with the recently improved QCD predictions, and on what may be expected in the near future. 35 refs., 5 figs.

  6. New results in perturbative QCD

    SciTech Connect

    Ellis, R.K.

    1985-11-01

    Three topics in perturbative QCD important for Super-collider physics are reviewed. The topics are: 2 {r_arrow} 2 jet phenomena calculated in O({alpha}{sub s}{sup 3}); new techniques for the calculation of tree graphs; and colour coherence in jet phenomena. 31 refs., 6 figs.

  7. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle is to learn, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.

  8. Vector meson electroproduction in QCD

    NASA Astrophysics Data System (ADS)

    Lu, Juan; Cai, Xian-Hao; Zhou, Li-Juan

    2012-08-01

    Based on the generalized QCD vector meson dominance model, we study the electroproduction of a vector meson off a proton in the QCD-inspired eikonalized model. Numerical calculations for the total cross section σtot and differential cross section dσ/dt are performed for ρ, ω and φ meson electroproduction in this paper. Since gluons interact among themselves (self-interaction), two gluons can form a glueball with quantum numbers IG = 0+, JPC = 2++, decay width Γt ≈ 100 MeV, and mass mG = 2.23 GeV. Three gluons can form a three-gluon colorless bound state with charge conjugation quantum number C = -1, called the Odderon. The mediators of interactions between projectiles (the quark-antiquark pair fluctuating from the virtual photon) and the proton target (a three-quark system) are the tensor glueball and the Odderon. Our calculated results in the tensor glueball and Odderon exchange model fit the existing data successfully, which evidently shows that our present QCD mechanism is a good description of meson electroproduction off a proton. It should be emphasized that our mechanism is different from the theoretical framework of Block et al. We also believe that the present study and its success are important for the investigation of other vector meson electro- and photoproduction at high energies, as well as for searching for new particles such as tensor glueballs and Odderons, which have been predicted by QCD and the color glass condensate model (CGC). Therefore, in return, it can test the validity of QCD and the CGC model.

  9. Angular Structure of the In-Medium QCD Cascade.

    PubMed

    Blaizot, J-P; Mehtar-Tani, Y; Torres, M A C

    2015-06-05

    We study the angular broadening of a medium-induced QCD cascade. We derive the equation that governs the evolution of the average transverse momentum squared of the gluons in the cascade as a function of the medium length, and we solve this equation analytically. Two regimes are identified. For a medium that is not too large, and for gluons that are not too soft, the transverse momentum grows with the size of the medium according to standard momentum broadening. The other regime, visible for a sufficiently large medium and very soft gluons, is dominated by multiple branchings: there, the average transverse momentum saturates to a value that is independent of the size of the medium. This structure of the in-medium QCD cascade is, at least qualitatively, compatible with the recent LHC data on dijet asymmetry.

  10. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
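
    As a rough illustration of the idea described above (a sketch, not the paper's code), the explicit map can be built from the eigenvalue decomposition K = V Λ V^T of the training kernel matrix; assuming an RBF kernel and numpy:

      import numpy as np

      def rbf_kernel(X, Y, gamma=1.0):
          # Gaussian (RBF) kernel matrix between the rows of X and Y.
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def nonlinear_projection(X_train, X_test, gamma=1.0, tol=1e-10):
          """Map data into an explicit, reduced-dimensional kernel space
          via the eigendecomposition of the training kernel matrix."""
          K = rbf_kernel(X_train, X_train, gamma)
          w, V = np.linalg.eigh(K)              # K = V diag(w) V^T
          keep = w > tol                        # effective dimensionality
          w, V = w[keep], V[:, keep]
          Phi_train = V * np.sqrt(w)            # explicit coordinates, shape (n, r)
          # Out-of-sample points: phi(x) = diag(w)^(-1/2) V^T k(x)
          K_test = rbf_kernel(X_test, X_train, gamma)
          Phi_test = K_test @ V / np.sqrt(w)
          return Phi_train, Phi_test

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          Xtr, Xte = rng.normal(size=(50, 3)), rng.normal(size=(10, 3))
          Ptr, Pte = nonlinear_projection(Xtr, Xte)
          # Sanity check: inner products in the projected space reproduce the kernel.
          print(np.allclose(Ptr @ Ptr.T, rbf_kernel(Xtr, Xtr)))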

  11. On the interface between perturbative and nonperturbative QCD

    NASA Astrophysics Data System (ADS)

    Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.

    2016-06-01

    The QCD running coupling αs(Q2) sets the strength of the interactions of quarks and gluons as a function of the momentum transfer Q. The Q2 dependence of the coupling is required to describe hadronic interactions at both large and short distances. In this article we adopt the light-front holographic approach to strongly-coupled QCD, a formalism which incorporates confinement, predicts the spectroscopy of hadrons composed of light quarks, and describes the low-Q2 analytic behavior of the strong coupling αs(Q2). The high-Q2 dependence of the coupling αs(Q2) is specified by perturbative QCD and its renormalization group equation. The matching of the high and low Q2 regimes of αs(Q2) then determines the scale Q0 which sets the interface between perturbative and nonperturbative hadron dynamics. The value of Q0 can be used to set the factorization scale for DGLAP evolution of hadronic structure functions and the ERBL evolution of distribution amplitudes. We discuss the scheme-dependence of the value of Q0 and the infrared fixed-point of the QCD coupling. Our analysis is carried out for the MS-bar, g1, MOM and V renormalization schemes. Our results show that the discrepancies in the value of αs at large distances seen in the literature can be explained by different choices of renormalization schemes. We also provide the formulae to compute αs(Q2) over the entire range of space-like momentum transfer for the different renormalization schemes discussed in this article.
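
    A minimal numerical sketch of the matching step, assuming a one-loop perturbative coupling and a Gaussian low-Q2 form of the kind used in light-front holography; the functional forms and the values of Lambda, kappa and nf below are illustrative assumptions, not the scheme-dependent results of the article:

      import numpy as np
      from scipy.optimize import brentq

      # Illustrative one-loop matching of a perturbative and a nonperturbative coupling.
      # Parameter values are placeholders for the sketch only.

      def alpha_pQCD(Q2, Lambda2=0.34**2, nf=3):
          beta0 = 11.0 - 2.0 * nf / 3.0
          return 4.0 * np.pi / (beta0 * np.log(Q2 / Lambda2))   # one-loop running

      def alpha_holographic(Q2, kappa=0.5):
          return np.pi * np.exp(-Q2 / (4.0 * kappa**2))          # Gaussian low-Q2 form

      # Q0 is the scale where the two descriptions intersect (Q2 in GeV^2)
      Q0 = np.sqrt(brentq(lambda Q2: alpha_pQCD(Q2) - alpha_holographic(Q2), 0.5, 5.0))
      print(f"matching scale Q0 ~ {Q0:.2f} GeV")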

  12. Diffusion Map Kernel Analysis for Target Classification

    DTIC Science & Technology

    2010-06-01

    Gaussian and Polynomial kernels are most familiar from support vector machines. The Laplacian and Rayleigh were introduced previously in [7]. IV ... Cancer • Clev. Heart: Heart Disease Data Set, Cleveland • Wisc. BC: Wisconsin Breast Cancer Original • Sonar2: Shallow Water Acoustic Toolset [9 ... the Rayleigh kernel captures the embedding with an average PC of 77.3% and a slightly higher PFA than the Gaussian kernel. For the Wisc. BC

  13. anQCD: Fortran programs for couplings at complex momenta in various analytic QCD models

    NASA Astrophysics Data System (ADS)

    Ayala, César; Cvetič, Gorazd

    2016-02-01

    We provide three Fortran programs which evaluate the QCD analytic (holomorphic) couplings Aν(Q2) for complex or real squared momenta Q2. These couplings are holomorphic analogs of the powers a(Q2)ν of the underlying perturbative QCD (pQCD) coupling a(Q2) ≡ αs(Q2)/π, in three analytic QCD models (anQCD): Fractional Analytic Perturbation Theory (FAPT), Two-delta analytic QCD (2δanQCD), and Massive Perturbation Theory (MPT). The index ν can be noninteger. The provided programs do basically the same job as the Mathematica package anQCD.m published by us previously (Ayala and Cvetič, 2015), but are now written in Fortran.
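
    For orientation only, the one-loop minimal analytic (Shirkov-Solovtsov-type) coupling has a simple closed form that is finite at Q2 = Λ2 and can be evaluated at complex Q2; the sketch below is not the anQCD code, and the values of Lambda and nf are illustrative:

      import numpy as np

      # One-loop minimal analytic coupling, shown only for orientation; the anQCD
      # programs implement FAPT, 2delta-anQCD and MPT at higher loop orders.

      def alpha_analytic_1loop(Q2, Lambda=0.4, nf=3):
          """A_1(Q^2) at one loop; Q2 may be complex (off the timelike cut)."""
          beta0 = 11.0 - 2.0 * nf / 3.0
          z = Q2 / Lambda**2
          return (4.0 * np.pi / beta0) * (1.0 / np.log(z) + 1.0 / (1.0 - z))

      print(alpha_analytic_1loop(2.0))           # spacelike, real Q^2 = 2 GeV^2
      print(alpha_analytic_1loop(2.0 + 1.0j))    # complex momentum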

  14. Molecular Hydrodynamics from Memory Kernels

    NASA Astrophysics Data System (ADS)

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^(-3/2). We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius.

  15. LATTICE QCD AT FINITE DENSITY.

    SciTech Connect

    SCHMIDT, C.

    2006-07-23

    I discuss different approaches to finite density lattice QCD. In particular, I focus on the structure of the phase diagram and discuss attempts to determine the location of the critical end-point. Recent results on the transition line as a function of the chemical potential (T{sub c}({mu}{sub q})) are reviewed. Along the transition line, hadronic fluctuations have been calculated, which can be used to characterize properties of the Quark Gluon plasma and eventually can also help to identify the location of the critical end-point in the QCD phase diagram on the lattice and in heavy ion experiments. Furthermore, I comment on the structure of the phase diagram at large {mu}{sub q}.

  16. Precision QCD measurements at HERA

    NASA Astrophysics Data System (ADS)

    Pirumov, Hayk

    2014-11-01

    A review of recent experimental results on perturbative QCD from the HERA experiments H1 and ZEUS is presented. All inclusive deep inelastic cross sections measured by the H1 and ZEUS collaborations in neutral and charged current unpolarised ep scattering are combined. They span six orders of magnitude in negative four-momentum-transfer squared, Q2, and in Bjorken x. This data set is used as the sole input to NLO and NNLO QCD analyses to determine new sets of parton distributions, HERAPDF2.0, with small experimental uncertainties and an estimate of model and parametrisation uncertainties. Also shown are new results on inclusive jet, dijet and trijet differential cross sections measured in neutral current deep inelastic scattering. The precision jet data is used to extract the strong coupling αs at NLO with small experimental errors.

  17. Nuclear forces from lattice QCD

    SciTech Connect

    Ishii, Noriyoshi

    2011-05-06

    Lattice QCD construction of nuclear forces is reviewed. In this method, the nuclear potentials are constructed by solving the Schroedinger equation, where equal-time Nambu-Bethe-Salpeter (NBS) wave functions are regarded as quantum mechanical wave functions. Since the long-distance behavior of equal-time NBS wave functions is controlled by the scattering phase, in exactly the same way as for scattering wave functions in quantum mechanics, the resulting potentials are faithful to the NN scattering data. The derivative expansion of this potential leads to the central and the tensor potentials at the leading order. Some numerical results for these two potentials are shown, based on quenched QCD.

  18. Form factors from lattice QCD

    SciTech Connect

    Dru Renner

    2012-04-01

    Precision computation of hadronic physics with lattice QCD is becoming feasible. The last decade has seen percent-level calculations of many simple properties of mesons, and the last few years have seen calculations of baryon masses, including the nucleon mass, accurate to a few percent. As computational power increases and algorithms advance, the precise calculation of a variety of more demanding hadronic properties will become realistic. With this in mind, I discuss the current lattice QCD calculations of generalized parton distributions with an emphasis on the prospects for well-controlled calculations for these observables as well. I will do this by way of several examples: the pion and nucleon form factors and moments of the nucleon parton and generalized-parton distributions.

  19. Innovations in Lattice QCD Algorithms

    SciTech Connect

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiments in discovering new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I review these algorithms and their impact on the nature of lattice QCD calculations performed today.

  20. Pinning down QCD-matter shear viscosity in A + A collisions via EbyE fluctuations using pQCD + saturation + hydrodynamics

    NASA Astrophysics Data System (ADS)

    Niemi, H.; Eskola, K. J.; Paatelainen, R.; Tuominen, K.

    2016-12-01

    We compute the initial energy densities produced in ultrarelativistic heavy-ion collisions from NLO perturbative QCD using a saturation conjecture to control soft particle production, and describe the subsequent space-time evolution of the system with hydrodynamics, event by event. The resulting centrality dependence of the low-pT observables from this pQCD + saturation + hydro ("EKRT") framework is then compared simultaneously to the LHC and RHIC measurements. With such an analysis we can test the initial state calculation, and constrain the temperature dependence of the shear viscosity-to-entropy ratio η / s of QCD matter. Using these constraints from the current RHIC and LHC measurements we then predict the charged hadron multiplicities and flow coefficients for the 5 TeV Pb + Pb collisions.

  1. Berry Phase in Lattice QCD.

    PubMed

    Yamamoto, Arata

    2016-07-29

    We propose the lattice QCD calculation of the Berry phase, which is defined by the ground state of a single fermion. We perform the ground-state projection of a single-fermion propagator, construct the Berry link variable on a momentum-space lattice, and calculate the Berry phase. As the first application, the first Chern number of the (2+1)-dimensional Wilson fermion is calculated by the Monte Carlo simulation.

  2. Lattice QCD: A Brief Introduction

    NASA Astrophysics Data System (ADS)

    Meyer, H. B.

    A general introduction to lattice QCD is given. The reader is assumed to have some basic familiarity with the path integral representation of quantum field theory. Emphasis is placed on showing that the lattice regularization provides a robust conceptual and computational framework within quantum field theory. The goal is to provide a useful overview, with many references pointing to the following chapters and to freely available lecture series for more in-depth treatments of specific topics.

  3. Hadron physics from lattice QCD

    NASA Astrophysics Data System (ADS)

    Bietenholz, Wolfgang

    2016-07-01

    We sketch the basic ideas of the lattice regularization in Quantum Field Theory, the corresponding Monte Carlo simulations, and applications to Quantum Chromodynamics (QCD). This approach enables the numerical measurement of observables at the non-perturbative level. We comment on selected results, with a focus on hadron masses and the link to Chiral Perturbation Theory. Finally, we address two outstanding issues: topological freezing and the sign problem.

  4. Lattice gauge theory for QCD

    SciTech Connect

    DeGrand, T.

    1997-06-01

    These lectures provide an introduction to lattice methods for nonperturbative studies of Quantum Chromodynamics. Lecture 1: Basic techniques for QCD and results for hadron spectroscopy using the simplest discretizations; Lecture 2: Improved actions--what they are and how well they work; Lecture 3: SLAC physics from the lattice--structure functions, the mass of the glueball, heavy quarks and {alpha}{sub s}(M{sub Z}), and B-{anti B} mixing. 67 refs., 36 figs.

  5. QCD thermodynamics on a lattice

    NASA Astrophysics Data System (ADS)

    Levkova, Ludmila A.

    Numerical simulations of full QCD on anisotropic lattices provide a convenient way to study QCD thermodynamics with fixed physics scales and reduced lattice spacing errors. We report results from calculations with two flavors of dynamical staggered fermions, where all bare parameters and the renormalized anisotropy are kept constant and the temperature is changed in small steps by varying only the number of time slices. Including results from zero-temperature scale setting simulations, which determine the Karsch coefficients, allows for the calculation of the Equation of State at finite temperatures. We also report on studies of the chiral properties of dynamical domain-wall fermions combined with the DBW2 gauge action for different gauge couplings and fermion masses. For quenched theories, the DBW2 action gives a residual chiral symmetry breaking much smaller than what was found with more traditional choices for the gauge action. Our goal is to investigate the possibilities which this and further improvements provide for the study of QCD thermodynamics and other simulations at stronger couplings.

  6. QCD for Postgraduates (2/5)

    ScienceCinema

    None

    2016-07-12

    Modern QCD - Lecture 2. We will start discussing the matter content of the theory and revisit the experimental measurements that led to the discovery of quarks. We will then consider a classic QCD observable, the R-ratio, and use it to illustrate the appearance of UV divergences and the need to renormalize the coupling constant of QCD. We will then discuss asymptotic freedom and confinement. Finally, we will examine a case where soft and collinear infrared divergences appear, will discuss the soft approximation in QCD and will introduce the concept of infrared safe jets.

  7. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  8. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  9. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  10. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  11. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  12. Bergman Kernel from Path Integral

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Klevtsov, Semyon

    2010-01-01

    We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory, and generalize it to supersymmetric quantum mechanics. One physics interpretation of this result is as an expansion of the projector of wave functions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is relevant for the quantum Hall effect in curved space, and for its higher dimensional generalizations. Other applications include the theory of coherent states, the study of balanced metrics, noncommutative field theory, and a conjecture on metrics in black hole backgrounds discussed in [24]. We give a short overview of these various topics. From a conceptual point of view, this expansion is noteworthy as it is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey et al short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry.

  13. Kernel current source density method.

    PubMed

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  14. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

    The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure-phase-like information can be extracted from any direct image, even one taken with a redundant aperture. These new phase-noise immune observable quantities, called kernel phases, are determined a priori from the knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method as it clearly detects and locates with milliarcsecond precision a known companion to a star at angular separation less than the diffraction limit.
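
    A sketch of the linear-algebra step behind this construction, under the usual linearized model in which the measured Fourier phases are the object phases plus a transfer matrix A acting on pupil-plane phase errors; the matrix A below is a random stand-in, not a real pupil model:

      import numpy as np

      # Kernel phases: rows of K span the left null space of the phase-transfer
      # matrix A, so K @ (A @ pupil_errors) = 0 and K @ measured_phases is
      # immune to those pupil-plane errors.

      rng = np.random.default_rng(1)
      n_baselines, n_pupil = 30, 12          # illustrative sizes only
      A = rng.normal(size=(n_baselines, n_pupil))

      U, s, Vt = np.linalg.svd(A, full_matrices=True)
      rank = np.sum(s > 1e-10)
      K = U[:, rank:].T                      # left null space: K @ A = 0

      phase_errors = rng.normal(size=n_pupil)
      object_phases = rng.normal(size=n_baselines)
      measured = object_phases + A @ phase_errors

      print(np.allclose(K @ A, 0.0))                       # kernel of A
      print(np.allclose(K @ measured, K @ object_phases))  # error-free combinations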

  15. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning at the heart of the contemporary ribosome. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory--B3LYP/3-21G*, implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome--namely (i) energetic structural stability and (ii) energetically stable attachment to substrates--are both well satisfied.

  16. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256

  17. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms.
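
    A minimal sketch of the overall recipe on synthetic data (not the authors' implementation): replace the kernel by an explicit Nyström feature map, then train a linear pairwise ranking SVM with squared hinge loss on feature differences:

      import numpy as np
      from sklearn.kernel_approximation import Nystroem
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))                      # documents (synthetic)
      relevance = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)

      # Explicit approximate features in place of the exact kernel matrix
      feature_map = Nystroem(kernel="rbf", gamma=0.5, n_components=50, random_state=0)
      Z = feature_map.fit_transform(X)

      # Pairwise differences with labels sign(y_i - y_j)
      i, j = np.triu_indices(len(Z), k=1)
      keep = relevance[i] != relevance[j]
      pairs = Z[i[keep]] - Z[j[keep]]
      labels = np.sign(relevance[i[keep]] - relevance[j[keep]])

      ranker = LinearSVC(C=1.0).fit(pairs, labels)       # linear RankSVM in feature space
      scores = Z @ ranker.coef_.ravel()                  # ranking scores for all documents
      print(scores[:5])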

  18. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
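
    For reference, a small sketch of Silverman's rule of thumb for a bandwidth h given a sample of scores x (the exact variant adopted in the paper may differ):

      import numpy as np

      def silverman_bandwidth(x):
          # h = 0.9 * min(sigma, IQR/1.34) * n^(-1/5)
          x = np.asarray(x, dtype=float)
          n = x.size
          sigma = x.std(ddof=1)
          iqr = np.subtract(*np.percentile(x, [75, 25]))
          return 0.9 * min(sigma, iqr / 1.34) * n ** (-1 / 5)

      scores = np.random.default_rng(0).normal(loc=20, scale=5, size=1000)
      print(silverman_bandwidth(scores))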

  19. VNI version 4.1. Simulation of high-energy particle collisions in QCD: Space-time evolution of e{sup +}e{sup {minus}}...A + B collisions with parton-cascades, cluster-hadronization, final-state hadron cascades

    SciTech Connect

    Geiger, K.; Longacre, R.; Srivastava, D.K.

    1999-02-01

    VNI is a general-purpose Monte-Carlo event-generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. It uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme, as well as the development of hadron cascades after hadronization. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time-development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position-space, momentum-space and color-space. The parton-evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi)hard interactions in QCD, involving 2 {r_arrow} 2 parton collisions, 2 {r_arrow} 1 parton fusion processes, and 1 {r_arrow} 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. Finally, the cascading of produced prehadronic clusters and of hadrons includes a multitude of 2 {r_arrow} n processes, and is modeled in parallel to the parton cascade description. This paper gives a brief review of the physics underlying VNI, as well as a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including simple examples), annotates input and control parameters, and discusses output data provided by it.

  20. Sufficient conditions for a memory-kernel master equation

    NASA Astrophysics Data System (ADS)

    Chruściński, Dariusz; Kossakowski, Andrzej

    2016-08-01

    We derive sufficient conditions on the memory kernel governing a nonlocal master equation which guarantee a legitimate (completely positive and trace-preserving) dynamical map. It turns out that these conditions provide natural parametrizations of the dynamical map generalizing the Markovian semigroup. This parametrization is defined by the so-called legitimate pair (a monotonic quantum operation together with a completely positive map), and it is shown that this class of maps covers almost all known examples, from the Markovian semigroup and the semi-Markov evolution up to collision models and their generalizations.
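
    Schematically, and in our notation rather than the paper's, the nonlocal (memory-kernel) master equation in question has the form

      \frac{d\rho(t)}{dt} \;=\; \int_0^t K(t-s)\,\rho(s)\,\mathrm{d}s ,

    and the sufficient conditions single out the kernels K for which the resulting map ρ(0) → ρ(t) is completely positive and trace-preserving.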

  1. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  2. Kernel method for corrections to scaling.

    PubMed

    Harada, Kenji

    2015-07-01

    Scaling analysis, in which one infers scaling exponents and a scaling function in a scaling law from given data, is a powerful tool for determining universal properties of critical phenomena in many fields of science. However, there are corrections to scaling in many cases, and then the inference problem becomes ill-posed by an uncontrollable irrelevant scaling variable. We propose a new kernel method based on Gaussian process regression to fix this problem generally. We test the performance of the new kernel method for some example cases. In all cases, when the precision of the example data increases, inference results of the new kernel method correctly converge. Because there is no limitation in the new kernel method for the scaling function even with corrections to scaling, unlike in the conventional method, the new kernel method can be widely applied to real data in critical phenomena.
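
    As a rough sketch of the main ingredient, assuming the scikit-learn implementation of Gaussian process regression (the paper's own formulation, which also infers the scaling exponents, is more involved):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # GP regression infers a smooth (scaling) function from noisy data
      # without fixing its functional form; data below is synthetic.
      rng = np.random.default_rng(0)
      x = np.linspace(-2, 2, 40)[:, None]                  # rescaled variable
      y = np.tanh(x).ravel() + 0.05 * rng.normal(size=40)  # synthetic "collapsed" data

      kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)

      x_new = np.linspace(-2, 2, 5)[:, None]
      mean, std = gp.predict(x_new, return_std=True)
      print(np.c_[x_new.ravel(), mean, std])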

  3. Transverse Momentum-Dependent Parton Distributions From Lattice QCD

    SciTech Connect

    Michael Engelhardt, Bernhard Musch, Philipp Haegler, Andreas Schaefer

    2012-12-01

    Starting from a definition of transverse momentum-dependent parton distributions for semi-inclusive deep inelastic scattering and the Drell-Yan process, given in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection, a scheme to determine such observables in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are presented, focusing in particular on their dependence on the staple extent and the Collins-Soper evolution parameter.

  4. Two-photon collisions and QCD

    SciTech Connect

    Gunion, J.F.

    1980-05-01

    A critical review of the applications of QCD to low- and high-p/sub T/ interactions of two photons is presented. The advantages of the two-photon high-p/sub T/ tests over corresponding hadronic beam and/or target tests of QCD are given particular emphasis.

  5. Lattice QCD and High Baryon Density State

    SciTech Connect

    Nagata, Keitaro; Nakamura, Atsushi; Motoki, Shinji; Nakagawa, Yoshiyuki; Saito, Takuya

    2011-10-21

    We report our recent studies of finite-density QCD, obtained from lattice QCD simulations with two flavors of clover-improved Wilson fermions and an RG-improved gauge action. We approach the subject along two paths, i.e., imaginary and real chemical potentials.

  6. Solvable models and hidden symmetries in QCD

    SciTech Connect

    Yepez-Martinez, Tochtli; Hess, P. O.; Civitarese, O.; Lerma H., S.

    2010-12-23

    We show that QCD Hamiltonians at low energy exhibit an SU(2) structure when only a few orbital levels are considered. When many orbital levels are taken into account, we also find a semi-analytic solution for the energy levels of the dominant part of the QCD Hamiltonian. The findings are important for proposing the structure of phenomenological models.

  7. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
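
    A rough illustration of the rounding idea only (the mixture and the thresholds below are assumptions of this sketch, not the paper's model): counts are generated by thresholding a latent draw from a continuous mixture.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000

      # Latent mixture of two Gaussians (the continuous kernel mixture)
      comp = rng.random(n) < 0.6
      latent = np.where(comp, rng.normal(1.0, 0.7, n), rng.normal(6.0, 1.5, n))

      # Rounding: negative latents map to count 0, otherwise take the integer part
      counts = np.clip(np.floor(latent), 0, None).astype(int)

      values, freq = np.unique(counts, return_counts=True)
      print(dict(zip(values.tolist(), (freq / n).round(3).tolist())))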

  8. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  9. Holographic QCD for H-dibaryon (uuddss)

    NASA Astrophysics Data System (ADS)

    Suganuma, Hideo; Matsumoto, Kohei

    2017-03-01

    The H-dibaryon (uuddss) is studied in holographic QCD for the first time. In holographic QCD, four-dimensional QCD, i.e., SU(Nc) gauge theory with chiral quarks, can be formulated with an S1-compactified D4/D8/anti-D8-brane system. In holographic QCD with large Nc, all the baryons appear as topological chiral solitons of Nambu-Goldstone bosons and (axial) vector mesons, and the H-dibaryon can be described as an SO(3)-type topological soliton with B = 2. We derive the low-energy effective theory to describe the H-dibaryon in holographic QCD. The H-dibaryon mass is found to be twice the B = 1 hedgehog-baryon mass, M_H ≃ 2.00 M_{B=1}^{HH}, and is estimated to be about 1.7 GeV, which is smaller than the mass of two nucleons (flavor-octet baryons), in the chiral limit.

  10. Consistent Perturbative Fixed Point Calculations in QCD and Supersymmetric QCD.

    PubMed

    Ryttov, Thomas A

    2016-08-12

    We suggest how to consistently calculate the anomalous dimension γ_{*} of the ψ̄ψ operator in finite order perturbation theory at an infrared fixed point for asymptotically free theories. If the n+1 loop beta function and n loop anomalous dimension are known, then γ_{*} can be calculated exactly and fully scheme independently in a Banks-Zaks expansion through O(Δ_{f}^{n}), where Δ_{f}=N̄_{f}-N_{f}, N_{f} is the number of flavors, and N̄_{f} is the number of flavors above which asymptotic freedom is lost. For a supersymmetric theory, the calculation preserves supersymmetry order by order in Δ_{f}. We then compute γ_{*} through O(Δ_{f}^{2}) for supersymmetric QCD in the dimensional reduction scheme and find that it matches the exact known result. We find that γ_{*} is astonishingly well described in perturbation theory already at the few loops level throughout the entire conformal window. We finally compute γ_{*} through O(Δ_{f}^{3}) for QCD and a variety of other nonsupersymmetric fermionic gauge theories. Small values of γ_{*} are observed for a large range of flavors.

  11. Nucleon Structure from Lattice QCD

    SciTech Connect

    Haegler, Philipp

    2011-10-24

    Hadron structure calculations in lattice QCD have seen substantial progress during recent years. We illustrate the achievements that have been made by discussing latest lattice results for a limited number of important observables related to nucleon form factors and generalized parton distributions. A particular focus is placed on the decomposition of the nucleon spin 1/2 in terms of quark spin and orbital angular momentum contributions. Results and limitations of the necessary chiral extrapolations based on ChPT will be briefly discussed.

  12. Tetraquark states from lattice QCD

    SciTech Connect

    Mathur, Nilmani

    2011-10-24

    Recently there has been considerable interest in studying hadronic states beyond the usual two and three quark configurations. With the renewed experimental interest in {sigma}(600) and the inability of the quark model to incorporate too many light scalar mesons, it is quite appropriate to study hadronic states with four quark configurations. Moreover, some of the newly observed charmed hadrons may well be described by four quark configurations. Lattice QCD is perhaps the most desirable tool to adjudicate the theoretical controversy of the scalar mesons and to interpret the structures of the newly observed charmed states. Here we briefly review the lattice studies of four-quark hadrons.

  13. Nuclear Physics from Lattice QCD

    SciTech Connect

    William Detmold, Silas Beane, Konstantinos Orginos, Martin Savage

    2011-01-01

    We review recent progress toward establishing lattice Quantum Chromodynamics as a predictive calculational framework for nuclear physics. A survey of the current techniques that are used to extract low-energy hadronic scattering amplitudes and interactions is followed by a review of recent two-body and few-body calculations by the NPLQCD collaboration and others. An outline of the nuclear physics that is expected to be accomplished with Lattice QCD in the next decade, along with estimates of the required computational resources, is presented.

  14. "Quantum Field Theory and QCD"

    SciTech Connect

    Jaffe, Arthur M.

    2006-02-25

    This grant partially funded a meeting, "QFT & QCD: Past, Present and Future" held at Harvard University, Cambridge, MA on March 18-19, 2005. The participants ranged from senior scientists (including at least 9 Nobel Prize winners, and 1 Fields medalist) to graduate students and undergraduates. There were several hundred persons in attendance at each lecture. The lectures ranged from superlative reviews of past progress, lists of important, unsolved questions, to provocative hypotheses for future discovery. The project generated a great deal of interest on the internet, raising awareness and interest in the open questions of theoretical physics.

  15. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
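
    As a stripped-down sketch of a mixture-density-based kernel of this general flavor (not the AUTOBAYES implementation, and with the Bayesian ensemble averaging omitted): the kernel between two points is taken as the dot product of their cluster-membership probabilities under a fitted Gaussian mixture, which is symmetric and positive semi-definite by construction.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

      gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
      P = gmm.predict_proba(X)         # shape (n_samples, n_components)
      K = P @ P.T                      # symmetric, positive semi-definite kernel matrix

      print(K.shape, np.allclose(K, K.T), np.all(np.linalg.eigvalsh(K) > -1e-10))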

  16. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In implementing these methods for real-world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal polynomials of several variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that with some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.
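
    A quick numerical illustration of the first kind of perturbation discussed above: a strictly positive-definite zonal kernel on the sphere perturbed by small random errors typically remains usable for interpolation, which can be checked by inspecting the smallest eigenvalue of the perturbed Gram matrix. The Gaussian-type kernel, the point count, and the noise level below are illustrative choices, not taken from the paper.

    ```python
    # Illustrative check (not from the paper): does a small random perturbation
    # of a strictly positive-definite zonal kernel keep the Gram matrix invertible?
    import numpy as np

    rng = np.random.default_rng(1)

    # Random points on the unit sphere S^2.
    P = rng.normal(size=(60, 3))
    P /= np.linalg.norm(P, axis=1, keepdims=True)

    # A zonal, strictly positive-definite kernel: k(x, y) = exp(-c * ||x - y||^2).
    def zonal_kernel(X, Y, c=2.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-c * d2)

    K = zonal_kernel(P, P)
    E = rng.normal(scale=1e-3, size=K.shape)
    K_pert = K + 0.5 * (E + E.T)                  # keep the perturbed kernel symmetric

    print("min eig (exact):    ", np.linalg.eigvalsh(K).min())
    print("min eig (perturbed):", np.linalg.eigvalsh(K_pert).min())
    # If the smallest eigenvalue stays positive, interpolation coefficients can
    # still be obtained by solving K_pert @ a = f.
    ```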

  17. The QCD/SM working group: Summary report

    SciTech Connect

    W. Giele et al.

    2004-01-12

    Quantum Chromodynamics (QCD), and more generally the physics of the Standard Model (SM), enter in many ways into high-energy processes at TeV colliders, and especially at hadron colliders (the Tevatron at Fermilab and the forthcoming LHC at CERN). First of all, at hadron colliders, QCD controls the parton luminosity, which rules the production rates of any particle or system with large invariant mass and/or large transverse momentum. Accurate predictions for any signal of possible "New Physics" sought at hadron colliders, as well as for the corresponding backgrounds, require an improvement in the control of uncertainties on the determination of PDFs and of the propagation of these uncertainties into the predictions. Furthermore, to fully exploit these new types of PDFs with uncertainties, uniform tools (computer interfaces, standardization of the PDF evolution codes used by the various groups fitting PDFs) need to be proposed and developed. The dynamics of colour also affects, both in normalization and shape, various observables of the signals of any possible "New Physics" sought at the TeV scale, such as, e.g., the production rate or the distributions in transverse momentum of the Higgs boson. Last, but not least, QCD governs many backgrounds to the searches for this "New Physics". Large and important QCD corrections may come from extra hard parton emission (and the corresponding virtual corrections), involving multi-leg and/or multi-loop amplitudes. This requires complex higher-order calculations, and new methods have to be designed to compute the required multi-leg and/or multi-loop corrections in a tractable form. In the case of semi-inclusive observables, logarithmically enhanced contributions coming from multiple soft and collinear gluon emission require sophisticated QCD resummation techniques. Resummation is a catch-all name for efforts to extend the predictive power of QCD by summing the large logarithmic corrections to all orders in perturbation theory.

  18. Impact of Landscape Topology and Spatial Heterogeneity on the Shape and Parameters of Dispersal Kernels (Invited)

    NASA Astrophysics Data System (ADS)

    Rodriguez-Iturbe, I.; Muneepeerakul, R.; Rinaldo, A.; Levin, S. A.

    2010-12-01

    An evolutionary game theoretic approach is applied to the evolution of dispersal in explicitly spatial metacommunities. It is shown that there exists a strong selective pressure on the shape of the kernel as well as on the mean dispersal distances. The shape, and most importantly the tail structure, are crucially affected by landscape topology, with the optimal dispersal kernels in the river network topology being more stable and having heavier tails than those in the direct (e.g., planar) topology. Spatial heterogeneity is shown to enable spatial coexistence and also controls the spatial distribution of distinct groups of dispersal strategies.

  19. Theta dependence in holographic QCD

    NASA Astrophysics Data System (ADS)

    Bartolini, Lorenzo; Bigazzi, Francesco; Bolognesi, Stefano; Cotrone, Aldo L.; Manenti, Andrea

    2017-02-01

    We study the effects of the CP-breaking topological θ-term in the large-N_c QCD model by Witten, Sakai and Sugimoto with N_f degenerate light flavors. We first compute the ground state energy density, the topological susceptibility and the masses of the lowest-lying mesons, finding agreement with expectations from the QCD chiral effective action. Then, focusing on the N_f = 2 case, we consider the baryonic sector and determine, to leading order in the small-θ regime, the related holographic instantonic soliton solutions. We find that while the baryon spectrum does not receive O(θ) corrections, this is not the case for observables like the electromagnetic form factor of the nucleons. In particular, it exhibits a dipole term, which turns out to be vector-meson dominated. The resulting neutron electric dipole moment, which is exactly opposite to that of the proton, is of the same order of magnitude as previous estimates in the literature. Finally, we compute the CP-violating pion-nucleon coupling constant \bar{g}_{πNN}, finding that it is zero to leading order in the large-N_c limit.

  20. QCD studies in ep collisions

    SciTech Connect

    Smith, W.H.

    1997-06-01

    These lectures describe QCD physics studies over the period 1992-1996 from data taken with collisions of 27 GeV electrons and positrons with 820 GeV protons at the HERA collider at DESY by the two general-purpose detectors H1 and ZEUS. The focus of these lectures is on structure functions and jet production in deep inelastic scattering, photoproduction, and diffraction. The topics covered start with a general introduction to HERA and ep scattering. Structure functions are discussed. This includes the parton model, scaling violation, and the extraction of F_2, which is used to determine the gluon momentum distribution. Both low and high Q^2 regimes are discussed. The low-Q^2 transition from perturbative QCD to soft hadronic physics is examined. Jet production in deep inelastic scattering to measure α_s, and in photoproduction to study resolved and direct photoproduction, is also presented. This is followed by a discussion of diffraction that begins with a general introduction to diffraction in hadronic collisions and its relation to ep collisions, and moves on to deep inelastic scattering, where the structure of diffractive exchange is studied, and to photoproduction, where dijet production provides insights into the structure of the Pomeron. 95 refs., 39 figs.

  1. Relationship between cyanogenic compounds in kernels, leaves, and roots of sweet and bitter kernelled almonds.

    PubMed

    Dicenta, F; Martínez-Gómez, P; Grané, N; Martín, M L; León, A; Cánovas, J A; Berenguer, V

    2002-03-27

    The relationship between the levels of cyanogenic compounds (amygdalin and prunasin) in kernels, leaves, and roots of 5 sweet-, 5 slightly bitter-, and 5 bitter-kernelled almond trees was determined. Variability was observed among the genotypes for these compounds. Prunasin was found only in the vegetative parts (roots and leaves) for all genotypes tested. Amygdalin was detected only in the kernels, mainly in bitter genotypes. In general, bitter-kernelled genotypes had higher levels of prunasin in their roots than nonbitter ones, but the correlation between cyanogenic compounds in the different parts of the plants was not high. While prunasin seems to be present in most almond roots (with variable concentration), only bitter-kernelled genotypes are able to transform it into amygdalin in the kernel. Breeding for prunasin-based resistance to the buprestid beetle Capnodis tenebrionis L. is discussed.

  2. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  3. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  4. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  5. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  6. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  7. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...

  8. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  9. QCD and the BlueGene

    SciTech Connect

    Vranas, P

    2007-06-18

    Quantum Chromodynamics is the theory of nuclear and sub-nuclear physics. It is a celebrated theory, and one of its inventors, F. Wilczek, has termed it '... our most perfect physical theory'. Part of this is related to the fact that QCD can be numerically simulated from first principles using the methods of lattice gauge theory. The computational demands of QCD are enormous and have not only played a role in the history of supercomputers but are also helping define their future. Here I will discuss the intimate relation between QCD and massively parallel supercomputers, with a focus on the Blue Gene supercomputer and QCD thermodynamics. I will present results on the performance of QCD on the Blue Gene as well as physics simulation results of QCD at temperatures high enough that sub-nuclear matter transitions to a plasma state of elementary particles, the quark-gluon plasma. This state of matter is thought to have existed at around 10 microseconds after the big bang. Current heavy-ion experiments are on a quest to reproduce it for the first time since then, and numerical simulations of QCD on Blue Gene systems are calculating the theoretical values of fundamental parameters so that comparisons of experiment and theory can be made.

  10. Kernel-Based Equiprobabilistic Topographic Map Formation.

    PubMed

    Van Hulle MM

    1998-09-15

    We introduce a new unsupervised competitive learning rule, the kernel-based maximum entropy learning rule (kMER), which performs equiprobabilistic topographic map formation in regular, fixed-topology lattices, for use with nonparametric density estimation as well as nonparametric regression analysis. The receptive fields of the formal neurons are overlapping radially symmetric kernels, compatible with radial basis functions (RBFs); but unlike other learning schemes, the radii of these kernels do not have to be chosen in an ad hoc manner: the radii are adapted to the local input density, together with the weight vectors that define the kernel centers, so as to produce maps in which the neurons have an equal probability of being active (equiprobabilistic maps). Both an "online" and a "batch" version of the learning rule are introduced, which are applied to nonparametric density estimation and regression, respectively. The application envisaged is blind source separation (BSS) from nonlinear, noisy mixtures.

  11. Bergman kernel from the lowest Landau level

    NASA Astrophysics Data System (ADS)

    Klevtsov, S.

    2009-07-01

    We use path integral representation for the density matrix, projected on the lowest Landau level, to generalize the expansion of the Bergman kernel on Kähler manifold to the case of arbitrary magnetic field.

  12. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
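
    The abstract does not reproduce the KEM energy expression. As I recall it (an assumption that should be checked against the original KEM papers), the double-kernel approximation reconstructs the total energy from single-kernel and kernel-pair calculations as E ≈ Σ_{a<b} E_ab − (n−2) Σ_a E_a. A toy sketch with made-up energies:

    ```python
    # Hedged sketch of the kernel energy method (KEM) double-kernel sum as
    # recalled here:  E_total ≈ sum over kernel pairs E_ab
    #                           - (n - 2) * sum over single kernels E_a.
    # The energies below are made-up placeholders, not data from the paper.
    from itertools import combinations

    def kem_total_energy(single_energies, pair_energies):
        """single_energies: {a: E_a}; pair_energies: {(a, b): E_ab} with a < b."""
        n = len(single_energies)
        e_pairs = sum(pair_energies[p] for p in combinations(sorted(single_energies), 2))
        e_singles = sum(single_energies.values())
        return e_pairs - (n - 2) * e_singles

    # Toy example: a molecule broken into 3 kernels (units arbitrary).
    E_single = {1: -10.0, 2: -12.0, 3: -9.0}
    E_pair = {(1, 2): -22.5, (1, 3): -19.2, (2, 3): -21.4}
    print(kem_total_energy(E_single, E_pair))
    ```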

  13. KITTEN Lightweight Kernel 0.1 Beta

    SciTech Connect

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne; VanDyke, John; Hudson, Trammell

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) applications based on MPI, PGAS, and OpenMP. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  14. AdS/QCD at finite density and temperature

    SciTech Connect

    Kim, Y.

    2012-07-15

    We review some basics of AdS/QCD following a non-standard path and list a few results from AdS/QCD or holographic QCD. The non-standard path here is to use the analogy of the way one obtains an effective model of QCD like linear sigma model and the procedure to construct an AdS/QCD model based on the AdS/CFT dictionary.

  15. TICK: Transparent Incremental Checkpointing at Kernel Level

    SciTech Connect

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved to a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in Linux version 2.6.5.

  16. Death to perturbative QCD in exclusive processes?

    SciTech Connect

    Eckardt, R.; Hansper, J.; Gari, M.F.

    1994-04-01

    The authors discuss the question of whether perturbative QCD is applicable in calculations of exclusive processes at available momentum transfers. They show that the currently used method of determining hadronic quark distribution amplitudes from QCD sum rules yields wave functions which are completely undetermined because the polynomial expansion diverges. Because of the indeterminacy of the wave functions no statement can be made at present as to whether perturbative QCD is valid. The authors emphasize the necessity of a rigorous discussion of the subject and the importance of experimental data in the range of interest.

  17. The QCD vacuum, hadrons and superdense matter

    SciTech Connect

    Shuryak, E.

    1986-01-01

    This is probably the only textbook available that gathers QCD, many-body theory and phase transitions in one volume. The presentation is pedagogical and readable. Contents: The QCD Vacuum: Introduction; QCD on the Lattice; Topological Effects in Gauge Theories. Correlation Functions and Microscopic Excitations: Introduction; Operator Product Expansion; The Sum Rules beyond OPE; Nonpower Contributions to Correlators and Instantons; Hadronic Spectroscopy on the Lattice. Dense Matter: Hadronic Matter; Asymptotically Dense Quark-Gluon Plasma; Instantons in Matter; Lattice Calculations at Finite Temperature; Phase Transitions. Macroscopic Excitations and Experiments: General Properties of High Energy Collisions; "Barometers", "Thermometers", Interferometric "Microscope"; Experimental Perspectives.

  18. Excited light isoscalar mesons from lattice QCD

    SciTech Connect

    Christopher Thomas

    2011-07-01

    I report a recent lattice QCD calculation of an excited spectrum of light isoscalar mesons, something that has up to now proved challenging for lattice QCD. With novel techniques we extract an extensive spectrum with high statistical precision, including spin-four states and, for the first time, light isoscalars with exotic quantum numbers. In addition, the hidden flavour content of these mesons is determined, providing a window on annihilation dynamics in QCD. I comment on future prospects including applications to the study of resonances.

  19. Shape of mesons in holographic QCD

    SciTech Connect

    Torabian, Mahdi; Yee, Ho-Ung

    2009-10-15

    Based on the expectation that the constituent quark model may capture the right physics in the large N limit, we point out that the orbital angular momentum of the quark-antiquark pair inside light mesons of low spins in the constituent quark model may provide a clue for the holographic dual string model of large N QCD. Our discussion, relying on a few suggestive assumptions, leads to a necessity of world-sheet fermions in the bulk of dual strings that can incorporate intrinsic spins of fundamental QCD degrees of freedom. We also comment on the interesting issue of the size of mesons in holographic QCD.

  20. QCD thermodynamics and missing hadron states

    NASA Astrophysics Data System (ADS)

    Petreczky, Peter

    2016-03-01

    The equation of state and fluctuations of conserved charges in hot, strongly interacting matter are being calculated with increasing accuracy in lattice QCD, and continuum results at physical quark masses are becoming available. At sufficiently low temperature the thermodynamic quantities can be understood in terms of a hadron resonance gas model that includes the known hadrons and hadronic resonances from the Particle Data Book. However, for some quantities it is necessary to include undiscovered hadronic resonances (missing states) that are predicted by the quark model and by lattice QCD studies of the hadron spectrum. Thus, QCD thermodynamics can provide indications for the existence of yet undiscovered hadron states.

  1. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  2. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, where φ(x,y) is an almost-analytic extension of φ(x) = φ(x,x) and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to the Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  3. RKF-PCA: robust kernel fuzzy PCA.

    PubMed

    Heo, Gyeongyong; Gader, Paul; Frigui, Hichem

    2009-01-01

    Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of the data while retaining most of the variation in the data. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA, and the limitation to linear components originates from the fact that PCA uses an affine transform defined by the eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in the RKF-PCA satisfies Mercer's condition, which means that the derivation of the K-PCA is also valid for the RKF-PCA. Formal analyses and experimental results suggest that the RKF-PCA is an efficient non-linear dimension reduction method and is more noise-robust than the original kernel PCA.
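
    For orientation, a minimal sketch of the standard (non-robust) kernel PCA step that RKF-PCA builds on: center the Gram matrix in feature space and take its leading eigenvectors. The RBF kernel, its width, and the data are illustrative; the fuzzy-membership reweighting of the paper is not reproduced here.

    ```python
    # Standard kernel PCA (the non-robust baseline that RKF-PCA extends):
    # center the Gram matrix in feature space, then eigendecompose.
    import numpy as np

    def rbf_gram(X, gamma=0.5):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kernel_pca(K, n_components=2):
        n = K.shape[0]
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one      # double centering
        vals, vecs = np.linalg.eigh(Kc)
        vals, vecs = vals[::-1], vecs[:, ::-1]          # descending order
        alphas = vecs[:, :n_components] / np.sqrt(np.clip(vals[:n_components], 1e-12, None))
        return Kc @ alphas                              # projections of training points

    X = np.random.default_rng(2).normal(size=(100, 5))
    Z = kernel_pca(rbf_gram(X), n_components=2)
    print(Z.shape)   # (100, 2)
    ```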

  4. Phenomenology of Large Nc QCD

    NASA Astrophysics Data System (ADS)

    Lebed, Richard F.

    1999-09-01

    These lectures are designed to introduce the methods and results of large-N_c QCD in a presentation intended for nuclear and particle physicists alike. Beginning with definitions and motivations of the approach, we demonstrate that all quark and gluon Feynman diagrams are organized into classes based on powers of 1/N_c. We then show that this result can be translated into definite statements about mesons and baryons containing arbitrary numbers of constituents. In the mesons, numerous well-known phenomenological properties follow as immediate consequences of simply counting powers of N_c, while for the baryons, quantitative large-N_c analyses of masses and other properties are seen to agree with experiment, even when "large" N_c is set equal to its observed value of 3. Large-N_c reasoning is also used to explain some simple features of nuclear interactions.

  5. QCD tests with polarized beams

    SciTech Connect

    Maruyama, Takashi; SLD Collaboration

    1996-09-01

    The authors present three QCD studies performed by the SLD experiment at SLAC, utilizing the highly polarized SLC electron beam. They examined particle production differences in light quark and antiquark hemispheres, and observed more high-momentum baryons and K^-'s than antibaryons and K^+'s in quark hemispheres, consistent with the leading particle hypothesis. They performed a search for jet handedness in light q- and q̄-jets. Assuming Standard Model values of quark polarization in Z^0 decays, they have set an improved upper limit on the analyzing power of the handedness method. They studied the correlation between the Z^0 spin and the event-plane orientation in polarized Z^0 decays into three jets.

  6. Heavy Quarks, QCD, and Effective Field Theory

    SciTech Connect

    Thomas Mehen

    2012-10-09

    The research supported by this OJI award is in the area of heavy quark and quarkonium production, especially the application of Soft-Collinear Effective Theory (SCET) to the hadronic production of quarkonia. SCET is an effective theory which allows one to derive factorization theorems and perform all-order resummations for QCD processes. Factorization theorems allow one to separate the various scales entering a QCD process and, in particular, to separate perturbative scales from nonperturbative scales. The perturbative physics can then be calculated using QCD perturbation theory. Universal functions with precise field-theoretic definitions describe the nonperturbative physics. In addition, higher-order perturbative QCD corrections that are enhanced by large logarithms can be resummed using the renormalization group equations of SCET. This project applies SCET to the physics of heavy quarks, heavy quarkonium, and similar particles.

  7. Opportunities, challenges, and fantasies in lattice QCD

    NASA Astrophysics Data System (ADS)

    Wilczek, Frank

    2003-05-01

    Some important problems in quantitative QCD will certainly yield to hard work and adequate investment of resources, others appear difficult but may be accessible, and still others will require essentially new ideas. Here I identify several examples in each class.

  8. Simplifying Multi-Jet QCD Computation

    SciTech Connect

    Peskin, Michael E.; /SLAC

    2011-11-04

    These lectures give a pedagogical discussion of the computation of QCD tree amplitudes for collider physics. The tools reviewed are spinor products, color ordering, MHV amplitudes, and the Britto-Cachazo-Feng-Witten recursion formula.

  9. Towards a theoretical description of dense QCD

    NASA Astrophysics Data System (ADS)

    Philipsen, Owe

    2017-03-01

    The properties of matter at finite baryon densities play an important role for the astrophysics of compact stars as well as for heavy ion collisions or the description of nuclear matter. Because of the sign problem of the quark determinant, lattice QCD cannot be simulated by standard Monte Carlo at finite baryon densities. I review alternative attempts to treat dense QCD with an effective lattice theory derived by analytic strong coupling and hopping expansions, which close to the continuum is valid for heavy quarks only, but shows all qualitative features of nuclear physics emerging from QCD. In particular, the nuclear liquid gas transition and an equation of state for baryons can be calculated directly from QCD. A second effective theory based on strong coupling methods permits studies of the phase diagram in the chiral limit on coarse lattices.

  10. Scheme variations of the QCD coupling

    NASA Astrophysics Data System (ADS)

    Boito, Diogo; Jamin, Matthias; Miravitllas, Ramon

    2017-03-01

    The Quantum Chromodynamics (QCD) coupling αs is a central parameter in the Standard Model of particle physics. However, it depends on theoretical conventions related to renormalisation and hence is not an observable quantity. In order to capture this dependence in a transparent way, a novel definition of the QCD coupling, denoted by â, is introduced, whose running is explicitly renormalisation scheme invariant. The remaining renormalisation scheme dependence is related to transformations of the QCD scale Λ, and can be parametrised by a single parameter C. Hence, we call â the C-scheme coupling. The dependence on C can be exploited to study and improve perturbative predictions of physical observables. This is demonstrated for the QCD Adler function and hadronic decays of the τ lepton.

  11. Excited light meson spectroscopy from lattice QCD

    SciTech Connect

    Christopher Thomas, Hadron Spectrum Collaboration

    2012-04-01

    I report on recent progress in calculating excited meson spectra using lattice QCD, emphasizing results and phenomenology. With novel techniques we can now extract extensive spectra of excited mesons with high statistical precision, including spin-four states and those with exotic quantum numbers. As well as isovector meson spectra, I will present new calculations of the spectrum of excited light isoscalar mesons, something that has up to now been a challenge for lattice QCD. I show determinations of the flavor content of these mesons, including the eta-eta' mixing angle, providing a window on annihilation dynamics in QCD. I will also discuss recent work on using lattice QCD to map out the energy-dependent phase shift in pi-pi scattering and future applications of the methodology to the study of resonances and decays.

  12. Strange Baryon Physics in Full Lattice QCD

    SciTech Connect

    Huey-Wen Lin

    2007-11-01

    Strange baryon spectra and form factors are key probes to study excited nuclear matter. The use of lattice QCD allows us to test the strength of the Standard Model by calculating strange baryon quantities from first principles.

  13. Superfluid helium II as the QCD vacuum

    NASA Astrophysics Data System (ADS)

    Zhitnitsky, Ariel

    2017-03-01

    We study the winding number susceptibility in a superfluid system and the topological susceptibility in QCD. We argue that both correlation functions exhibit similar structures, including the generation of the contact terms. We discuss the nature of the contact term in superfluid system and argue that it has exactly the same origin as in QCD, and it is related to the long distance physics which cannot be associated with conventional microscopical degrees of freedom such as phonons and rotons. We emphasize that the conceptual similarities between superfluid system and QCD may lead, hopefully, to a deeper understanding of the topological features of a superfluid system as well as the QCD vacuum.

  14. QCD for Postgraduates (4/5)

    ScienceCinema

    None

    2016-07-12

    Modern QCD - Lecture 4 We will consider some processes of interest at the LHC and will discuss the main elements of their cross-section calculations. We will also summarize the current status of higher order calculations.

  15. Novel QCD effects in nuclear collisions

    SciTech Connect

    Brodsky, S.J.

    1991-12-01

    Heavy ion collisions can provide a novel environment for testing fundamental dynamical processes in QCD, including minijet formation and interactions, formation zone phenomena, color filtering, coherent co-mover interactions, and new higher twist mechanisms which could account for the observed excess production and anomalous nuclear target dependence of heavy flavor production. The possibility of using light-cone thermodynamics and a corresponding covariant temperature to describe the QCD phases of the nuclear fragmentation region is also briefly discussed.

  16. Some new/old approaches to QCD

    SciTech Connect

    Gross, D.J.

    1992-11-01

    In this lecture I shall discuss some recent attempts to revive some old ideas to address the problem of solving QCD. I believe that it is timely to return to this problem, which has been woefully neglected for the last decade. QCD is a permanent part of the theoretical landscape and eventually we will have to develop analytic tools for dealing with the theory in the infra-red. Lattice techniques are useful but they have not yet lived up to their promise. Even if one manages to derive the hadronic spectrum numerically, to an accuracy of 10% or even 1%, we will not be truly satisfied unless we have some analytic understanding of the results. Also, lattice Monte-Carlo methods can only be used to answer a small set of questions. Many issues of great conceptual and practical interest, in particular the calculation of scattering amplitudes, are thus far beyond lattice control. Any progress in controlling QCD in an explicit, analytic fashion would be of great conceptual value. It would also be of great practical aid to experimentalists, who must use rather ad-hoc and primitive models of QCD scattering amplitudes to estimate the backgrounds to interesting new physics. I will discuss an attempt to derive a string representation of QCD and a revival of the large N approach to QCD. Both of these ideas have a long history, and many theorist-years have been devoted to their pursuit, so far with little success. I believe that it is time to try again. In part this is because of the progress in the last few years in string theory. Our increased understanding of string theory should make the attempt to discover a stringy representation of QCD easier, and the methods explored in matrix models might be employed to study the large N limit of QCD.

  17. Lattice and Phase Diagram in QCD

    SciTech Connect

    Lombardo, Maria Paola

    2008-10-13

    Model calculations have produced a number of very interesting expectations for the QCD phase diagram, and the task of lattice calculations is to put these studies on quantitative grounds. I will give an overview of the current status of the lattice analysis of the QCD phase diagram, from the quantitative results of mature calculations at zero and small baryochemical potential, to the exploratory studies of the colder, denser phase.

  18. Some New/Old Approaches to QCD

    DOE R&D Accomplishments Database

    Gross, D. J.

    1992-11-01

    In this lecture I shall discuss some recent attempts to revive some old ideas to address the problem of solving QCD. I believe that it is timely to return to this problem, which has been woefully neglected for the last decade. QCD is a permanent part of the theoretical landscape and eventually we will have to develop analytic tools for dealing with the theory in the infra-red. Lattice techniques are useful but they have not yet lived up to their promise. Even if one manages to derive the hadronic spectrum numerically, to an accuracy of 10% or even 1%, we will not be truly satisfied unless we have some analytic understanding of the results. Also, lattice Monte-Carlo methods can only be used to answer a small set of questions. Many issues of great conceptual and practical interest, in particular the calculation of scattering amplitudes, are thus far beyond lattice control. Any progress in controlling QCD in an explicit, analytic fashion would be of great conceptual value. It would also be of great practical aid to experimentalists, who must use rather ad-hoc and primitive models of QCD scattering amplitudes to estimate the backgrounds to interesting new physics. I will discuss an attempt to derive a string representation of QCD and a revival of the large N approach to QCD. Both of these ideas have a long history, and many theorist-years have been devoted to their pursuit, so far with little success. I believe that it is time to try again. In part this is because of the progress in the last few years in string theory. Our increased understanding of string theory should make the attempt to discover a stringy representation of QCD easier, and the methods explored in matrix models might be employed to study the large N limit of QCD.

  19. QCD and hard diffraction at the LHC

    SciTech Connect

    Albrow, Michael G.; /Fermilab

    2005-09-01

    As an introduction to QCD at the LHC, the author gives an overview of QCD at the Tevatron, emphasizing the high-Q^2 frontier which will be taken over by the LHC. After briefly describing the LHC detectors, the author discusses high-mass diffraction, in particular central exclusive production of Higgs and vector boson pairs. The author introduces the FP420 project to measure the scattered protons 420 m downstream of ATLAS and CMS.

  20. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
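
    A minimal sketch of the basic idea, under assumptions not taken from the paper: build a graph diffusion kernel from the Laplacian and estimate the signal at unobserved vertices by kernel ridge regression via the representer theorem. The random graph, kernel choice, bandwidth, and regularization strength are illustrative; the paper's multi-kernel machinery is omitted.

    ```python
    # Kernel-based reconstruction of a graph signal (illustrative sketch):
    # diffusion kernel K = expm(-sigma * L) plus kernel ridge regression
    # restricted to the observed vertices (representer theorem).
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)

    # Random undirected graph on n vertices.
    n = 50
    A = (rng.random((n, n)) < 0.1).astype(float)
    A = np.triu(A, 1); A = A + A.T
    L = np.diag(A.sum(1)) - A                      # combinatorial Laplacian

    K = expm(-0.5 * L)                             # diffusion kernel (sigma = 0.5)

    # Smooth ground-truth signal and noisy observations on a vertex subset.
    f_true = np.linalg.eigh(L)[1][:, 1]            # a low-frequency eigenvector
    obs = rng.choice(n, size=20, replace=False)
    y = f_true[obs] + 0.05 * rng.normal(size=obs.size)

    lam = 1e-2
    alpha = np.linalg.solve(K[np.ix_(obs, obs)] + lam * np.eye(obs.size), y)
    f_hat = K[:, obs] @ alpha                      # estimate on every vertex

    print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    ```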

  1. Oecophylla longinoda (Hymenoptera: Formicidae) Lead to Increased Cashew Kernel Size and Kernel Quality.

    PubMed

    Anato, F M; Sinzogan, A A C; Offenberg, J; Adandonon, A; Wargui, R B; Deguenon, J M; Ayelo, P M; Vayssières, J-F; Kossou, D K

    2017-03-03

    Weaver ants, Oecophylla spp., are known to positively affect cashew, Anacardium occidentale L., raw nut yield, but their effects on the kernels have not been reported. We compared nut size and the proportion of marketable kernels between raw nuts collected from trees with and without ants. Raw nuts collected from trees with weaver ants were 2.9% larger than nuts from control trees (i.e., without weaver ants), leading to 14% higher proportion of marketable kernels. On trees with ants, the kernel: raw nut ratio from nuts damaged by formic acid was 4.8% lower compared with nondamaged nuts from the same trees. Weaver ants provided three benefits to cashew production by increasing yields, yielding larger nuts, and by producing greater proportions of marketable kernel mass.

  2. A new Mercer sigmoid kernel for clinical data classification.

    PubMed

    Carrington, André M; Fieguth, Paul W; Chen, Helen H

    2014-01-01

    In classification with Support Vector Machines, only Mercer kernels, i.e. valid kernels, such as the Gaussian RBF kernel, are widely accepted and thus suitable for clinical data. Practitioners would also like to use the sigmoid kernel, a non-Mercer kernel, but its range of validity is difficult to determine, and even within range its validity is in dispute. Despite these shortcomings the sigmoid kernel is used by some, and two kernels in the literature attempt to emulate and improve upon it. We propose the first Mercer sigmoid kernel, which is therefore trustworthy for the classification of clinical data. We show the similarity between the Mercer sigmoid kernel and the sigmoid kernel and, in the process, identify a normalization technique that improves the classification accuracy of the latter. The Mercer sigmoid kernel achieves the best mean accuracy on three clinical data sets, detecting melanoma in skin lesions better than the most popular kernels; while with non-clinical data sets it has no significant difference in median accuracy as compared with the Gaussian RBF kernel. It consistently classifies some points correctly that the Gaussian RBF kernel does not and vice versa.
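
    The paper's Mercer sigmoid kernel is not given in the abstract and is not reproduced here; the sketch below only illustrates the validity problem it addresses, by checking whether the Gram matrix of the classical tanh (sigmoid) kernel is positive semidefinite for a particular, illustrative parameter choice.

    ```python
    # Illustration of why the classical sigmoid kernel is not a Mercer kernel:
    # its Gram matrix can have negative eigenvalues.  (The paper's Mercer
    # sigmoid kernel is not reproduced here.)
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(80, 10))

    def sigmoid_kernel(X, a=1.0, c=-1.0):
        return np.tanh(a * X @ X.T + c)

    def gaussian_rbf_kernel(X, gamma=0.1):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    for name, K in [("tanh sigmoid", sigmoid_kernel(X)),
                    ("Gaussian RBF", gaussian_rbf_kernel(X))]:
        print(name, "min eigenvalue:", np.linalg.eigvalsh(K).min())
    # The RBF Gram matrix is positive semidefinite up to round-off; the sigmoid
    # Gram matrix generally is not, which is the validity issue discussed above.
    ```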

  3. Windows on the axion. [quantum chromodynamics (QCD)

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.

    1989-01-01

    Peccei-Quinn symmetry with attendant axion is a most compelling, and perhaps the most minimal, extension of the standard model, as it provides a very elegant solution to the nagging strong CP-problem associated with the theta vacuum structure of QCD. However, particle physics gives little guidance as to the axion mass; a priori, the plausible values span the range 10^-12 eV ≲ m_a ≲ 10^6 eV, some 18 orders of magnitude. Laboratory experiments have excluded masses greater than 10^4 eV, leaving unprobed some 16 orders of magnitude. Axions have a host of interesting astrophysical and cosmological effects, including modifying the evolution of stars of all types (our sun, red giants, white dwarfs, and neutron stars), contributing significantly to the mass density of the Universe today, and producing detectable line radiation through the decays of relic axions. Consideration of these effects has probed 14 orders of magnitude in axion mass, and has left open only two windows for further exploration: 10^-6 eV ≲ m_a ≲ 10^-3 eV and 1 eV ≲ m_a ≲ 5 eV (hadronic axions only). Both these windows are accessible to experiment, and a variety of very interesting experiments, all of which involve heavenly axions, are being planned or are underway.

  4. Kernel bandwidth optimization in spike rate estimation.

    PubMed

    Shimazaki, Hideaki; Shinomoto, Shigeru

    2010-08-01

    Kernel smoothers and time histograms are classical tools for estimating an instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time histogram, based on the principle of minimizing the mean integrated square error (MISE) between the estimated rate and the unknown underlying rate. Here we apply the same optimization principle to kernel density estimation in selecting the width or "bandwidth" of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with the data. The variable kernel has the potential to accurately grasp non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. In order to avoid possible overfitting that may take place due to excessive freedom, we introduced a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. It is revealed that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for a given set of spike data, according to the optimization methods presented here.
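
    The MISE-based selection rule of the paper is not reproduced here. As a generic stand-in, the sketch below picks a fixed Gaussian kernel bandwidth for a spike-rate estimate by leave-one-out likelihood cross-validation over a grid; the spike times, the grid, and the bandwidth range are synthetic and illustrative.

    ```python
    # Generic stand-in for bandwidth selection (not the authors' MISE rule):
    # choose the Gaussian kernel width by leave-one-out likelihood
    # cross-validation, then form the rate estimate lambda(t) = sum_i N(t - t_i; w).
    import numpy as np

    rng = np.random.default_rng(5)
    spikes = np.sort(rng.uniform(0.0, 10.0, size=300))   # synthetic spike times (s)

    def loo_log_likelihood(t, w):
        d = t[:, None] - t[None, :]
        G = np.exp(-0.5 * (d / w) ** 2) / (np.sqrt(2 * np.pi) * w)
        np.fill_diagonal(G, 0.0)                          # leave one out
        return np.sum(np.log(G.sum(1) / (t.size - 1)))

    widths = np.logspace(-2, 0, 30)
    w_opt = widths[np.argmax([loo_log_likelihood(spikes, w) for w in widths])]

    grid = np.linspace(0.0, 10.0, 500)
    rate = np.exp(-0.5 * ((grid[:, None] - spikes[None, :]) / w_opt) ** 2).sum(1) \
           / (np.sqrt(2 * np.pi) * w_opt)                 # spikes per second
    print("selected bandwidth:", w_opt, " mean rate:", rate.mean())
    ```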

  5. Online Sequential Extreme Learning Machine With Kernels.

    PubMed

    Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio

    2015-09-01

    The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets.
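
    KOS-ELM itself is not specified in the abstract; the sketch below only shows the underlying idea of kernel recursive least-squares in its simplest un-sparsified form, growing the regularized Gram-matrix inverse one sample at a time with a block (Schur-complement) update. The kernel, its parameters, and the toy data are assumptions; the sparsification criteria from the KAF literature are omitted.

    ```python
    # Un-sparsified online kernel ridge regression: when sample (x_t, y_t)
    # arrives, grow (K + lam*I)^(-1) by one row/column using the Schur
    # complement, then refresh the dual weights alpha = (K + lam*I)^(-1) y.
    import numpy as np

    def rbf(x, y, gamma=0.5):
        return np.exp(-gamma * np.sum((x - y) ** 2))

    class OnlineKernelRLS:
        def __init__(self, lam=1e-2, gamma=0.5):
            self.lam, self.gamma = lam, gamma
            self.X, self.y, self.Ainv, self.alpha = [], [], None, None

        def update(self, x, y):
            if self.Ainv is None:
                self.Ainv = np.array([[1.0 / (rbf(x, x, self.gamma) + self.lam)]])
            else:
                k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
                d = rbf(x, x, self.gamma) + self.lam
                Ak = self.Ainv @ k
                s = d - k @ Ak                       # Schur complement (> 0)
                top = self.Ainv + np.outer(Ak, Ak) / s
                self.Ainv = np.block([[top, -Ak[:, None] / s],
                                      [-Ak[None, :] / s, np.array([[1.0 / s]])]])
            self.X.append(x); self.y.append(y)
            self.alpha = self.Ainv @ np.array(self.y)

        def predict(self, x):
            k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
            return float(k @ self.alpha)

    model = OnlineKernelRLS()
    rng = np.random.default_rng(6)
    for _ in range(200):
        x = rng.uniform(-3, 3, size=1)
        model.update(x, np.sin(x[0]) + 0.1 * rng.normal())
    print(model.predict(np.array([1.0])))            # roughly sin(1) ~ 0.84
    ```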

  6. Recent progress in backreacted bottom-up holographic QCD

    SciTech Connect

    Järvinen, Matti

    2016-01-22

    Recent progress in constructing holographic models for QCD is discussed, concentrating on the bottom-up models which implement holographically the renormalization group flow of QCD. The dynamics of gluons can be modeled by using a string-inspired model termed improved holographic QCD, and flavor can be added by introducing space filling branes in this model. The flavor fully backreacts to the glue in the Veneziano limit, giving rise to a class of models which are called V-QCD. The phase diagrams and spectra of V-QCD are in good agreement with results for QCD obtained by other methods.

  7. The connection between regularization operators and support vector kernels.

    PubMed

    Smola, Alex J.; Schölkopf, Bernhard; Müller, Klaus Robert

    1998-06-01

    In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels in view of regularization theory and the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodical domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.
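
    For reference, the correspondence described above can be stated compactly; the form below is written from memory and should be checked against the paper. A kernel k is admissible for a regularization operator P when it is a Green's function of P*P, in which case the regularization term reproduces the kernel expansion's quadratic form:

    ```latex
    % From memory (check against the paper): k is a valid SV kernel for the
    % regularization operator P when k is a Green's function of P^* P, i.e.
    \big(P^{*}P\,k\big)(x,\cdot) = \delta_{x}(\cdot),
    \qquad\text{so that}\qquad
    \big\langle (Pk)(x,\cdot),\,(Pk)(y,\cdot)\big\rangle = k(x,y),
    % and for f(x) = \sum_i \alpha_i k(x_i, x) the regularizer becomes
    \|Pf\|^{2} \;=\; \sum_{i,j}\alpha_i\,\alpha_j\,k(x_i,x_j).
    ```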

  8. Simple analytic QCD model with perturbative QCD behavior at high momenta

    SciTech Connect

    Contreras, Carlos; Espinosa, Olivier; Cvetic, Gorazd; Martinez, Hector E.

    2010-10-01

    Analytic QCD models are those where the QCD running coupling has the physically correct analytic behavior, i.e., no Landau singularities in the Euclidean regime. We present a simple analytic QCD model in which the discontinuity function of the running coupling at high momentum scales is the same as in perturbative QCD (just like in the analytic QCD model of Shirkov and Solovtsov), but at low scales it is replaced by a delta function which parametrizes the unknown behavior there. We require that the running coupling agree to a high degree with the perturbative coupling at high energies, which reduces the number of free parameters of the model from four to one. The remaining parameter is fixed by requiring the reproduction of the correct value of the semihadronic tau decay ratio.
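
    The construction just described can be written schematically as a dispersion relation over the timelike discontinuity, with the low-scale part of the spectral function replaced by a single delta function. The normalization conventions and the parameter names (M_0, M_r, f_0) below are illustrative assumptions, not the paper's notation:

    ```latex
    % Schematic only; conventions and parameter names are assumptions.
    \mathcal{A}(Q^{2}) \;=\; \frac{1}{\pi}\int_{0}^{\infty}
       \frac{\rho(\sigma)}{\sigma+Q^{2}}\,d\sigma ,
    \qquad
    \rho(\sigma) \;=\; \rho_{\rm pt}(\sigma)\,\theta(\sigma-M_{0}^{2})
       \;+\; \pi f_{0}\,\delta(\sigma-M_{r}^{2}),
    % where rho_pt is the perturbative discontinuity and the low-scale
    % parameters are fixed by matching to perturbative QCD at high energies
    % and to the semihadronic tau decay ratio, as described in the abstract.
    ```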

  9. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
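
    As a concrete illustration of the quadratic (Renyi order-2) entropy mentioned above: with a Gaussian kernel density estimate, the integral of the squared density collapses to a double sum over Gaussians of doubled variance, so no numerical integration is needed. The bandwidth and the two synthetic interval series below are illustrative choices, not data from the paper.

    ```python
    # Quadratic Renyi entropy H2 = -log( integral of p^2 ) with a Gaussian KDE:
    # the integral reduces to a double sum of Gaussians with variance 2*h^2.
    import numpy as np

    def quadratic_entropy(x, h):
        d = x[:, None] - x[None, :]
        info_potential = np.mean(np.exp(-d ** 2 / (4 * h ** 2))) / np.sqrt(4 * np.pi * h ** 2)
        return -np.log(info_potential)

    rng = np.random.default_rng(7)
    rr_regular = 0.8 + 0.02 * rng.normal(size=500)        # regular RR intervals (s)
    rr_af_like = rng.uniform(0.4, 1.2, size=500)          # erratic, AF-like intervals

    h = 0.02
    print("regular rhythm H2:", quadratic_entropy(rr_regular, h))
    print("AF-like rhythm H2:", quadratic_entropy(rr_af_like, h))
    # The erratic series has the larger H2, in line with the atrial
    # fibrillation detection application mentioned above.
    ```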

  10. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)^2). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  11. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)^2). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
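
    For orientation only: the naive O(n^2) sampler for an inhomogeneous random (kernel) graph, which is the baseline the near-O(n (log n)^2) algorithm above improves upon. The particular kernel producing a heavy-tailed degree sequence is an illustrative choice, not the paper's example.

    ```python
    # Naive O(n^2) sampler for a random kernel graph: connect i and j with
    # probability min(1, kappa(x_i, x_j) / n).  The paper's algorithm samples
    # the same ensemble in roughly O(n (log n)^2) time; this is the baseline.
    import numpy as np

    def sample_kernel_graph(n, kappa, rng):
        x = np.arange(1, n + 1) / n                   # vertex "types" on (0, 1]
        edges = []
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < min(1.0, kappa(x[i], x[j]) / n):
                    edges.append((i, j))
        return edges

    # Illustrative kernel giving a heavy-tailed degree sequence (Chung-Lu-like).
    kappa = lambda u, v: 0.2 / (u * v) ** 0.5

    rng = np.random.default_rng(8)
    edges = sample_kernel_graph(2000, kappa, rng)
    deg = np.bincount(np.array(edges).ravel(), minlength=2000)
    print("edges:", len(edges), "max degree:", deg.max())
    ```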

  12. Phenolic constituents of shea (Vitellaria paradoxa) kernels.

    PubMed

    Maranz, Steven; Wiesman, Zeev; Garti, Nissim

    2003-10-08

    Analysis of the phenolic constituents of shea (Vitellaria paradoxa) kernels by LC-MS revealed eight catechin compounds (gallic acid, catechin, epicatechin, epicatechin gallate, gallocatechin, epigallocatechin, gallocatechin gallate, and epigallocatechin gallate) as well as quercetin and trans-cinnamic acid. The mean kernel content of the eight catechin compounds was 4000 ppm (0.4% of kernel dry weight), with a 2100-9500 ppm range. Comparison of the profiles of the six major catechins from 40 Vitellaria provenances from 10 African countries showed that the relative proportions of these compounds varied from region to region. Gallic acid was the major phenolic compound, comprising an average of 27% of the measured total phenols and exceeding 70% in some populations. Colorimetric analysis (101 samples) of total polyphenols extracted from shea butter into hexane gave an average of 97 ppm, with the values for different provenances varying between 62 and 135 ppm of total polyphenols.

  13. Tile-Compressed FITS Kernel for IRAF

    NASA Astrophysics Data System (ADS)

    Seaman, R.

    2011-07-01

    The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.

  14. Fractal Weyl law for Linux Kernel architecture

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65 that corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with the fractal dimension d < 2.
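
    A minimal sketch of the construction being analyzed (directed call graph, Google matrix, spectrum); the random directed graph below merely stands in for the Linux Kernel call network, and the damping factor is the conventional 0.85.

    ```python
    # Google matrix of a directed network and its spectrum (illustrative stand-in
    # for the Linux Kernel call graph analyzed above).
    import numpy as np

    rng = np.random.default_rng(9)
    n = 300
    A = (rng.random((n, n)) < 0.02).astype(float)     # directed adjacency
    np.fill_diagonal(A, 0.0)

    # Column-stochastic matrix S: normalize outgoing links; dangling nodes
    # (no outgoing links) are replaced by a uniform column.
    out_deg = A.sum(axis=1)
    S = np.where(out_deg[:, None] > 0, A / np.maximum(out_deg[:, None], 1), 1.0 / n).T

    alpha = 0.85
    G = alpha * S + (1.0 - alpha) / n                 # Google matrix

    eig = np.linalg.eigvals(G)
    print("leading eigenvalue:", np.max(np.abs(eig)))  # = 1 (Perron-Frobenius)
    print("second largest |lambda|:", np.sort(np.abs(eig))[-2])
    # The fractal Weyl analysis above concerns how the number of eigenvalues
    # with |lambda| above a threshold scales with the size of the network.
    ```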

  15. QCD explanation of oscillating hadron and jet multiplicity moments

    NASA Astrophysics Data System (ADS)

    Buican, M. A.; Förster, C.; Ochs, W.

    2003-10-01

    Ratios of multiplicity moments, H_q (cumulant over factorial moments, K_q/F_q), have been observed to show an oscillatory behavior with respect to the order q. Recent studies of e^+e^- annihilations at LEP have shown, moreover, that the amplitude and oscillation length vary strongly with the jet resolution parameter y_cut. We study the predictions of the perturbative QCD parton cascade assuming a low non-perturbative cut-off (Q_0 ∼ Λ_QCD ∼ a few hundred MeV) and derive the expectations as a function of the CMS energy and jet resolution from threshold to very high energies. We consider numerical solutions of the evolution equations of gluodynamics in the double logarithmic and modified leading logarithmic approximations (DLA, MLLA), as well as results from a parton MC with readjusted parameters. The main characteristics are obtained in MLLA, while a more numerically accurate description is obtained by the MC model. A unified description of correlations between hadrons and correlations between jets emerges, in particular for the transition region of small y_cut.
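
    To make the observable concrete: given a multiplicity distribution P(n), the normalized factorial moments F_q and cumulants K_q, and hence H_q = K_q/F_q, follow from the standard moment-cumulant recursion (the recursion below is the textbook relation; the distribution used here is an illustrative negative binomial, not LEP data, and the oscillation pattern itself depends on the data or on the QCD cascade).

    ```python
    # H_q = K_q / F_q from a multiplicity distribution P(n), using the standard
    # recursion between (unnormalized) factorial moments f_q and cumulants k_q:
    #   f_q = sum_{m=1..q} C(q-1, m-1) * k_m * f_{q-m},  with f_0 = 1.
    import numpy as np
    from math import comb
    from scipy.stats import nbinom

    def hq_moments(P, qmax):
        n = np.arange(len(P))
        # Unnormalized factorial moments f_q = <n(n-1)...(n-q+1)>.
        f = [1.0]
        for q in range(1, qmax + 1):
            falling = np.ones_like(n, dtype=float)
            for r in range(q):
                falling *= (n - r)
            f.append(float(np.sum(P * falling)))
        # Factorial cumulants k_q from the recursion above.
        k = [0.0]
        for q in range(1, qmax + 1):
            k.append(f[q] - sum(comb(q - 1, m - 1) * k[m] * f[q - m] for m in range(1, q)))
        nbar = f[1]
        F = [f[q] / nbar ** q for q in range(1, qmax + 1)]
        K = [k[q] / nbar ** q for q in range(1, qmax + 1)]
        return [K[i] / F[i] for i in range(qmax)]

    # Example: a negative-binomial multiplicity distribution (illustrative).
    P = nbinom.pmf(np.arange(200), 5, 5 / (5 + 20.0))   # k = 5, <n> = 20
    print(hq_moments(P, qmax=6))
    ```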

  16. Chiral magnetic wave in an expanding QCD fluid

    NASA Astrophysics Data System (ADS)

    Taghavi, Seyed Farid; Wiedemann, Urs Achim

    2015-02-01

    As a consequence of the chiral anomaly, the hydrodynamics of hot quantum chromodynamics (QCD) matter coupled to quantum electrodynamics allows for a long-wavelength mode of chiral charge density, the chiral magnetic wave (CMW), that provides a mechanism of electric charge separation along the direction of an external magnetic field. Here, we investigate the efficiency of this mechanism for the values of the time-dependent magnetic field and of the energy density attained in the hot QCD matter of ultrarelativistic heavy-ion collisions. To this end, we derive the CMW equations of motion for expanding systems by treating the CMW as a charge perturbation on top of an expanding Bjorken-type background field in the limit μ/T ≪ 1. Both approximate analytical and full numerical solutions to these equations of motion indicate that, for the lifetime and thermodynamic conditions of ultrarelativistic heavy-ion collisions, the efficiency of CMW-induced electric charge separation decreases with increasing center-of-mass energy and that the effect is numerically very small. We note, however, that if sizable oriented asymmetries in the axial charge distribution (that are not induced by the CMW) are present in the early fluid dynamic evolution, then the mechanism of CMW-induced electric charge separation can be much more efficient.

  17. QCD as a topologically ordered system

    SciTech Connect

    Zhitnitsky, Ariel R.

    2013-09-15

    We argue that QCD belongs to a topologically ordered phase similar to many well-known condensed matter systems with a gap, such as topological insulators or superconductors. Our arguments are based on an analysis of the so-called “deformed QCD”, which is a weakly coupled gauge theory but nevertheless preserves all the crucial elements of strongly interacting QCD, including confinement, nontrivial θ dependence, degeneracy of the topological sectors, etc. Specifically, we construct the so-called topological “BF” action which reproduces the well known infrared features of the theory, such as the non-dispersive contribution to the topological susceptibility which cannot be associated with any propagating degrees of freedom. Furthermore, we interpret the well known resolution of the celebrated U(1)_A problem, in which the would-be η′ Goldstone boson generates its mass as a result of mixing of the Goldstone field with a topological auxiliary field characterizing the system. We then identify the non-propagating auxiliary topological field of the BF formulation in deformed QCD with the Veneziano ghost (which plays the crucial role in the resolution of the U(1)_A problem). Finally, we elaborate on the relation between “string-net” condensation in topologically ordered condensed matter systems and long range coherent configurations, the “skeletons”, studied in QCD lattice simulations. -- Highlights: •QCD may belong to a topologically ordered phase similar to condensed matter (CM) systems. •We identify the non-propagating topological field in deformed QCD with the Veneziano ghost. •The relation between “string-net” condensates in CM systems and the “skeletons” in QCD lattice simulations is studied.

  18. Hadronic and nuclear interactions in QCD

    SciTech Connect

    Not Available

    1982-01-01

    Despite the evidence that QCD - or something close to it - gives a correct description of the structure of hadrons and their interactions, it seems paradoxical that the theory has thus far had very little impact in nuclear physics. One reason for this is that the application of QCD to distances larger than 1 fm involves coherent, non-perturbative dynamics which is beyond present calculational techniques. For example, in QCD the nuclear force can evidently be ascribed to quark interchange and gluon exchange processes. These, however, are as complicated to analyze from a fundamental point of view as is the analogous covalent bond in molecular physics. Since a detailed description of quark-quark interactions and the structure of hadronic wavefunctions is not yet well understood in QCD, it is evident that a quantitative first-principles description of the nuclear force will require a great deal of theoretical effort. Another reason for the limited impact of QCD in nuclear physics has been the conventional assumption that nuclear interactions can for the most part be analyzed in terms of an effective meson-nucleon field theory or potential model, in isolation from the details of the short-distance quark and gluon structure of hadrons. These lectures argue that this view is untenable: in fact, there is no correspondence principle which yields traditional nuclear physics as a rigorous large-distance or non-relativistic limit of QCD dynamics. On the other hand, the distinctions between standard nuclear physics dynamics and QCD at nuclear dimensions are extremely interesting and illuminating for both particle and nuclear physics.

  19. A kernel-based approach for biomedical named entity recognition.

    PubMed

    Patra, Rakesh; Saha, Sujan Kumar

    2013-01-01

    Support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks, including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, the string kernel, graph kernel, tree kernel and so on. So far very few efforts have been devoted to the development of an NER task-specific kernel. In the literature we found that the tree kernel has been used in the NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we propose a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we have applied the kernel function to the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.
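
    The paper's tree-motivated kernel is not reproduced here; the sketch below only shows the general mechanism it relies on, namely plugging a custom similarity function into an SVM through a precomputed Gram matrix in scikit-learn. The token-overlap kernel and the toy labels are placeholders, not the proposed NER kernel.

      # Sketch: SVM with a custom kernel supplied as a precomputed Gram matrix.
      import numpy as np
      from sklearn.svm import SVC

      def overlap_kernel(a, b):
          """Toy kernel: number of shared tokens between two token sequences."""
          return float(len(set(a) & set(b)))

      train = [["IL-2", "gene", "expression"], ["NF-kappa", "B", "activation"],
               ["the", "results", "show"], ["we", "describe", "a", "method"]]
      labels = [1, 1, 0, 0]        # 1 = sentence fragment contains a biomedical entity (toy labels)
      test = [["IL-2", "expression", "levels"], ["we", "present", "results"]]

      K_train = np.array([[overlap_kernel(a, b) for b in train] for a in train])
      K_test = np.array([[overlap_kernel(a, b) for b in train] for a in test])

      clf = SVC(kernel="precomputed").fit(K_train, labels)
      print(clf.predict(K_test))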

  20. Experimental study of turbulent flame kernel propagation

    SciTech Connect

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the shot-to-shot variation of spark energy. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 μs and 2 ms. The data show that the flame kernel structure starts with a spherical shape, changes gradually to peanut-like and then mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower jet velocity cases. The growth rate of the average flame kernel radius divides into two linear regimes; the first, during the first 100 μs, is almost three times faster than that at the later stage between 100 and 2000 μs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions.

  1. A dynamic kernel modifier for linux

    SciTech Connect

    Minnich, R. G.

    2002-09-03

    Dynamic Kernel Modifier, or DKM, is a kernel module for Linux that allows user-mode programs to modify the execution of functions in the kernel without recompiling or modifying the kernel source in any way. Functions may be traced, either function entry only or function entry and exit; nullified; or replaced with some other function. For the tracing case, function execution results in the activation of a watchpoint. When the watchpoint is activated, the address of the function is logged in a FIFO buffer that is readable by external applications. The watchpoints are time-stamped with the resolution of the processor high resolution timers, which on most modern processors are accurate to a single processor tick. DKM is very similar to earlier systems such as the SunOS trace device or Linux TT. Unlike these two systems, and other similar systems, DKM requires no kernel modifications. DKM allows users to do initial probing of the kernel to look for performance problems, or even to resolve potential problems by turning functions off or replacing them. DKM watchpoints are not without cost: it takes about 200 nanoseconds to make a log entry on an 800 MHz Pentium-III. The overhead numbers are actually competitive with other hardware-based trace systems, although it has less accuracy than an In-Circuit Emulator such as the American Arium. Once the user has zeroed in on a problem, other mechanisms with a higher degree of accuracy can be used. (Los Alamos National Laboratory is operated by the University of California for the National Nuclear Security Administration of the United States Department of Energy under contract W-7405-ENG-36.)

  2. Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates

    SciTech Connect

    Hanft, J.M.; Jones, R.J.

    1986-06-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30° and 35°C were transferred to (¹⁴C)sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on (¹⁴C)sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.

  3. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM delivers an efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
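
    A rough sketch of the two ingredients described above, under simplifying assumptions: an empirical kernel mapping (here with a single linear kernel rather than multiple kernels) and a projection onto a reduced orthonormal subspace. A column-pivoted QR factorisation stands in for the paper's Gauss-elimination step.

      # Sketch: empirical kernel mapping followed by reduction to an orthonormal subspace.
      import numpy as np
      from scipy.linalg import qr

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 5))
      K = X @ X.T                            # EKM with a linear kernel: row i is phi_e(x_i)

      Q, R, piv = qr(K, pivoting=True)       # pivoted QR reveals the numerical rank
      rank = int((np.abs(np.diag(R)) > 1e-8 * abs(R[0, 0])).sum())
      basis = Q[:, :rank]                    # orthonormal basis of the reduced subspace

      Z = K @ basis                          # reduced representation (100 x 5 instead of 100 x 100)
      # pairwise dot products of the EKM features are preserved in the reduced subspace
      print(rank, np.allclose(Z @ Z.T, K @ K.T))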

  4. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be done first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expected resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  5. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing
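
    A toy illustration of the delta-function basis with regularisation: each kernel pixel is a free coefficient, the design matrix is built from shifted copies of the science image, and a simple Tikhonov penalty of strength lambda stands in for the roughness penalty discussed above. The image sizes, noise level, and true kernel are arbitrary assumptions.

      # Sketch: PSF-matching kernel on a delta-function basis, fit by regularised least squares.
      import numpy as np

      rng = np.random.default_rng(1)
      n, k = 64, 5                                   # image size, kernel size (k odd)
      science = rng.normal(size=(n, n))
      true_kernel = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]).astype(float)
      true_kernel /= true_kernel.sum()

      def correlate2d_same(img, ker):
          """Plain 'same'-size 2-D cross-correlation (kernel not flipped; used consistently here)."""
          kh, kw = ker.shape
          ph, pw = kh // 2, kw // 2
          padded = np.pad(img, ((ph, ph), (pw, pw)))
          out = np.zeros_like(img)
          for i in range(kh):
              for j in range(kw):
                  out += ker[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
          return out

      template = correlate2d_same(science, true_kernel) + 0.01 * rng.normal(size=(n, n))

      # design matrix: column m is the science image shifted by the offset of kernel pixel m
      cols = []
      for i in range(k):
          for j in range(k):
              delta = np.zeros((k, k)); delta[i, j] = 1.0
              cols.append(correlate2d_same(science, delta).ravel())
      A = np.stack(cols, axis=1)
      b = template.ravel()

      lam = 0.3                                      # regularisation strength
      coeffs = np.linalg.solve(A.T @ A + lam * np.eye(k * k), A.T @ b)
      fitted_kernel = coeffs.reshape(k, k)
      print(np.abs(fitted_kernel - true_kernel).max())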

  6. Deep Sequencing of RNA from Ancient Maize Kernels

    PubMed Central

    Rasmussen, Morten; Cappellini, Enrico; Romero-Navarro, J. Alberto; Wales, Nathan; Alquezar-Planas, David E.; Penfield, Steven; Brown, Terence A.; Vielle-Calzada, Jean-Philippe; Montiel, Rafael; Jørgensen, Tina; Odegaard, Nancy; Jacobs, Michael; Arriaza, Bernardo; Higham, Thomas F. G.; Ramsey, Christopher Bronk; Willerslev, Eske; Gilbert, M. Thomas P.

    2013-01-01

    The characterization of biomolecules from ancient samples can shed otherwise unobtainable insights into the past. Despite the fundamental role of transcriptomal change in evolution, the potential of ancient RNA remains unexploited – perhaps due to dogma associated with the fragility of RNA. We hypothesize that seeds offer a plausible refuge for long-term RNA survival, due to the fundamental role of RNA during seed germination. Using RNA-Seq on cDNA synthesized from nucleic acid extracts, we validate this hypothesis through demonstration of partial transcriptomal recovery from two sources of ancient maize kernels. The results suggest that ancient seed transcriptomics may offer a powerful new tool with which to study plant domestication. PMID:23326310

  7. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function; results are shown for simple wire structures.

  8. Analysis of maize (Zea mays) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    PubMed

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.
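
    A minimal sketch of the PLS calibration step described above, using synthetic spectra and densities purely for illustration; the number of components, sample sizes, and noise levels are arbitrary assumptions rather than the study's settings.

      # Sketch: PLS regression of a kernel trait (density) on single-kernel NIR spectra.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n_kernels, n_wavelengths = 200, 300
      spectra = rng.normal(size=(n_kernels, n_wavelengths))
      true_loadings = rng.normal(size=n_wavelengths)
      density = spectra @ true_loadings * 0.01 + 1.25 + 0.02 * rng.normal(size=n_kernels)  # synthetic, g/cm^3

      X_train, X_test, y_train, y_test = train_test_split(spectra, density, random_state=0)
      pls = PLSRegression(n_components=10).fit(X_train, y_train)
      pred = pls.predict(X_test).ravel()

      sep = np.sqrt(np.mean((pred - y_test) ** 2))     # standard error of prediction
      r2 = np.corrcoef(pred, y_test)[0, 1] ** 2
      print(f"R^2 = {r2:.2f}, SEP = {sep:.3f} g/cm^3")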

  9. Hadronic structure of the photon at small x in holographic QCD

    NASA Astrophysics Data System (ADS)

    Watanabe, Akira; Li, Hsiang-nan

    2016-11-01

    We present our analysis on the photon structure functions at small Bjorken variable x in the framework of the holographic QCD. In the kinematic region, a photon can fluctuate into vector mesons and behaves like a hadron rather than a pointlike particle. Assuming the Pomeron exchange dominance, the dominant hadronic contribution to the structure functions is computed by convoluting the probe and target photon density distributions obtained from the wave functions of the U(1) vector field in the five-dimensional AdS space and the Brower-Polchinski-Strassler-Tan Pomeron exchange kernel. Our calculations are in agreement with both the experimental data from OPAL collaboration at LEP and those calculated from the parton distribution functions of the photon proposed by Glück, Reya, and Schienbein. The predictions presented here will be tested at future linear colliders, such as the planned International Linear Collider.

  10. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    SciTech Connect

    Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber

    2010-10-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  11. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  12. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  13. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  14. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  15. Multiple spectral kernel learning and a gaussian complexity computation.

    PubMed

    Reyhani, Nima

    2013-07-01

    Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combination of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix is effectively of low rank. However, the low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning that efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds show otherwise.
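
    A small sketch of the construction described above: leading eigenvectors are extracted from several base Gram matrices, and their rank-one Gram matrices form the spectral kernel set. The RBF/linear base kernels and the uniform combination weights are illustrative assumptions; the actual MKL weight optimisation over the set is omitted.

      # Sketch: building a spectral kernel set from eigenvectors of base Gram matrices.
      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.normal(size=(80, 6))

      def rbf(X, gamma):
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      base_kernels = [rbf(X, g) for g in (0.1, 0.5, 2.0)] + [X @ X.T]

      spectral_set = []
      for K in base_kernels:
          evals, evecs = np.linalg.eigh(K)             # eigenvalues in ascending order
          for v in evecs[:, -3:].T:                    # three leading eigenvectors per base kernel
              spectral_set.append(np.outer(v, v))      # rank-one Gram matrix of the eigenvector

      weights = np.ones(len(spectral_set)) / len(spectral_set)   # placeholder for learned MKL weights
      K_combined = sum(w * Km for w, Km in zip(weights, spectral_set))
      print(len(spectral_set), K_combined.shape)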

  16. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  17. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of...

  18. Thermomechanical property of rice kernels studied by DMA

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The thermomechanical property of the rice kernels was investigated using a dynamic mechanical analyzer (DMA). The length change of rice kernel with a loaded constant force along the major axis direction was detected during temperature scanning. The thermomechanical transition occurred in rice kernel...

  19. NIRS method for precise identification of Fusarium damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development of scab resistant wheat varieties may be enhanced by non-destructive evaluation of kernels for Fusarium damaged kernels (FDKs) and deoxynivalenol (DON) levels. Fusarium infection generally affects kernel appearance, but insect damage and other fungi can cause similar symptoms. Also, some...

  20. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  1. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  2. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  3. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  4. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  5. Transverse momentum-dependent parton distribution functions from lattice QCD

    SciTech Connect

    Michael Engelhardt, Philipp Haegler, Bernhard Musch, John Negele, Andreas Schaefer

    2012-12-01

    Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering (SIDIS) and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection. Starting from such a definition, a scheme to determine TMDs in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are obtained using ensembles at the pion masses 369 MeV and 518 MeV, focusing in particular on the dependence of these shifts on the staple extent and a Collins-Soper-type evolution parameter quantifying the proximity of the staples to the light cone.

  6. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  7. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  8. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  9. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  10. Protein Structure Prediction Using String Kernels

    DTIC Science & Technology

    2006-03-03

    The dataset consists of 4352 sequences from SCOP version 1.53 extracted from the Astral database, grouped into families and superfamilies. The dataset is processed

  11. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
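
    A heavily simplified sketch in the spirit of the algorithm above: kernel-based TD(0) value estimation on a toy chain task, with a growing kernel expansion and no eligibility traces or dictionary sparsification (both simplifying assumptions relative to KTD(λ) and its neural-decoding use).

      # Sketch: kernel TD(0) value estimation, V(s) = sum_i a_i k(s, s_i), on a toy 5-state chain.
      import numpy as np

      rng = np.random.default_rng(4)
      gamma, eta, sigma = 0.9, 0.1, 0.5
      centers, coeffs = [], []

      def k(s, c):                                     # Gaussian kernel on scalar states
          return np.exp(-(s - c) ** 2 / (2 * sigma ** 2))

      def V(s):
          return sum(a * k(s, c) for a, c in zip(coeffs, centers))

      for episode in range(100):                       # states 0..4, reward 1 on reaching 4
          s = 0.0
          while s < 4.0:
              s_next = min(s + 1.0, 4.0) if rng.random() < 0.8 else max(s - 1.0, 0.0)
              r = 1.0 if s_next == 4.0 else 0.0
              delta = r + gamma * V(s_next) * (s_next < 4.0) - V(s)   # TD error, terminal value 0
              centers.append(s)                        # grow the kernel expansion with the visited state
              coeffs.append(eta * delta)
              s = s_next

      print([round(V(float(s)), 2) for s in range(5)])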

  12. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
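
    A toy version of the idea described above: a PSF-matching kernel obtained by Wiener filtering in Fourier space with a single tunable regularisation constant. The real pypher implementation adds a Laplacian penalty and handles pixel scales and headers; the Gaussian PSFs and the value of mu here are placeholders.

      # Sketch: PSF-homogenisation kernel via regularised Wiener filtering in Fourier space.
      import numpy as np

      def gaussian_psf(size, fwhm):
          sigma = fwhm / 2.355
          y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
          psf = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
          return psf / psf.sum()

      def matching_kernel(psf_source, psf_target, mu=1e-4):
          """K such that psf_source convolved with K ~= psf_target."""
          F_s = np.fft.fft2(np.fft.ifftshift(psf_source))
          F_t = np.fft.fft2(np.fft.ifftshift(psf_target))
          F_k = F_t * np.conj(F_s) / (np.abs(F_s) ** 2 + mu)
          return np.fft.fftshift(np.fft.ifft2(F_k).real)

      def convolve_same(a, b):
          return np.fft.fftshift(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(a)) *
                                              np.fft.fft2(np.fft.ifftshift(b))).real)

      narrow, broad = gaussian_psf(65, fwhm=4.0), gaussian_psf(65, fwhm=9.0)
      K = matching_kernel(narrow, broad)

      # check: the narrow PSF convolved with K should (approximately) reproduce the broad PSF
      print(np.abs(convolve_same(narrow, K) - broad).max())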

  13. Novel Spin Effects in QCD

    SciTech Connect

    Brodsky, S

    2004-01-15

    Measurements from HERMES, SMC, and Jlab show a significant single-spin asymmetry in semi-inclusive pion leptoproduction γ*(q)p → πX when the proton is polarized normal to the photon-to-pion production plane. Hwang, Schmidt, and I [1] have shown that final-state interactions from gluon exchange between the outgoing quark and the target spectator system lead to such single-spin asymmetries at leading twist in perturbative QCD; i.e., the rescattering corrections are not power-law suppressed at large photon virtuality Q² at fixed x_bj. The existence of such single-spin asymmetries (the Sivers effect) requires a phase difference between two amplitudes coupling the proton target with J_p^z = ±1/2 to the same final-state, the same amplitudes which are necessary to produce a nonzero proton anomalous magnetic moment. The single-spin asymmetry which arises from such final-state interactions is in addition to the Collins effect which measures the transversity distribution δq(x, Q). The Sivers effect also leads to a leading-twist target single-spin asymmetry for jet production in electroproduction where the thrust axis is used to define the production plane. More generally, Hoyer, Marchal, Peigne, Sannino, and I [2] have shown that one cannot neglect the interactions which occur between the times of the currents in the current correlator even in light-cone gauge. For example, the final-state interactions lead to the Bjorken-scaling diffractive component γ*p → pX of deep inelastic scattering. Since the gluons exchanged in the final state carry negligible k⁺, the Pomeron structure function closely resembles that of the primary gluon. The diffractive scattering of the fast outgoing quarks on spectators in the target in turn causes shadowing in the DIS cross section. These effects highlight the unexpected importance of final- and initial-state interactions in QCD observables, they lead to leading-twist single

  14. Exploring hyperons and hypernuclei with lattice QCD

    SciTech Connect

    Beane, S.R.; Bedaque, P.F.; Parreno, A.; Savage, M.J.

    2003-01-01

    In this work we outline a program for lattice QCD that would provide a first step toward understanding the strong and weak interactions of strange baryons. The study of hypernuclear physics has provided a significant amount of information regarding the structure and weak decays of light nuclei containing one or two Lambda's and Sigma's. From a theoretical standpoint, little is known about the hyperon-nucleon interaction, which is required input for systematic calculations of hypernuclear structure. Furthermore, the long-standing discrepancies in the P-wave amplitudes for nonleptonic hyperon decays remain to be understood, and their resolution is central to a better understanding of the weak decays of hypernuclei. We present a framework that utilizes Luscher's finite-volume techniques in lattice QCD to extract the scattering length and effective range for Lambda-N scattering in both QCD and partially-quenched QCD. The effective theory describing the nonleptonic decays of hyperons using isospin symmetry alone, appropriate for lattice calculations, is constructed.

  15. Equation of State from Lattice QCD Calculations

    SciTech Connect

    Gupta, Rajan

    2011-01-01

    We provide a status report on the calculation of the Equation of State (EoS) of QCD at finite temperature using lattice QCD. Most of the discussion will focus on comparison of recent results obtained by the HotQCD and Wuppertal-Budapest collaborations. We will show that very significant progress has been made towards obtaining high precision results over the temperature range of T = 150-700 MeV. The various sources of systematic uncertainties will be discussed and the differences between the two calculations highlighted. Our final conclusion is that these lattice results of EoS are precise enough to be used in the phenomenological analysis of heavy ion experiments at RHIC and LHC.

  16. Holographic models and the QCD trace anomaly

    SciTech Connect

    Jose L. Goity, Roberto C. Trinchero

    2012-08-01

    Five dimensional dilaton models are considered as possible holographic duals of the pure gauge QCD vacuum. In the framework of these models, the QCD trace anomaly equation is considered. Each quantity appearing in that equation is computed by holographic means. Two exact solutions for different dilaton potentials corresponding to perturbative and non-perturbative β-functions are studied. It is shown that in the perturbative case, where the β-function is the QCD one at leading order, the resulting space is not asymptotically AdS. In the non-perturbative case, the model considered presents confinement of static quarks and leads to a non-vanishing gluon condensate, although it does not correspond to an asymptotically free theory. In both cases analyses based on the trace anomaly and on Wilson loops are carried out.

  17. 't Hooft anomaly matching for QCD

    SciTech Connect

    Terning, John

    1998-03-03

    I present a set of theories which display non-trivial 't Hooft anomaly matching for QCD with F flavors. The matching theories are non-Abelian gauge theories with "dual" quarks and baryons, rather than the purely confining theories of baryons that 't Hooft originally searched for. The matching gauge groups are required to have an F ± 6 dimensional representation. Such a correspondence is reminiscent of Seiberg's duality for supersymmetric (SUSY) QCD, and these theories are candidates for non-SUSY duality. However, anomaly matching by itself is not sufficiently restrictive, and duality for QCD cannot be established at present. At the very least, the existence of multiple anomaly matching solutions should provide a note of caution regarding conjectured non-SUSY dualities.

  18. New View of the QCD Phase Diagram

    SciTech Connect

    McLerran,L.

    2009-07-09

    Quarkyonic matter is confining but can have densities much larger than Λ_QCD³. Its existence is argued in the large Nc limit of QCD and implies that there are at least three phases of QCD with greatly different bulk properties. These are a Confined Phase of hadrons, a Deconfined Phase of quarks and gluons, and the Quarkyonic Phase. In the Quarkyonic Phase, the baryon density is accounted for by a quasi-free gas of quarks, and the antiquarks and gluons are confined into mesons and glueballs. Quarks near the Fermi surface also are treated as baryons. (In addition to these phases, there is a color superconducting phase that has vastly different transport properties than the above, but with bulk properties, such as pressure and energy density, that are not greatly different from those of Quarkyonic Matter.)

  19. QCD sign problem for small chemical potential

    SciTech Connect

    Splittorff, K.; Verbaarschot, J. J. M.

    2007-06-01

    The expectation value of the complex phase factor of the fermion determinant is computed in the microscopic domain of QCD at nonzero chemical potential. We find that the average phase factor is nonvanishing below a critical value of the chemical potential equal to half the pion mass and vanishes exponentially in the volume for larger values of the chemical potential. This holds for QCD with dynamical quarks as well as for quenched and phase quenched QCD. The average phase factor has an essential singularity for zero chemical potential and cannot be obtained by analytic continuation from imaginary chemical potential or by means of a Taylor expansion. The leading order correction in the p-expansion of the chiral Lagrangian is calculated as well.

  20. Phase diagram of chirally imbalanced QCD matter

    SciTech Connect

    Chernodub, M. N.; Nedelin, A. S.

    2011-05-15

    We compute the QCD phase diagram in the plane of the chiral chemical potential and temperature using the linear sigma model coupled to quarks and to the Polyakov loop. The chiral chemical potential accounts for effects of imbalanced chirality due to QCD sphaleron transitions which may emerge in heavy-ion collisions. We found three effects caused by the chiral chemical potential: the imbalanced chirality (i) tightens the link between deconfinement and chiral phase transitions; (ii) lowers the common critical temperature; (iii) strengthens the order of the phase transition by converting the crossover into the strong first order phase transition passing via the second order end point. Since the fermionic determinant with the chiral chemical potential has no sign problem, the chirally imbalanced QCD matter can be studied in numerical lattice simulations.

  1. QCD and Light-Front Dynamics

    SciTech Connect

    Brodsky, Stanley J.; de Teramond, Guy F.; /SLAC /Southern Denmark U., CP3-Origins /Costa Rica U.

    2011-01-10

    AdS/QCD, the correspondence between theories in a dilaton-modified five-dimensional anti-de Sitter space and confining field theories in physical space-time, provides a remarkable semiclassical model for hadron physics. Light-front holography allows hadronic amplitudes in the AdS fifth dimension to be mapped to frame-independent light-front wavefunctions of hadrons in physical space-time. The result is a single-variable light-front Schroedinger equation which determines the eigenspectrum and the light-front wavefunctions of hadrons for general spin and orbital angular momentum. The coordinate z in AdS space is uniquely identified with a Lorentz-invariant coordinate ζ which measures the separation of the constituents within a hadron at equal light-front time and determines the off-shell dynamics of the bound state wavefunctions as a function of the invariant mass of the constituents. The hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. Higher Fock states with extra quark-antiquark pairs also arise. The soft-wall model also predicts the form of the nonperturbative effective coupling and its β-function. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian or by applying the Lippmann-Schwinger method to systematically include QCD interaction terms. Some novel features of QCD are discussed, including the consequences of confinement for quark and gluon condensates. A method for computing the hadronization of quark and gluon jets at the amplitude level is outlined.

  2. String breaking in four dimensional lattice QCD

    SciTech Connect

    Duncan, A.; Eichten, E.; Thacker, H.

    2001-06-01

    Virtual quark pair screening leads to breaking of the string between fundamental representation quarks in QCD. For unquenched four dimensional lattice QCD, this (so far elusive) phenomenon is studied using the recently developed truncated determinant algorithm (TDA). The dynamical configurations were generated on a 650 MHz PC. Quark eigenmodes up to 420 MeV are included exactly in these TDA studies performed at low quark mass on large coarse [but O(a²) improved] lattices. A study of Wilson line correlators in Coulomb gauge extracted from an ensemble of 1000 two-flavor dynamical configurations reveals evidence for flattening of the string tension at distances R ≳ 1 fm.

  3. Anomalous mass dimension in multiflavor QCD

    NASA Astrophysics Data System (ADS)

    Doff, A.; Natale, A. A.

    2016-10-01

    Models of strongly interacting theories with a large mass anomalous dimension (γ_m) provide an interesting possibility for the dynamical origin of the electroweak symmetry breaking. A laboratory for these models is QCD with many flavors, which may present a nontrivial fixed point associated to a conformal region. Studies based on conformal field theories and on Schwinger-Dyson equations have suggested the existence of bounds on the mass anomalous dimension at the fixed points of these models. In this note we discuss γ_m values of multiflavor QCD exhibiting a nontrivial fixed point and affected by relevant four-fermion interactions.

  4. Non-perturbative QCD and hadron physics

    NASA Astrophysics Data System (ADS)

    Cobos-Martínez, J. J.

    2016-10-01

    A brief exposition of contemporary non-perturbative methods based on the Schwinger-Dyson (SDE) and Bethe-Salpeter equations (BSE) of Quantum Chromodynamics (QCD) and their application to hadron physics is given. These equations provide a non-perturbative continuum formulation of QCD and are a powerful and promising tool for the study of hadron physics. Results on some properties of hadrons based on this approach, with particular attention to the pion distribution amplitude, elastic, and transition electromagnetic form factors, and their comparison to experimental data are presented.

  5. Experimental Study of Nucleon Structure and QCD

    SciTech Connect

    Jian-Ping Chen

    2012-03-01

    Overview of Experimental Study of Nucleon Structure and QCD, with focus on the spin structure. Nucleon (spin) structure provides valuable information on QCD dynamics. A decade of experiments from JLab yields these exciting results: (1) valence spin structure, duality; (2) spin sum rules and polarizabilities; (3) precision measurements of g_2 - high-twist; and (4) first neutron transverse spin results - Collins/Sivers/A_LT. There is a bright future as the 12 GeV Upgrade will greatly enhance our capability: (1) precision determination of the valence quark spin structure flavor separation; and (2) precision extraction of transversity/tensor charge/TMDs.

  6. Hadron scattering and resonances in QCD

    SciTech Connect

    Dudek, Jozef J.

    2016-05-01

    I describe how hadron-hadron scattering amplitudes are related to the eigenstates of QCD in a finite cubic volume. The discrete spectrum of such eigenstates can be determined from correlation functions computed using lattice QCD, and the corresponding scattering amplitudes extracted. I review results from the Hadron Spectrum Collaboration who have used these finite volume methods to study pi pi elastic scattering, including the rho resonance, as well as coupled-channel pi K, eta K scattering. Ongoing calculations are advertised and the outlook for finite volume approaches is presented.

  7. The Status of AdS/QCD

    SciTech Connect

    Reece, Matthew

    2011-05-23

    In this talk I give a brief assessment of the 'AdS/QCD correspondence', its successes, and its failures. I begin with a review of the AdS/CFT correspondence, with an emphasis on why the large N, large 't Hooft coupling limit is necessary for a calculable theory. I then briefly discuss attempts to extrapolate this correspondence to QCD-like theories, stressing why the failure of the large 't Hooft coupling limit is more important than the breakdown of the large N expansion. I sketch how event shapes can manifest stringy physics, and close with some brief remarks on the prospects for future improvements.

  8. Is Fractional Electric Charge Problematic for QCD?

    NASA Astrophysics Data System (ADS)

    Slansky, R.

    1982-11-01

    A model of broken QCD is described here; SU3c is broken to SO3g ("g" for "glow") such that color triplets become glow triplets. With this breaking pattern, there should exist low-mass, fractionally-charged diquark states that are not strongly bound to nuclei, but are rarely produced at present accelerator facilities. The breaking of QCD can be done with a 27c, in which case this strong interaction theory is easily embedded in unified models such as those based on SU5, SO10, or E6. This work was done in collaboration with Terry Goldman of Los Alamos and Gordon Shaw of U.C., Irvine.

  9. Geometric approach to condensates in holographic QCD

    SciTech Connect

    Hirn, Johannes; Rius, Nuria; Sanz, Veronica

    2006-04-15

    An SU(N_f)×SU(N_f) Yang-Mills theory on an extra-dimensional interval is considered, with appropriate symmetry-breaking boundary conditions on the IR brane. UV-brane to UV-brane correlators at high energies are compared with the OPE of two-point functions of QCD quark currents. Condensates correspond to departure from the AdS metric of the (different) metrics felt by vector and axial combinations, away from the UV brane. Their effect on hadronic observables is studied: the extracted condensates agree with the signs and orders of magnitude expected from QCD.

  10. Hadron scattering and resonances in QCD

    NASA Astrophysics Data System (ADS)

    Dudek, Jozef J.

    2016-05-01

    I describe how hadron-hadron scattering amplitudes are related to the eigenstates of QCD in a finite cubic volume. The discrete spectrum of such eigenstates can be determined from correlation functions computed using lattice QCD, and the corresponding scattering amplitudes extracted. I review results from the Hadron Spectrum Collaboration who have used these finite volume methods to study ππ elastic scattering, including the ρ resonance, as well as coupled-channel πK, ηK scattering. Ongoing calculations are advertised and the outlook for finite volume approaches is presented.

  11. Recent QCD Results from the Tevatron

    SciTech Connect

    Vellidis, Costas

    2015-10-10

    Four years after the shutdown of the Tevatron proton-antiproton collider, the two Tevatron experiments, CDF and DZero, continue producing important results that test the theory of the strong interaction, Quantum Chromodynamics (QCD). The experiments exploit the advantages of the data sample acquired during the Tevatron Run II, stemming from the unique pp̄ initial state, the clean environment at the relatively low Tevatron instantaneous luminosities, and the good understanding of the data sample after many years of calibrations and optimizations. A summary of results using the full integrated luminosity is presented, focusing on measurements of prompt photon production, weak boson production associated with jets, and non-perturbative QCD processes.

  12. Novel Aspects of Hard Diffraction in QCD

    SciTech Connect

    Brodsky, Stanley J.; /SLAC

    2005-12-14

    Initial- and final-state interactions from gluon exchange, normally neglected in the parton model, have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, and nuclear shadowing and antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency.

  13. Independent genetic control of maize (Zea mays L.) kernel weight determination and its phenotypic plasticity.

    PubMed

    Alvarez Prado, Santiago; Sadras, Víctor O; Borrás, Lucas

    2014-08-01

    Maize kernel weight (KW) is associated with the duration of the grain-filling period (GFD) and the rate of kernel biomass accumulation (KGR). It is also related to the dynamics of water and hence is physiologically linked to the maximum kernel water content (MWC), kernel desiccation rate (KDR), and moisture concentration at physiological maturity (MCPM). This work proposed that principles of phenotypic plasticity can help to consolidate the understanding of the environmental modulation and genetic control of these traits. For that purpose, a maize population of 245 recombinant inbred lines (RILs) was grown under different environmental conditions. Trait plasticity was calculated as the ratio of the variance of each RIL to the overall phenotypic variance of the population of RILs. This work found a hierarchy of plasticities: KDR ≈ GFD > MCPM > KGR > KW > MWC. There was no phenotypic or genetic correlation between traits per se and trait plasticities. MWC, the trait with the lowest plasticity, was the exception because common quantitative trait loci were found for the trait and its plasticity. Independent genetic control of a trait per se and genetic control of its plasticity is a condition for the independent evolution of traits and their plasticities. This allows breeders potentially to select for high or low plasticity in combination with high or low values of economically relevant traits.
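
    A small sketch of the plasticity index defined above (per-RIL variance across environments divided by the overall phenotypic variance), computed with pandas on synthetic data; the column names, environment labels, and simulated values are assumptions, not the study's data.

      # Sketch: trait plasticity as the ratio of each RIL's variance to the overall variance.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(5)
      rils = [f"RIL{i:03d}" for i in range(245)]
      envs = ["E1", "E2", "E3", "E4"]
      records = [{"ril": r, "env": e,
                  "kernel_weight": 300 + 20 * rng.normal() + 15 * rng.normal() * (e != "E1")}
                 for r in rils for e in envs]
      df = pd.DataFrame(records)

      overall_var = df["kernel_weight"].var()
      plasticity = df.groupby("ril")["kernel_weight"].var() / overall_var
      print(plasticity.describe())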

  14. Carbothermic Synthesis of ~820-μm UN Kernels. Investigation of Process Variables

    SciTech Connect

    Lindemer, Terrence; Silva, Chinthaka M; Henry, Jr, John James; McMurray, Jake W; Jolly, Brian C; Hunt, Rodney Dale; Terrani, Kurt A

    2015-06-01

    This report details the continued investigation of process variables involved in converting sol-gel-derived, urania-carbon microspheres to ~820-μm-dia. UN fuel kernels in flow-through, vertical refractory-metal crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO3-H2O-C microspheres in Ar and H2-containing gases, conversion of the resulting UO2-C kernels to dense UO2:2UC in the same gases and vacuum, and its conversion in N2 to UC1-xNx. The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing a precursor UO2:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC1-xNx at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Decreasing the UC1-xNx kernel carbide component via HCN evolution was shown to be quantitatively consistent with present and past experiments and the only useful application of H2 in the entire process.

  15. Evolution equation for the B-meson distribution amplitude in the heavy-quark effective theory in coordinate space

    SciTech Connect

    Kawamura, Hiroyuki; Tanaka, Kazuhiro

    2010-06-01

    The B-meson distribution amplitude (DA) is defined as the matrix element of a quark-antiquark bilocal light-cone operator in the heavy-quark effective theory, corresponding to a long-distance component in the factorization formula for exclusive B-meson decays. The evolution equation for the B-meson DA is governed by the cusp anomalous dimension as well as the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi-type anomalous dimension, and these anomalous dimensions give the "quasilocal" kernel in the coordinate-space representation. We show that this evolution equation can be solved analytically in coordinate space, accomplishing the relevant Sudakov resummation at next-to-leading logarithmic accuracy. The quasilocal nature leads to a quite simple form of our solution, which determines the B-meson DA with a quark-antiquark light-cone separation t in terms of the DA at a lower renormalization scale μ with smaller interquark separations zt (z ≤ 1). This formula allows us to present a rigorous calculation of the B-meson DA at the factorization scale ≈ √(m_b Λ_QCD) for t less than ≈ 1 GeV⁻¹, using the recently obtained operator product expansion of the DA as the input at μ ≈ 1 GeV. We also derive the master formula, which reexpresses the integrals of the DA at μ ≈ √(m_b Λ_QCD) for the factorization formula by the compact integrals of the DA at μ ≈ 1 GeV.

  16. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, the weighted signal-to-noise ratio (WSNR) was used. The multidimensional optimization problem was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% compared with the best of the commonly used kernels, the one introduced by Floyd and Steinberg. The other kernels obtained make it possible to significantly reduce the computational complexity of the halftoning process without reducing its quality.
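
    As background for the comparison above, the sketch below implements plain Floyd-Steinberg error diffusion, the baseline kernel the study measures against. The optimized kernels from the paper are not reproduced here, and the image size and threshold are illustrative assumptions.

```python
import numpy as np

# Classic Floyd-Steinberg error-diffusion weights: (row offset, col offset) -> weight.
FS_KERNEL = {(0, 1): 7/16, (1, -1): 3/16, (1, 0): 5/16, (1, 1): 1/16}

def error_diffuse(gray, kernel=FS_KERNEL, threshold=0.5):
    """Halftone a grayscale image in [0, 1] by diffusing the quantization error."""
    img = gray.astype(float).copy()
    out = np.zeros_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = 1.0 if img[r, c] >= threshold else 0.0
            err = img[r, c] - out[r, c]
            for (dr, dc), w in kernel.items():
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    img[rr, cc] += err * w   # push the error onto not-yet-visited pixels
    return out

# Example: halftone a simple horizontal gradient.
halftone = error_diffuse(np.tile(np.linspace(0.0, 1.0, 64), (64, 1)))
```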

  17. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
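
    The paper's algorithm and bounds are not reproduced here; the following sketch only illustrates one simple notion of a multiscale kernel (a sum of Gaussian kernels at several bandwidths) plugged into a regularized least-squares scorer whose outputs are sorted to produce a ranking. The data, bandwidths, and regularization constant are assumptions for illustration.

```python
import numpy as np

def multiscale_gaussian_kernel(X, Y, widths=(0.5, 1.0, 2.0, 4.0)):
    """Sum of Gaussian kernels at several bandwidths (one simple multiscale kernel)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / (2.0 * w ** 2)) for w in widths)

def fit_scorer(X, y, lam=1e-2, widths=(0.5, 1.0, 2.0, 4.0)):
    """Regularized least squares in the multiscale RKHS; ranking = sort by score."""
    K = multiscale_gaussian_kernel(X, X, widths)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xnew: multiscale_gaussian_kernel(Xnew, X, widths) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X[:, 0] - 0.5 * X[:, 1]            # hypothetical relevance scores
score = fit_scorer(X, y)
ranking = np.argsort(-score(X))        # items ordered by predicted relevance
```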

  18. A Study of the H-dibaryon in Holographic QCD

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kohei; Nakagawa, Yuya; Suganuma, Hideo

    We study the H-dibaryon (uuddss) in holographic QCD for the first time. Holographic QCD is derived from a QCD-equivalent D-brane system (S¹-compactified D4/D8/anti-D8) in superstring theory via the gauge/gravity correspondence. In holographic QCD, all baryons appear as topological chiral solitons of Nambu-Goldstone bosons and (axial) vector mesons. In this framework, the H-dibaryon can be described as an SO(3)-type hedgehog state. We present the formalism of the H-dibaryon in holographic QCD, and perform the calculation to investigate its properties in the chiral limit.

  19. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
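
    To make the delta-basis-function idea concrete, here is a minimal sketch, not the paper's algorithms: a spatially invariant kernel of delta basis functions is fit by unregularized linear least squares so that convolving the reference image approximates the target image, and a Gaussian-error Akaike information criterion selects among candidate kernel sizes. The image names and candidate half-widths are hypothetical.

```python
import numpy as np

def solve_kernel(ref, target, half_width):
    """Least-squares fit of a (2h+1)x(2h+1) delta-basis convolution kernel
    mapping the reference image onto the target image (no regularization)."""
    h = half_width
    r, c = ref.shape
    cols, offsets = [], []
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
            cols.append(shifted[h:r - h, h:c - h].ravel())   # trim wrap-around border
            offsets.append((dy, dx))
    A = np.column_stack(cols)
    b = target[h:r - h, h:c - h].ravel()
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    rss = float(((A @ coef - b) ** 2).sum())
    return coef, offsets, rss, len(b)

def aic(rss, n, k):
    return n * np.log(rss / n) + 2 * k        # Akaike information criterion, Gaussian errors

def best_kernel(ref, target, half_widths=(1, 2, 3)):
    """Return the candidate kernel with the smallest AIC."""
    fits = []
    for h in half_widths:
        coef, offsets, rss, n = solve_kernel(ref, target, h)
        fits.append((aic(rss, n, len(coef)), h, coef, offsets))
    return min(fits, key=lambda t: t[0])

# Toy usage with synthetic images (a shifted, noisy copy of the reference).
rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))
target = np.roll(ref, 1, axis=1) + 0.01 * rng.normal(size=(64, 64))
best = best_kernel(ref, target)
```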

  20. Efficient χ² Kernel Linearization via Random Feature Maps.

    PubMed

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping could pose computational challenges in high-dimensional settings as it expands the original features to a higher-dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of the χ² kernel SVMs at almost no cost in testing accuracy.
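
    The paper's specific random feature map is not reproduced here. As a rough illustration of the general "map, project, train linearly" pipeline, the sketch below chains scikit-learn's deterministic additive χ² feature map with a sparse random projection and a linear SVM; the data and dimensions are hypothetical.

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.random_projection import SparseRandomProjection
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Histogram-like, non-negative features (hypothetical data).
rng = np.random.default_rng(0)
X = rng.random((200, 300))
y = (X[:, :10].sum(axis=1) > 5).astype(int)

# Explicit map approximating the additive chi-squared kernel, followed by a
# sparse random projection to keep the dimension moderate, then a linear SVM.
model = make_pipeline(
    AdditiveChi2Sampler(sample_steps=2),
    SparseRandomProjection(n_components=256, random_state=0),
    LinearSVC(C=1.0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```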

  1. Chiral logarithms in quenched QCD

    SciTech Connect

    Y. Chen; S. J. Dong; T. Draper; I. Horvath; F. X. Lee; K. F. Liu; N. Mathur; and J. B. Zhang

    2004-08-01

    The quenched chiral logarithms are examined on a 16³×28 lattice with Iwasaki gauge action and overlap fermions. The pion decay constant f_π is used to set the lattice spacing, a = 0.200(3) fm. With pion mass as low as ≈ 180 MeV, we see the quenched chiral logarithms clearly in m_π²/m and f_P, the pseudoscalar decay constant. The authors analyze the data to determine how low the pion mass needs to be in order for the quenched one-loop chiral perturbation theory (χPT) to apply. With the constrained curve-fitting method, they are able to extract the quenched chiral logarithmic parameter δ together with other low-energy parameters. Only for m_π ≤ 300 MeV do we obtain a consistent and stable fit with a constant δ, which they determine to be 0.24(3)(4) (at the chiral scale Λ_χ = 0.8 GeV). By comparing to the 12³×28 lattice, they estimate the finite volume effect to be about 2.7% for the smallest pion mass. They also fitted the pion mass to the form for the re-summed cactus diagrams and found that its applicable region is extended farther than the range for the one-loop formula, perhaps up to m_π ≈ 500-600 MeV. The scale-independent δ is determined to be 0.20(3) in this case. The authors study the quenched non-analytic terms in the nucleon mass and find that the coefficient C_{1/2} in the nucleon mass is consistent with the prediction of one-loop χPT. They also obtain the low-energy constant L_5 from f_π. They conclude from this study that it is imperative to cover only the range of data with the pion mass less than ≈ 300 MeV in order to examine the chiral behavior of the hadron masses and decay constants in quenched QCD and match them with quenched one-loop χPT.

  2. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    Data from the real world usually have nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most of the geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension for out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, in which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. Especially, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  3. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    PubMed

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as the standard support vector machine and support vector regression trainings take O(N³) time and O(N²) space in their naive implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naive method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked to kernel density estimation (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade significantly. It has a time complexity of O(m³), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has performance comparable to the state-of-the-art method and is effective for a wide range of kernel methods to achieve fast learning on large data sets.
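
    The connection between KDE and the QP problems is not reproduced here; the toy sketch below only illustrates the sampling idea, namely that a Gaussian KDE built from a small random subset can closely track the KDE of the full data set. The sample sizes and the mixture used to generate data are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
N, m = 100_000, 1_000                      # full set size vs. sampled subset size
data = np.concatenate([rng.normal(-2, 1, N // 2), rng.normal(3, 0.5, N // 2)])

kde_full = gaussian_kde(data)              # cost grows with N per evaluation point
kde_fast = gaussian_kde(rng.choice(data, size=m, replace=False))   # cost grows with m

grid = np.linspace(-6, 6, 200)
max_abs_gap = np.abs(kde_full(grid) - kde_fast(grid)).max()
print(f"max |density difference| on grid: {max_abs_gap:.4f}")
```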

  4. The CKM Matrix from Lattice QCD

    SciTech Connect

    Mackenzie, Paul B.; /Fermilab

    2009-07-01

    Lattice QCD plays an essential role in testing and determining the parameters of the CKM theory of flavor mixing and CP violation. Very high precisions are required for lattice calculations analyzing CKM data; I discuss the prospects for achieving them. Lattice calculations will also play a role in investigating flavor mixing and CP violation beyond the Standard Model.

  5. Exact Adler function in supersymmetric QCD.

    PubMed

    Shifman, M; Stepanyantz, K

    2015-02-06

    The Adler function D is found exactly in supersymmetric QCD. Our exact formula relates D(Q²) to the anomalous dimension of the matter superfields γ(α_s(Q²)). En route we prove another theorem: the absence of the so-called singlet contribution to D. While such singlet contributions are present in individual supergraphs, they cancel in the sum.

  6. Exploring Hyperons and Hypernuclei with Lattice QCD

    SciTech Connect

    S.R. Beane; P.F. Bedaque; A. Parreno; M.J. Savage

    2005-01-01

    In this work we outline a program for lattice QCD that would provide a first step toward understanding the strong and weak interactions of strange baryons. The study of hypernuclear physics has provided a significant amount of information regarding the structure and weak decays of light nuclei containing one or two Λ's and Σ's. From a theoretical standpoint, little is known about the hyperon-nucleon interaction, which is required input for systematic calculations of hypernuclear structure. Furthermore, the long-standing discrepancies in the P-wave amplitudes for nonleptonic hyperon decays remain to be understood, and their resolution is central to a better understanding of the weak decays of hypernuclei. We present a framework that utilizes Lüscher's finite-volume techniques in lattice QCD to extract the scattering length and effective range for Λ-N scattering in both QCD and partially-quenched QCD. The effective theory describing the nonleptonic decays of hyperons using isospin symmetry alone, appropriate for lattice calculations, is constructed.

  7. On-Shell Methods in Perturbative QCD

    SciTech Connect

    Bern, Zvi; Dixon, Lance J.; Kosower, David A.

    2007-04-25

    We review on-shell methods for computing multi-parton scattering amplitudes in perturbative QCD, utilizing their unitarity and factorization properties. We focus on aspects which are useful for the construction of one-loop amplitudes needed for phenomenological studies at the Large Hadron Collider.

  8. Bottom-up holographic approach to QCD

    SciTech Connect

    Afonin, S. S.

    2016-01-22

    One of the best-known results of string theory is the idea that some strongly coupled gauge theories may have a dual description in terms of a higher-dimensional weakly coupled gravitational theory, the so-called AdS/CFT correspondence or gauge/gravity correspondence. Attempts to apply this idea to real QCD are often referred to as "holographic QCD" or the "AdS/QCD approach". One direction in this field is to start from real QCD and guess a tentative dual higher-dimensional weakly coupled field model following the principles of gauge/gravity correspondence. The ensuing phenomenology can then be developed and compared with experimental data and with various theoretical results. Such a bottom-up holographic approach has turned out to be unexpectedly successful in many cases. In this short review, the technical aspects of the bottom-up holographic approach to QCD are explained, placing the main emphasis on the soft wall model.

  9. Pluto results on jets and QCD

    SciTech Connect

    Pluto collaboration

    1981-02-01

    Results obtained with the PLUTO detector at PETRA are presented. Multihadron final states have been analysed with respect to clustering, energy-energy correlations and transverse momenta in jets. QCD predictions for hard gluon emission and soft gluon-quark cascades are discussed. Results on α_s and the gluon spin are given.

  10. QCD parton model at collider energies

    SciTech Connect

    Ellis, R.K.

    1984-09-01

    Using the example of vector boson production, the application of the QCD-improved parton model at collider energies is reviewed. The reliability of the extrapolation to SSC energies is assessed. Predictions at √s = 0.54 TeV are compared with data. 21 references.

  11. Frontiers of finite temperature lattice QCD

    NASA Astrophysics Data System (ADS)

    Borsányi, Szabolcs

    2017-03-01

    I review a selection of recent finite temperature lattice results of the past years. First I discuss the extension of the equation of state towards high temperatures and finite densities, then I show recent results on the QCD topological susceptibility at high temperatures and highlight its relevance for dark matter search.

  12. Local topological and chiral properties of QCD.

    SciTech Connect

    de Forcrand, Ph.

    1998-10-30

    To elucidate the role played by instantons in chiral symmetry breaking, the authors explore their properties, in full QCD, around the critical temperature. They study in particular, spatial correlations between low-lying Dirac eigenmodes and instantons. Their measurements are compared with the predictions of instanton-based models.

  13. QCD PHASE TRANSITIONS-VOLUME 15.

    SciTech Connect

    SCHAFER,T.

    1998-11-04

    The title of the workshop, ''The QCD Phase Transitions'', in fact happened to be too narrow for its real contents. It would be more accurate to say that it was devoted to different phases of QCD and QCD-related gauge theories, with strong emphasis on discussion of the underlying non-perturbative mechanisms which manifest themselves as all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its studies do not get attention proportional to the intellectual challenge they deserve, and that the theorists working on it remain very fragmented. The efforts to create a Theory of Everything including Quantum Gravity have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there has also been tremendous progress and even some shift of attention toward emphasis on the unity of non-perturbative phenomena. For example, we have seen some efforts to connect the lessons from recent progress in supersymmetric theories with those in QCD, as derived from phenomenology and lattice. Another example is the Maldacena conjecture and related developments, which connect three things: string theory, supergravity, and the N=4 supersymmetric gauge theory. Although the progress mentioned is remarkable by itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of the spectacular non-perturbative physics.

  14. Marking up lattice QCD configurations and ensembles

    SciTech Connect

    P.Coddington; B.Joo; C.M.Maynard; D.Pleiter; T.Yoshie

    2007-10-01

    QCDml is an XML-based markup language designed for sharing QCD configurations and ensembles world-wide via the International Lattice Data Grid (ILDG). Based on the latest release, we present key ingredients of the QCDml in order to provide some starting points for colleagues in this community to markup valuable configurations and submit them to the ILDG.

  15. QCD subgroup on diffractive and forward physics

    SciTech Connect

    Albrow, M.G.; Baker, W.; Bhatti, A.

    1996-10-01

    The goal is to understand the pomeron, and hence the behavior of total cross sections, elastic scattering and diffractive excitation, in terms of the underlying theory, QCD. A description of the basic ideas and phenomenology is followed by a discussion of hadron-hadron and electron-proton experiments. An appendix lists recommended diffractive-physics terms and definitions. 44 refs., 6 figs.

  16. Nonperturbative QCD corrections to electroweak observables

    SciTech Connect

    Dru B Renner, Xu Feng, Karl Jansen, Marcus Petschlies

    2011-12-01

    Nonperturbative QCD corrections are important to many low-energy electroweak observables, for example the muon magnetic moment. However, hadronic corrections also play a significant role at much higher energies due to their impact on the running of standard model parameters, such as the electromagnetic coupling. Currently, these hadronic contributions are accounted for by a combination of experimental measurements and phenomenological modeling but ideally should be calculated from first principles. Recent developments indicate that many of the most important hadronic corrections may be feasibly calculated using lattice QCD methods. To illustrate this, we will examine the lattice computation of the leading-order QCD corrections to the muon magnetic moment, paying particular attention to a recently developed method but also reviewing the results from other calculations. We will then continue with several examples that demonstrate the potential impact of the new approach: the leading-order corrections to the electron and tau magnetic moments, the running of the electromagnetic coupling, and a class of the next-to-leading-order corrections for the muon magnetic moment. Along the way, we will mention applications to the Adler function, the determination of the strong coupling constant and QCD corrections to muonic-hydrogen.

  17. QCD results from D-Zero

    SciTech Connect

    Varelas, N.; D0 Collaboration

    1997-10-01

    We present recent results on jet production, dijet angular distributions, W + jets, and color coherence from pp̄ collisions at √s = 1.8 TeV at the Fermilab Tevatron Collider using the D0 detector. The data are compared to perturbative QCD calculations or to predictions of parton shower based Monte Carlo models.

  18. QCD in hadron-hadron collisions

    SciTech Connect

    Albrow, M.

    1997-03-01

    Quantum Chromodynamics provides a good description of many aspects of high energy hadron-hadron collisions, and this will be described, along with some aspects that are not yet understood in QCD. Topics include high E{sub T} jet production, direct photon, W, Z and heavy flavor production, rapidity gaps and hard diffraction.

  19. The Top Quark, QCD, And New Physics.

    DOE R&D Accomplishments Database

    Dawson, S.

    2002-06-01

    The role of the top quark in completing the Standard Model quark sector is reviewed, along with a discussion of production, decay, and theoretical restrictions on the top quark properties. Particular attention is paid to the top quark as a laboratory for perturbative QCD. As examples of the relevance of QCD corrections in the top quark sector, the calculation of e⁺e⁻ → tt̄ at next-to-leading-order QCD using the phase space slicing algorithm and the implications of a precision measurement of the top quark mass are discussed in detail. The associated production of a tt̄ pair and a Higgs boson in either e⁺e⁻ or hadronic collisions is presented at next-to-leading-order QCD and its importance for a measurement of the top quark Yukawa coupling emphasized. Implications of the heavy top quark mass for model builders are briefly examined, with the minimal supersymmetric Standard Model and topcolor discussed as specific examples.

  20. Visualization Tools for Lattice QCD - Final Report

    SciTech Connect

    Massimo Di Pierro

    2012-03-15

    Our research project is about the development of visualization tools for Lattice QCD. We developed various tools by extending existing libraries, adding new algorithms, exposing new APIs, and creating web interfaces (including the new NERSC gauge connection web site). Our tools cover the full stack of operations from automating download of data, to generating VTK files (topological charge, plaquette, Polyakov lines, quark and meson propagators, currents), to turning the VTK files into images, movies, and web pages. Some of the tools have their own web interfaces. Some lattice QCD visualizations have been created in the past but, to our knowledge, our tools are the only ones of their kind since they are general purpose, customizable, and relatively easy to use. We believe they will be valuable to physicists working in the field. They can be used to better teach Lattice QCD concepts to new graduate students; they can be used to observe the changes in topological charge density and detect possible sources of bias in computations; they can be used to observe the convergence of the algorithms at a local level and determine possible problems; they can be used to probe heavy-light mesons with currents and determine their spatial distribution; they can be used to detect corrupted gauge configurations. There are some indirect results of this grant that will benefit a broader audience than Lattice QCD physicists.

  1. The Light-Front Schrödinger Equation and the Determination of the Perturbative QCD Scale from Color Confinement: A First Approximation to QCD

    NASA Astrophysics Data System (ADS)

    Brodsky, Stanley J.; de Téramond, Guy F.; Deur, Alexandre; Dosch, Hans Günter

    2015-09-01

    The valence Fock-state wavefunctions of the light-front (LF) QCD Hamiltonian satisfy a relativistic equation of motion, analogous to the nonrelativistic radial Schrödinger equation, with an effective confining potential U which systematically incorporates the effects of higher quark and gluon Fock states. If one requires that the effective action which underlies the QCD Lagrangian remains conformally invariant and extends the formalism of de Alfaro, Fubini and Furlan to LF Hamiltonian theory, the potential U has a unique form of a harmonic oscillator potential, and a mass gap arises. The result is a nonperturbative relativistic LF quantum mechanical wave equation which incorporates color confinement and other essential spectroscopic and dynamical features of hadron physics, including a massless pion for zero quark mass and linear Regge trajectories with the same slope in the radial quantum number n and orbital angular momentum L. Only one mass parameter κ appears. The corresponding LF Dirac equation provides a dynamical and spectroscopic model of nucleons. The same LF equations arise from the holographic mapping of the soft-wall model modification of AdS5 space with a unique dilaton profile to QCD (3+1) at fixed LF time. LF holography thus provides a precise relation between the bound-state amplitudes in the fifth dimension of Anti-de Sitter (AdS) space and the boost-invariant LFWFs describing the internal structure of hadrons in physical space-time. We also show how the mass scale underlying confinement and the masses of light-quark hadrons determines the scale controlling the evolution of the perturbative QCD coupling. The relation between scales is obtained by matching the nonperturbative dynamics, as described by an effective conformal theory mapped to the LF and its embedding in AdS space, to the perturbative QCD regime computed to four-loop order. The data for the effective coupling defined from the Bjorken sum rule are remarkably consistent with the

  2. Bergman kernel and complex singularity exponent

    NASA Astrophysics Data System (ADS)

    Chen, Boyong; Lee, Hanjin

    2009-12-01

    We give a precise estimate of the Bergman kernel for the model domain defined by $\Omega_F=\{(z,w)\in \mathbb{C}^{n+1}: \mathrm{Im}\,w-|F(z)|^2>0\}$, where $F=(f_1,\dots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  3. Advanced Development of Certified OS Kernels

    DTIC Science & Technology

    2015-06-01

    Subject terms: Certified Software; Certified OS Kernels; Certified Compilers; Abstraction Layers; Modularity; Deep Specifications; Coq and Ltac libraries. Verification of a module should only need to be done once (to show that it implements its deep functional specification [14]). Global properties should be derived from … building certified abstraction layers with deep specifications. A certified layer is a new language-based module construct that consists of a triple (L1, M, …

  4. Standard Model anatomy of WIMP dark matter direct detection. II. QCD analysis and hadronic matrix elements

    NASA Astrophysics Data System (ADS)

    Hill, Richard J.; Solon, Mikhail P.

    2015-02-01

    Models of weakly interacting massive particles (WIMPs) specified at the electroweak scale are systematically matched to effective theories at hadronic scales where WIMP-nucleus scattering observables are evaluated. Anomalous dimensions and heavy-quark threshold matching conditions are computed for the complete basis of lowest-dimension effective operators involving quarks and gluons. The resulting QCD renormalization group evolution equations are solved. The status of relevant hadronic matrix elements is reviewed and phenomenological illustrations are given, including details for the computation of the universal limit of nucleon scattering with heavy SU(2)_W × U(1)_Y charged WIMPs. Several cases of previously underestimated hadronic uncertainties are isolated. The results connect arbitrary models specified at the electroweak scale to a basis of n_f = 3-flavor QCD operators. The complete basis of operators and Lorentz invariance constraints through order v²/c² in the nonrelativistic nucleon effective theory are derived.

  5. The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz

    2016-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
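
    For orientation, kernel phases are linear combinations of measured Fourier phases lying in the left null space of the matrix that maps pupil-plane phase errors to those measurements, so the combinations are, to first order, insensitive to residual aberrations. The sketch below assumes that transfer matrix A is already available (building it from the pupil model is the real work and is omitted) and uses a random stand-in for it; the dimensions are hypothetical.

```python
import numpy as np

def kernel_phase_operator(A, tol=1e-10):
    """Rows of K span the left null space of the phase-transfer matrix A,
    so K @ (measured phases) is insensitive to small pupil-plane phase errors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int((s > tol * s.max()).sum())
    return U[:, rank:].T            # (n_phases - rank) self-calibrating combinations

rng = np.random.default_rng(2)
A = rng.normal(size=(120, 40))      # stand-in for the real phase-transfer matrix
K = kernel_phase_operator(A)
phases = rng.normal(size=120)       # stand-in for measured Fourier phases
kernel_phases = K @ phases          # observables satisfying K @ A = 0
assert np.allclose(K @ A, 0.0, atol=1e-8)
```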

  6. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big data sets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world applications of small and large instance size, in the context of binary classification, multi-class problems and regression, are then reported to show that RKELM can achieve a level of generalization performance competitive with the SVM/LS-SVM at only a fraction of the computational effort.
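
    A minimal numerical sketch of the core idea described above, under my own assumptions about the kernel and regularization (an RBF kernel and a single ridge-regression solve); it is not the authors' implementation and omits their multi-class handling and universal-approximation analysis.

```python
import numpy as np

def rbf(X, C, gamma=1.0):
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkelm_fit(X, y, n_centers=50, C=10.0, gamma=1.0, seed=0):
    """Schematic reduced-kernel ELM: random centers + one ridge-regression solve,
    with no iterative training."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    H = rbf(X, centers, gamma)                          # N x m hidden-layer output
    W = np.linalg.solve(H.T @ H + np.eye(n_centers) / C, H.T @ y)
    return centers, W

def rkelm_predict(X, centers, W, gamma=1.0):
    return rbf(X, centers, gamma) @ W

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))
y = np.sign(X[:, 0] * X[:, 1])                          # toy labels in {-1, +1}
centers, W = rkelm_fit(X, y)
accuracy = (np.sign(rkelm_predict(X, centers, W)) == y).mean()
```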

  7. Kernel Non-Rigid Structure from Motion

    PubMed Central

    Gotardo, Paulo F. U.; Martinez, Aleix M.

    2013-01-01

    Non-rigid structure from motion (NRSFM) is a difficult, underconstrained problem in computer vision. The standard approach in NRSFM constrains 3D shape deformation using a linear combination of K basis shapes; the solution is then obtained as the low-rank factorization of an input observation matrix. An important but overlooked problem with this approach is that non-linear deformations are often observed; these deformations lead to a weakened low-rank constraint due to the need to use additional basis shapes to linearly model points that move along curves. Here, we demonstrate how the kernel trick can be applied in standard NRSFM. As a result, we model complex, deformable 3D shapes as the outputs of a non-linear mapping whose inputs are points within a low-dimensional shape space. This approach is flexible and can use different kernels to build different non-linear models. Using the kernel trick, our model complements the low-rank constraint by capturing non-linear relationships in the shape coefficients of the linear model. The net effect can be seen as using non-linear dimensionality reduction to further compress the (shape) space of possible solutions. PMID:24002226

  8. Balancing continuous covariates based on Kernel densities.

    PubMed

    Ma, Zhenjun; Hu, Feifang

    2013-03-01

    The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme. But breaking continuous covariates into subcategories often changes the nature of the covariates and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on kernel density estimation, which preserves the continuity of the covariates. Simulation studies show that the proposed Kernel-Minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group sizes well balanced. It is also shown that Kernel-Minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can be generally avoided through the implementation of the new method.
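
    The authors' exact allocation rule is not reproduced; the sketch below only illustrates the flavor of the approach: for each incoming subject, tentatively place them in each arm, measure the integrated squared difference between the arms' Gaussian kernel density estimates of a continuous covariate, and use a biased coin that favours the arm giving the smaller imbalance. The covariate distribution, evaluation grid, and coin probability are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def imbalance(arm_a, arm_b, grid):
    """Integrated squared difference between the two arms' Gaussian KDEs."""
    if len(arm_a) < 2 or len(arm_b) < 2:
        return 0.0
    gap = gaussian_kde(arm_a)(grid) - gaussian_kde(arm_b)(grid)
    return float((gap ** 2).sum() * (grid[1] - grid[0]))

def assign(x, arm_a, arm_b, grid, rng, p=0.85):
    """Biased-coin assignment favouring the arm that keeps the two KDEs closer."""
    d_if_a = imbalance(arm_a + [x], arm_b, grid)
    d_if_b = imbalance(arm_a, arm_b + [x], grid)
    p_a = p if d_if_a < d_if_b else 1.0 - p
    (arm_a if rng.random() < p_a else arm_b).append(x)

rng = np.random.default_rng(5)
grid = np.linspace(-4.0, 4.0, 201)
arm_a, arm_b = [], []
for covariate in rng.normal(size=200):      # continuous baseline covariate values
    assign(covariate, arm_a, arm_b, grid, rng)
print(len(arm_a), len(arm_b), imbalance(arm_a, arm_b, grid))
```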

  9. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are generally correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, coded as a sequence of flower number per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a 98 F1 hybrid mapping population. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL, which was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structure, graphs) phenotypic traits.
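
    A schematic sketch of the kind of pipeline described, with made-up data rather than the rose dataset: each shoot is coded as a fixed-length vector of flower counts per node, kernel PCA with an RBF kernel is applied, and the leading kernel principal components can then serve as quantitative traits for downstream QTL analysis. The kernel choice and parameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(6)
n_nodes = 30                                   # nodes along a shoot (assumed fixed length)
# Hypothetical flower-count sequences for two contrasting architecture types.
type_a = rng.poisson(lam=np.linspace(3.0, 0.2, n_nodes), size=(60, n_nodes))
type_b = rng.poisson(lam=np.linspace(0.2, 3.0, n_nodes), size=(60, n_nodes))
X = np.vstack([type_a, type_b]).astype(float)

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.05)
scores = kpca.fit_transform(X)                 # kernel principal components
# Each column of `scores` can be treated as a quantitative trait for QTL mapping.
```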

  10. Topological Charge Evolution in the Markov-Chain of QCD

    SciTech Connect

    Derek Leinweber; Anthony Williams; Jian-bo Zhang; Frank Lee

    2004-04-01

    The topological charge is studied on lattices of large physical volume and fine lattice spacing. We illustrate how a parity transformation on the SU(3) link-variables of lattice gauge configurations reverses the sign of the topological charge and leaves the action invariant. Random applications of the parity transformation are proposed to traverse from one topological charge sign to the other. The transformation provides an improved unbiased estimator of the ensemble average and is essential in improving the ergodicity of the Markov chain process.

  11. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
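
    For context, here is a batch-mode sketch of the nonlinear projection trick under one common convention: eigendecompose the (uncentered, since the abstract notes centering is unnecessary) kernel matrix to obtain explicit training coordinates Y with Y Yᵀ = K, and embed a new sample from its kernel values. The incremental bookkeeping that is the paper's actual contribution is not shown, and the kernel and data are assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def npt_fit(X, gamma=0.5, tol=1e-10):
    """Explicit coordinates Y with Y @ Y.T = K (one common NPT convention)."""
    K = rbf(X, X, gamma)
    vals, vecs = np.linalg.eigh(K)
    keep = vals > tol
    vals, vecs = vals[keep], vecs[:, keep]
    Y = vecs * np.sqrt(vals)                   # coordinates of the training samples
    proj = vecs / np.sqrt(vals)                # maps kernel values to coordinates
    return Y, proj

def npt_embed(x_new, X, proj, gamma=0.5):
    return rbf(x_new, X, gamma) @ proj         # coordinates of new samples

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 4))
Y, proj = npt_fit(X)
assert np.allclose(Y @ Y.T, rbf(X, X), atol=1e-6)
y_new = npt_embed(rng.normal(size=(5, 4)), X, proj)
```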

  12. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
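
    For reference, the Gaussian-kernel continuization step as it is usually written for kernel equating is sketched below, treating the discrete score probabilities and the bandwidth h as given; this is only the continuization of one score distribution, not the full equating procedure, and the example scores and probabilities are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def continuized_cdf(x, scores, probs, h=0.6):
    """Gaussian-kernel continuization of a discrete score distribution.
    The shrinkage factor a keeps the mean and variance of the discrete
    distribution unchanged (a common form in kernel equating)."""
    mu = np.sum(probs * scores)
    var = np.sum(probs * (scores - mu) ** 2)
    a = np.sqrt(var / (var + h ** 2))
    z = (x[:, None] - a * scores[None, :] - (1.0 - a) * mu) / (a * h)
    return norm.cdf(z) @ probs

scores = np.arange(0, 21)                      # possible raw scores 0..20
probs = np.full(21, 1 / 21)                    # hypothetical score probabilities
grid = np.linspace(-1.0, 21.0, 200)
F = continuized_cdf(grid, scores, probs)       # smooth, strictly increasing CDF
```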

  13. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
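
    The paper derives the optimal kernel from an end-to-end statistical system model; as a rough empirical analogue only, the sketch below fits a small convolution kernel by least squares so that convolving a degraded training image approximates the corresponding ideal image, then applies the kernel by ordinary convolution. The kernel size and image names are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def fit_small_kernel(degraded, ideal, half_width=2):
    """Least-squares (2h+1)x(2h+1) kernel mapping the degraded image toward the
    ideal one (an empirical stand-in for the model-based optimal derivation)."""
    h = half_width
    r, c = degraded.shape
    cols = []
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            shifted = np.roll(np.roll(degraded, dy, axis=0), dx, axis=1)
            cols.append(shifted[h:r - h, h:c - h].ravel())   # trim wrap-around border
    A = np.column_stack(cols)
    b = ideal[h:r - h, h:c - h].ravel()
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef.reshape(2 * h + 1, 2 * h + 1)

def restore(degraded, kernel):
    return convolve2d(degraded, kernel, mode="same", boundary="symm")

# Toy usage: blur and add noise to a synthetic scene, then restore it.
rng = np.random.default_rng(8)
ideal = rng.normal(size=(96, 96))
degraded = (convolve2d(ideal, np.full((3, 3), 1 / 9), mode="same", boundary="symm")
            + 0.05 * rng.normal(size=(96, 96)))
restored = restore(degraded, fit_small_kernel(degraded, ideal))
```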

  14. Viscous QCD matter in a hybrid hydrodynamic+Boltzmann approach

    SciTech Connect

    Song Huichao; Bass, Steffen A.; Heinz, Ulrich

    2011-02-15

    A hybrid transport approach for the bulk evolution of viscous QCD matter produced in ultra-relativistic heavy-ion collisions is presented. The expansion of the dense deconfined phase of the reaction is modeled with viscous hydrodynamics, while the dilute late hadron gas stage is described microscopically by the Boltzmann equation. The advantages of such a hybrid approach lie in the improved capability of handling large dissipative corrections in the late dilute phase of the reaction, including a realistic treatment of the nonequilibrium hadronic chemistry and kinetic freeze-out. By varying the switching temperature at which the hydrodynamic output is converted to particles for further propagation with the Boltzmann cascade we test the ability of the macroscopic hydrodynamic approach to emulate the microscopic evolution during the hadronic stage and extract the temperature dependence of the effective shear viscosity of the hadron resonance gas produced in the collision. We find that the extracted values depend on the prior hydrodynamic history and hence do not represent fundamental transport properties of the hadron resonance gas. We conclude that viscous fluid dynamics does not provide a faithful description of hadron resonance gas dynamics with predictive power, and that both components of the hybrid approach are needed for a quantitative description of the fireball expansion and its freeze-out.

  15. Mapping the QCD Phase Transition with Accreting Compact Stars

    SciTech Connect

    Blaschke, D.; Poghosyan, G.; Grigorian, H.

    2008-10-29

    We discuss an idea for how accreting millisecond pulsars could contribute to the understanding of the QCD phase transition in the high-density nuclear matter equation of state (EoS). It is based on two ingredients, the first one being a ''phase diagram'' of rapidly rotating compact star configurations in the plane of spin frequency and mass, determined with state-of-the-art hybrid equations of state, allowing for a transition to color superconducting quark matter. The second is the study of spin-up and accretion evolution in this phase diagram. We show that the quark matter phase transition leads to a characteristic line in the ω-M plane, the phase border between neutron stars and hybrid stars with a quark matter core. Along this line a drop in the pulsar's moment of inertia entails a waiting point phenomenon in the accreting millisecond pulsar (AMXP) evolution: most of these objects should therefore be found along the phase border in the ω-M plane, which may be viewed as the AMXP analog of the main sequence in the Hertzsprung-Russell diagram for normal stars. In order to prove the existence of a high-density phase transition in the cores of compact stars we need population statistics for AMXPs with sufficiently accurate determination of their masses, spin frequencies and magnetic fields.

  16. Electrical conductivity of hot QCD matter.

    PubMed

    Cassing, W; Linnyk, O; Steinert, T; Ozvenchuk, V

    2013-05-03

    We study the electric conductivity of hot QCD matter at various temperatures T within the off-shell parton-hadron-string dynamics transport approach for interacting partonic, hadronic or mixed systems in a finite box with periodic boundary conditions. The response of the strongly interacting system in equilibrium to an external electric field defines the electric conductivity σ_0. We find a sizable temperature dependence of the ratio σ_0/T, well in line with calculations in a relaxation time approach for temperatures above T_c. We also find that QCD matter even at T ≈ T_c is a much better electric conductor than Cu or Ag (at room temperature).

  17. Exploring Three Nucleon Forces in Lattice QCD

    SciTech Connect

    Doi, Takumi

    2011-10-21

    We study the three-nucleon force in N_f = 2 dynamical clover fermion lattice QCD, utilizing the Nambu-Bethe-Salpeter wave function of the three-nucleon system. Since parity-odd two-nucleon potentials are not available in lattice QCD at this moment, we develop a new formulation to extract the genuine three-nucleon force which requires only the information of parity-even two-nucleon potentials. In order to handle the extremely expensive calculation cost, we consider a specific three-dimensional coordinate configuration for the three nucleons. We find that the linear setup is advantageous, where nucleons are aligned linearly with equal spacings. The lattice calculation is performed with 16³×32 configurations at β = 1.95, m_π = 1.13 GeV generated by the CP-PACS Collaboration, and the result of the three-nucleon force in the triton channel is presented.

  18. Hadronization of QCD and effective interactions

    SciTech Connect

    Frank, M.R.

    1994-07-01

    An introductory treatment of hadronization through functional integral calculus and bifocal Bose fields is given. Emphasis is placed on the utility of this approach for providing a connection between QCD and effective hadronic field theories. The hadronic interactions obtained by this method are nonlocal due to the QCD substructure, yet, in the presence of an electromagnetic field, maintain the electromagnetic gauge invariance manifest at the quark level. A local chiral model which is structurally consistent with chiral perturbation theory is obtained through a derivative expansion of the nonlocalities with determined, finite coefficients. Tree-level calculations of the pion form factor and π-π scattering, which illustrate the dual constituent-quark-chiral-model nature of this approach, are presented.

  19. Electrical Conductivity of Hot QCD Matter

    NASA Astrophysics Data System (ADS)

    Cassing, W.; Linnyk, O.; Steinert, T.; Ozvenchuk, V.

    2013-05-01

    We study the electric conductivity of hot QCD matter at various temperatures T within the off-shell parton-hadron-string dynamics transport approach for interacting partonic, hadronic or mixed systems in a finite box with periodic boundary conditions. The response of the strongly interacting system in equilibrium to an external electric field defines the electric conductivity σ0. We find a sizable temperature dependence of the ratio σ0/T, well in line with calculations in a relaxation time approach for temperatures above Tc. We also find that QCD matter even at T≈Tc is a much better electric conductor than Cu or Ag (at room temperature).

  20. An Analytic Approach to Perturbative QCD

    NASA Astrophysics Data System (ADS)

    Magradze, B. A.

    The two-loop invariant (running) coupling of QCD is written in terms of the Lambert W function. The analyticity structure of the coupling in the complex Q²-plane is established. The corresponding analytic coupling is reconstructed via a dispersion relation. We also consider some other approximations to the QCD β-function, for which the corresponding couplings are solved in terms of the Lambert function. The Landau-gauge gluon propagator has been considered in the renormalization group invariant analytic approach (IAA). It is shown that there is a nonperturbative ambiguity in the determination of the anomalous dimension function of the gluon field. Several analytic solutions for the propagator at the one-loop order are constructed. Properties of the obtained analytical solutions are discussed.
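
    A minimal numerical illustration of the Lambert W form of the two-loop coupling, under one common convention (a = alpha_s/(4π), da/dlnQ² = -b0 a² - b1 a³ with b0 = 11 - 2nf/3, b1 = 102 - 38nf/3) and with Λ² defined so that the closed form below holds on the W_{-1} branch. The Λ value is a placeholder, not a fit, and the conventions of the paper itself may differ.

```python
import numpy as np
from scipy.special import lambertw

def alpha_s_two_loop(Q2, Lam2=0.04, nf=5):
    """Two-loop running coupling via the Lambert W function (W_{-1} branch).
    Convention assumed: a = alpha_s/(4*pi), da/dlnQ^2 = -b0*a^2 - b1*a^3, and
    a(Q^2) = -1 / (c*(1 + W_{-1}(z))) with c = b1/b0,
    z = -(1/(c*e)) * (Q^2/Lambda^2)**(-b0/c)."""
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 102.0 - 38.0 * nf / 3.0
    c = b1 / b0
    z = -(1.0 / (c * np.e)) * (Q2 / Lam2) ** (-b0 / c)
    a = -1.0 / (c * (1.0 + np.real(lambertw(z, -1))))
    return 4.0 * np.pi * a

for Q in (5.0, 91.19, 500.0):          # GeV; Lambda^2 = 0.04 GeV^2 is a placeholder
    print(Q, alpha_s_two_loop(Q ** 2))
```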

  1. The Λ(1405) in Full QCD

    SciTech Connect

    Menadue, Benjamin J.; Kamleh, Waseem; Leinweber, Derek B.; Mahbub, M. Selim

    2011-12-14

    At 1405.1 MeV, the lowest-lying negative-parity state of the Λ baryon lies surprisingly low. Indeed, this is lower than the lowest negative-parity state of the nucleon, even though the Λ(1405) possesses a valence strange quark. However, previous lattice QCD studies have been unable to identify such a low-lying state. Using the PACS-CS (2+1)-flavour full-QCD ensembles, available through the ILDG, we utilise a variational analysis with source and sink smearing to isolate this elusive state. We find three low-lying odd-parity states, and for the first time reproduce the correct level ordering with respect to the nearby scattering thresholds.

  2. η and η' mesons from lattice QCD.

    PubMed

    Christ, N H; Dawson, C; Izubuchi, T; Jung, C; Liu, Q; Mawhinney, R D; Sachrajda, C T; Soni, A; Zhou, R

    2010-12-10

    The large mass of the ninth pseudoscalar meson, the η', is believed to arise from the combined effects of the axial anomaly and the gauge field topology present in QCD. We report a realistic, 2+1-flavor, lattice QCD calculation of the η and η' masses and mixing which confirms this picture. The physical eigenstates show small octet-singlet mixing with a mixing angle of θ=-14.1(2.8)°. Extrapolation to the physical light quark mass gives, with statistical errors only, mη=573(6) MeV and mη'=947(142) MeV, consistent with the experimental values of 548 and 958 MeV.

  3. Compositeness and QCD at the SSC

    SciTech Connect

    Barnes, V.; Blumenfeld, B.; Cahn, R.; Chivukula, S.; Ellis, S.; Freeman, J.; Heusch, C.; Huston, J.; Kondo, K.; Morfin, J.

    1987-10-12

    Compositeness may be signaled by an increase in the production of high transverse momentum hadronic jet pairs or lepton pairs. The hadronic jet signal competes with the QCD production of jets, a subject of interest in its own right. Tests of perturbative QCD at the SSC will be of special interest because the calculations are expected to be quite reliable. Studies show that compositeness up to a scale of 20 to 35 TeV would be detected in hadronic jets at the SSC. Leptonic evidence would be discovered for scales up to 10 to 20 TeV. The charge asymmetry for leptons would provide information on the nature of the compositeness interaction. Calorimetry will play a crucial role in the detection of compositeness in the hadronic jet signal. Deviations from an e/h response of 1 could mask the effect. The backgrounds for lepton pair production seem manageable. 30 refs., 19 figs., 10 tabs.

  4. Nucleon Structure from Dynamical Lattice QCD

    SciTech Connect

    Huey-Wen Lin

    2007-06-01

    We present lattice QCD numerical calculations of hadronic structure functions and form factors from full-QCD lattices, with a chirally symmetric fermion action, domain-wall fermions, for the sea and valence quarks. The lattice spacing is about 0.12 fm with physical volume approximately (2 fm)³ for RBC 2-flavor ensembles and (3 fm)³ for RBC/UKQCD 2+1-flavor dynamical ones. The lightest sea quark mass is about 1/2 the strange quark mass for the former ensembles and 1/4 for the latter ones. Our calculations include: isovector vector- and axial-charge form factors and the first few moments of the polarized and unpolarized structure functions of the nucleon. Nonperturbative renormalization in the RI/MOM scheme is applied.

  5. Nucleon Structure from Dynamical Lattice QCD

    SciTech Connect

    Lin, H.-W.

    2007-06-13

    We present lattice QCD numerical calculations of hadronic structure functions and form factors from full-QCD lattices, with a chirally symmetric fermion action, domain-wall fermions, for the sea and valence quarks. The lattice spacing is about 0.12 fm with physical volume approximately (2 fm)³ for RBC 2-flavor ensembles and (3 fm)³ for RBC/UKQCD 2+1-flavor dynamical ones. The lightest sea quark mass is about 1/2 the strange quark mass for the former ensembles and 1/4 for the latter ones. Our calculations include: isovector vector- and axial-charge form factors and the first few moments of the polarized and unpolarized structure functions of the nucleon. Nonperturbative renormalization in the RI/MOM scheme is applied.

  6. Nucleon Parton Structure from Continuum QCD

    NASA Astrophysics Data System (ADS)

    Bednar, Kyle; Cloet, Ian; Tandy, Peter

    2017-01-01

    The parton structure of the nucleon is investigated using QCD's Dyson-Schwinger equations (DSEs). This formalism builds in numerous essential features of QCD, for example, the dressing of parton propagators and the dynamical formation of non-pointlike di-quark correlations. All needed elements of the approach, including the nucleon wave function solution from a Poincaré covariant Faddeev equation, are encoded in spectral-type representations in the Nakanishi style. This facilitates calculations and the necessary connections between Euclidean and Minkowski metrics. As a first step, results for the nucleon quark distribution functions will be presented. The extension to transverse momentum-dependent parton distributions (TMDs) will also be discussed. Supported by NSF Grant No. PHY-1516138.

  7. Proton spin structure from lattice QCD

    SciTech Connect

    Fukugita, M.; Kuramashi, Y.; Okawa, M.; Ukawa, A.

    1995-09-11

    A lattice QCD calculation of the proton matrix element of the flavor-singlet axial-vector current is reported. Both the connected and disconnected contributions are calculated, for the latter employing the variant method of wall source without gauge fixing. From simulations in quenched QCD with the Wilson quark action on a 16³×20 lattice at β = 5.7 (the lattice spacing a ≈ 0.14 fm), we find ΔΣ = Δu + Δd + Δs = +0.638(54) − 0.347(46) − 0.109(30) = +0.18(10), with the disconnected contribution to Δu and Δd equal to −0.119(44), which is reasonably consistent with the experiment.

  8. Phase transitions in QCD and string theory

    NASA Astrophysics Data System (ADS)

    Campbell, Bruce A.; Ellis, John; Kalara, S.; Nanopoulos, D. V.; Olive, Keith A.

    1991-02-01

    We develop a unified effective field theory approach to the high-temperature phase transitions in QCD and string theory, incorporating winding modes (time-like Polyakov loops, vortices) as well as low-mass states (pseudoscalar mesons and glueballs, matter and dilaton supermultiplets). Anomalous scale invariance and the Z3 structure of the centre of SU(3) decree a first-order phase transition with simultaneous deconfinement and Polyakov loop condensation in QCD, whereas string vortex condensation is a second-order phase transition breaking a Z2 symmetry. We argue that vortex condensation is accompanied by a dilaton phase transition to a strong coupling regime, and comment on the possible role of soliton degrees of freedom in the high-temperature string phase.

  9. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg⁻¹ to 159 kJ kg⁻¹. Many significant correlations (p < 0.05) were found between the physical properties of the wheat kernels and the pulverizing process; in particular, the kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernels.

  10. Geometric tree kernels: classification of COPD from airway tree geometry.

    PubMed

    Feragen, Aasa; Petersen, Jens; Grimm, Dominik; Dirksen, Asger; Pedersen, Jesper Holst; Borgwardt, Karsten; de Bruijne, Marleen

    2013-01-01

    Methodological contributions: This paper introduces a family of kernels for analyzing (anatomical) trees endowed with vector valued measurements made along the tree. While state-of-the-art graph and tree kernels use combinatorial tree/graph structure with discrete node and edge labels, the kernels presented in this paper can include geometric information such as branch shape, branch radius or other vector valued properties. In addition to being flexible in their ability to model different types of attributes, the presented kernels are computationally efficient and some of them can easily be computed for large datasets (N ≈ 10,000) of trees with 30-600 branches. Combining the kernels with standard machine learning tools enables us to analyze the relation between disease and anatomical tree structure and geometry. Experimental results: The kernels are used to compare airway trees segmented from low-dose CT, endowed with branch shape descriptors and airway wall area percentage measurements made along the tree. Using kernelized hypothesis testing we show that the geometric airway trees are significantly differently distributed in patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy individuals. The geometric tree kernels also give a significant increase in the classification accuracy of COPD from geometric tree structure endowed with airway wall thickness measurements in comparison with state-of-the-art methods, giving further insight into the relationship between airway wall thickness and COPD. Software: Software for computing kernels and statistical tests is available at http://image.diku.dk/aasa/software.php.

  11. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
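
    To make the idea concrete, the following NumPy sketch computes one plausible member of such a family, a Neumann-style kernel built from a binary citation matrix A; the matrix, the decay parameter gamma and the truncation length are illustrative assumptions, not the authors' exact construction. For gamma close to zero the kernel reduces to plain co-citation relatedness, while larger gamma weights longer paths more heavily and the induced ranking approaches a global-importance (eigenvector-like) measure.

      import numpy as np

      def neumann_kernel(A, gamma, n_terms=50):
          """Truncated Neumann-style kernel on a citation graph.

          A     : (n_docs x n_docs) binary citation matrix, A[i, j] = 1 if i cites j.
          gamma : decay parameter; small values emphasise direct co-citation,
                  larger values emphasise global (eigenvector-like) importance.
                  Convergence requires gamma < 1 / spectral_radius(A.T @ A).
          """
          C = A.T @ A                        # co-citation matrix
          K = np.zeros_like(C, dtype=float)
          term = C.astype(float)
          for _ in range(n_terms):
              K += term                      # K = C + gamma*C^2 + gamma^2*C^3 + ...
              term = gamma * term @ C
          return K

      # toy example: 4 documents, later documents are cited together with earlier ones
      A = np.array([[0, 1, 1, 1],
                    [0, 0, 1, 1],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])
      print(neumann_kernel(A, gamma=0.1))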

  12. BB Potentials in Quenched Lattice QCD

    SciTech Connect

    William Detmold; Kostas Orginos; Martin J. Savage

    2007-12-01

    The potentials between two B-mesons are computed in the heavy-quark limit using quenched lattice QCD at $m_\pi \sim 400~\mathrm{MeV}$. Non-zero central potentials are clearly evident in all four spin-isospin channels, $(I, s_l) = (0,0), (0,1), (1,0), (1,1)$, where $s_l$ is the total spin of the light degrees of freedom. At short distance, we find repulsion in the $I

  13. Advances in QCD sum-rule calculations

    SciTech Connect

    Melikhov, Dmitri

    2016-01-22

    We review the recent progress in the applications of QCD sum rules to hadron properties with the emphasis on the following selected problems: (i) development of new algorithms for the extraction of ground-state parameters from two-point correlators; (ii) form factors at large momentum transfers from three-point vacuum correlation functions; (iii) properties of exotic tetraquark hadrons from correlation functions of four-quark currents.

  14. Bootstrapping One-Loop QCD Amplitudes

    SciTech Connect

    Berger, Carola F.; /SLAC

    2006-09-08

    We review the recently developed bootstrap method for the computation of high-multiplicity QCD amplitudes at one loop. We illustrate the general algorithm step by step with a six-point example. The method combines (generalized) unitarity with on-shell recursion relations to determine the not cut-constructible, rational terms of these amplitudes. Our bootstrap approach works for arbitrary configurations of gluon helicities and arbitrary numbers of external legs.

  15. Hadron physics as Seiberg dual of QCD

    SciTech Connect

    Kitano, Ryuichiro

    2012-07-27

    We try to identify the light hadron world as the magnetic picture of QCD. We take both phenomenological and theoretical approaches to this hypothesis, and find that the interpretation seems to show interesting consistencies. In particular, one can identify the ρ and ω mesons as the magnetic gauge bosons, and the Higgs mechanism for them provides a dual picture of color confinement.

  16. Lattice QCD calculations of weak matrix elements

    NASA Astrophysics Data System (ADS)

    Detar, Carleton

    2017-01-01

    Lattice QCD has become the method of choice for calculating the hadronic environment of the electroweak interactions of quarks. So it is now an essential tool in the search for new physics beyond the Standard Model. Advances in computing power and algorithms have resulted in increasingly precise predictions and increasingly stringent tests of the Standard Model. I review results of recent calculations of weak matrix elements and discuss their implications for new physics. Supported by US NSF grant PHY10-034278.

  17. Theoretical overview: Hot and dense QCD in equilibrium

    SciTech Connect

    Hatsuda, Tetsuo

    1991-11-01

    Static and dynamical properties of QCD at finite temperature and density are reviewed. Non-perturbative aspects of the QCD plasma and modification of the hadron properties associated with the chiral transition are discussed on the basis of lattice data, effective theories and QCD sum rules. Special emphasis is laid on the importance of the finite baryon density to see the effects of the restoration of chiral symmetry in experiment.

  18. Quarkyonic Matter and the Phase Diagram of QCD

    SciTech Connect

    McLerran,L.

    2008-05-15

    Quarkyonic matter is a new phase of QCD at finite temperature and density which is distinct from the confined and de-confined phases. Its existence is unambiguously argued in the large number of colors limit, N_c → ∞, of QCD. Hints of its existence for QCD, N_c = 3, are shown in lattice Monte-Carlo data and in heavy ion experiments.

  19. Structure and dynamical nature of hot and dense QCD matter

    SciTech Connect

    Hatsuda, Tetsuo.

    1991-07-01

    Static and dynamical properties of QCD at finite temperature and density are reviewed. Non-perturbative aspects of the QCD plasma and the modification of the hadron properties associated with the chiral transition are discussed on the basis of lattice data, effective theories and QCD sum rules. Special emphasis is laid on the importance of the finite baryon density to see the effects of the restoration of chiral symmetry in experiment.

  20. Gravitational waves from the cosmological QCD transition

    NASA Astrophysics Data System (ADS)

    Mourão Roque, V. R. C.; Lugones, G.

    2014-09-01

    We determine the minimum fluctuations in the cosmological QCD phase transition that could be detectable by the eLISA/NGO gravitational wave observatory. To this end, we performed several hydrodynamical simulations using a state-of-the-art equation of state derived from lattice QCD simulations. Based on the fact that the viscosity per entropy density of the quark gluon plasma obtained from heavy-ion collision experiments at the RHIC and the LHC is extremely small, we considered a non-viscous fluid in our simulations. Several previous works about this transition considered a first order transition that generates turbulence which follows a Kolmogorov power law. We show that for the QCD crossover transition the turbulent spectrum must be very different because there is no viscosity and no source of continuous energy injection. As a consequence, a large amount of kinetic energy accumulates at the smallest scales. From the hydrodynamic simulations, we have obtained the spectrum of the gravitational radiation emitted by the motion of the fluid, finding that, if typical velocity and temperature fluctuations have an amplitude Δv/c ≳ 10⁻² and/or ΔT/T_c ≳ 10⁻³, they would be detected by eLISA/NGO at frequencies larger than ∼ 10⁻⁴ Hz.

  1. Astrophysical Implications of the QCD Phase Transition

    SciTech Connect

    Schaffner-Bielich, J.; Sagert, I.; Hempel, M.; Pagliara, G.; Fischer, T.; Mezzacappa, Anthony; Thielemann, Friedrich-Karl W.; Liebendoerfer, Matthias

    2009-01-01

    The possible role of a first order QCD phase transition at nonvanishing quark chemical potential and temperature for cold neutron stars and for supernovae is delineated. For cold neutron stars, we use the NJL model with a nonvanishing color superconducting pairing gap, which describes the phase transition to the 2SC and the CFL quark matter phases at high baryon densities. We demonstrate that these two phase transitions can both be present in the core of neutron stars and that they lead to the appearance of a third family of solutions for compact stars. In particular, a core of CFL quark matter can be present in stable compact star configurations when slightly adjusting the vacuum pressure to the onset of the chiral phase transition from the hadronic model to the NJL model. We show that a strong first order phase transition can have a strong impact on the dynamics of core collapse supernovae. If the QCD phase transition sets in shortly after the first bounce, a second outgoing shock wave can be generated which leads to an explosion. The presence of the QCD phase transition can be read off from the neutrino and antineutrino signal of the supernova.

  2. Hybrid model for QCD deconfining phase boundary

    NASA Astrophysics Data System (ADS)

    Srivastava, P. K.; Singh, C. P.

    2012-06-01

    The search for a proper and realistic equation of state (EOS) for studying the phase diagram between the quark gluon plasma (QGP) and hadron gas (HG) phases is still ongoing. Lattice calculations provide such an EOS for strongly interacting matter at finite temperature (T) and vanishing baryon chemical potential (μB). These calculations are of limited use at finite μB due to the appearance of the notorious sign problem. In the recent past, we constructed a hybrid model description for the QGP as well as HG phases, where we make use of a new excluded-volume model for the HG and a thermodynamically consistent quasiparticle model for the QGP phase, and used them to obtain the QCD phase boundary and a critical point (CP). Since then many lattice calculations have appeared showing various thermal and transport properties of QCD matter at finite T and μB = 0. We test our hybrid model by reproducing these data for strongly interacting matter and predict results at finite μB so that they can be tested in the future. Finally, we demonstrate the utility of the model in fixing the precise location of the CP on the QCD phase diagram, as well as the order and nature of the phase transition. We thus emphasize the suitability of the hybrid model as formulated here for providing a realistic EOS for strongly interacting matter.

  3. QCD with Chiral Imbalance: models vs. lattice

    NASA Astrophysics Data System (ADS)

    Andrianov, Alexander; Andrianov, Vladimir; Espriu, Domenec

    2017-03-01

    In heavy ion collisions (HIC) at high energies, new phases of matter may appear which must be described by QCD. These phases may have different color and flavour symmetries associated with the constituents involved in the collisions, as well as various space-time symmetries of hadron matter. Properties of the QCD medium in such matter can be approximately described, in particular, by the numbers of right-handed (RH) and left-handed (LH) light quarks. The chiral imbalance (ChI) is characterized by the difference between the numbers of RH and LH quarks and supposedly occurs in the fireball after a HIC. Accordingly, we introduce a quark chiral (axial) chemical potential which simulates a ChI emerging in such a phase. In this report we discuss the possibility of a phase with local spatial parity breaking (LPB) in such an environment and outline conceivable signatures for the detection of LPB, as well as the appearance of new states in the spectra of scalar, pseudoscalar and vector particles as a consequence of local ChI. A comparison of the results obtained in effective QCD-motivated models with lattice data is also performed.

  4. QCD in heavy quark production and decay

    SciTech Connect

    Wiss, J.

    1997-06-01

    The author discusses how QCD is used to understand the physics of heavy quark production and decay dynamics. His discussion of production dynamics primarily concentrates on charm photoproduction data which are compared to perturbative QCD calculations which incorporate fragmentation effects. He begins his discussion of heavy quark decay by reviewing data on charm and beauty lifetimes. Present data on fully leptonic and semileptonic charm decay are then reviewed. Measurements of the hadronic weak current form factors are compared to the nonperturbative QCD-based predictions of Lattice Gauge Theories. He next discusses polarization phenomena present in charmed baryon decay. Heavy Quark Effective Theory predicts that the daughter baryon will recoil from the charmed parent with nearly 100% left-handed polarization, which is in excellent agreement with present data. He concludes by discussing nonleptonic charm decay which is traditionally analyzed in a factorization framework applicable to two-body and quasi-two-body nonleptonic decays. This discussion emphasizes the important role of final state interactions in influencing both the observed decay width of various two-body final states as well as modifying the interference between interfering resonance channels which contribute to specific multibody decays. 50 refs., 77 figs.

  5. Full CKM matrix with lattice QCD

    SciTech Connect

    Okamoto, Masataka; /Fermilab

    2004-12-01

    The authors show that it is now possible to fully determine the CKM matrix, for the first time, using lattice QCD. |V_cd|, |V_cs|, |V_ub|, |V_cb| and |V_us| are, respectively, directly determined with the lattice results for form factors of semileptonic D → πlν, D → Klν, B → πlν, B → Dlν and K → πlν decays. The error from the quenched approximation is removed by using the MILC unquenched lattice gauge configurations, where the effect of u, d and s quarks is included. The error from the "chiral" extrapolation (m_l → m_ud) is greatly reduced by using improved staggered quarks. The accuracy is comparable to that of the Particle Data Group averages. In addition, |V_ud|, |V_tb|, |V_ts| and |V_td| are determined by using unitarity of the CKM matrix and the experimental result for sin(2β). In this way, they obtain all 9 CKM matrix elements, where the only theoretical input is lattice QCD. They also obtain all the Wolfenstein parameters, for the first time, using lattice QCD.

  6. QCD, Tevatron results and LHC prospects

    SciTech Connect

    Elvira, V.Daniel; /Fermilab

    2008-08-01

    We present a summary of the most recent measurements relevant to Quantum Chromodynamics (QCD) delivered by the D0 and CDF Tevatron experiments by May 2008. CDF and D0 are moving toward precision measurements of QCD based on data samples in excess of 1 fb⁻¹. The inclusive jet cross sections have been extended to forward rapidity regions and measured with unprecedented precision following improvements in the jet energy calibration. Results on dijet mass distributions, bb̄ dijet production using tracker-based triggers, underlying event in dijet and Drell-Yan samples, inclusive photon and diphoton cross sections complete the list of measurements included in this paper. Good agreement with pQCD within errors is observed for jet production measurements. An improved and consistent theoretical description is needed for photon+jets processes. Collisions at the LHC are scheduled for early fall 2008, opening an era of discoveries at the new energy frontier, 5-7 times higher than that of the Tevatron.

  7. Model-based online learning with kernels.

    PubMed

    Li, Guoqi; Wen, Changyun; Li, Zheng Guo; Zhang, Aimin; Yang, Feng; Mao, Kezhi

    2013-03-01

    New optimization models and algorithms for online learning with kernels (OLK) in classification, regression, and novelty detection are proposed in a reproducing kernel Hilbert space. Unlike the stochastic gradient descent algorithm, called the naive online Reg minimization algorithm (NORMA), OLK algorithms are obtained by solving a constrained optimization problem based on the proposed models. By exploiting the techniques of the Lagrange dual problem, as in Vapnik's support vector machine (SVM), the solution of the optimization problem can be obtained iteratively, and the iteration process is similar to that of NORMA. This further strengthens the foundation of OLK and enriches the research area of SVM. We also apply the obtained OLK algorithms to problems in classification, regression, and novelty detection, including real-time background subtraction, to show their effectiveness. The experimental results for both classification and regression illustrate that the accuracy of OLK algorithms is comparable with traditional SVM-based algorithms, such as SVM and least squares SVM (LS-SVM), and with state-of-the-art algorithms such as the kernel recursive least squares (KRLS) method and the projectron method, while it is slightly higher than that of NORMA. On the other hand, the computational cost of the OLK algorithm is comparable with or slightly lower than existing online methods, such as the above-mentioned NORMA, KRLS, and projectron methods, but much lower than that of SVM-based algorithms. In addition, unlike SVM and LS-SVM, OLK algorithms can be applied to non-stationary problems. Also, the applicability of OLK in novelty detection is illustrated by simulation results.
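
    For readers unfamiliar with the baseline mentioned above, the following Python sketch shows a NORMA-style stochastic-gradient online kernel classifier with hinge loss and an RBF kernel; the class name, parameters and toy data stream are illustrative assumptions and do not reproduce the authors' constrained-optimization OLK formulation.

      import numpy as np

      def rbf(x, y, sigma=1.0):
          return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

      class OnlineKernelClassifier:
          """NORMA-style online learner: f(x) = sum_i alpha_i * k(x_i, x)."""

          def __init__(self, eta=0.1, lam=0.01, kernel=rbf):
              self.eta, self.lam, self.kernel = eta, lam, kernel
              self.sv, self.alpha = [], []

          def decision(self, x):
              return sum(a * self.kernel(s, x) for s, a in zip(self.sv, self.alpha))

          def update(self, x, y):
              f = self.decision(x)
              # shrink existing coefficients (gradient of the regularisation term)
              self.alpha = [(1 - self.eta * self.lam) * a for a in self.alpha]
              if y * f < 1:                      # hinge loss is active
                  self.sv.append(x)
                  self.alpha.append(self.eta * y)

      # usage on a toy stream
      rng = np.random.default_rng(0)
      clf = OnlineKernelClassifier()
      for _ in range(200):
          x = rng.normal(size=2)
          y = 1.0 if x.sum() > 0 else -1.0
          clf.update(x, y)
      print(clf.decision(np.array([1.0, 1.0])), clf.decision(np.array([-1.0, -1.0])))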

  8. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illumination, facial expression, and posture. When the test sample is significantly different from the training samples of the same subject, the recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illumination, facial expression, and posture. Our work provides a simple and feasible way to obtain virtual face samples: Gaussian noise (or other types of noise) is imposed on the original training samples to obtain possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
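
    A minimal NumPy sketch of the general recipe described above (not the authors' exact objective function): noisy virtual copies of the training samples are added, representation coefficients are obtained by a ridge-regularised solve in kernel space, and the test sample is assigned to the class with the smallest feature-space reconstruction residual. The RBF kernel, noise level and regularisation weight are assumptions made for illustration.

      import numpy as np

      def rbf_kernel(X, Y, sigma=1.0):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      def kernel_crc(X_train, y_train, x_test, lam=0.1, noise=0.05, seed=0):
          """Classify x_test by kernel collaborative representation over the
          original training samples plus noisy 'virtual' copies."""
          rng = np.random.default_rng(seed)
          X_virt = X_train + noise * rng.standard_normal(X_train.shape)
          X = np.vstack([X_train, X_virt])
          y = np.concatenate([y_train, y_train])

          K = rbf_kernel(X, X)
          k = rbf_kernel(X, x_test[None, :])[:, 0]
          beta = np.linalg.solve(K + lam * np.eye(len(X)), k)   # representation coefficients

          best, best_res = None, np.inf
          for c in np.unique(y):
              m = (y == c)
              # squared residual of the class-c part of the representation, in feature space
              res = 1.0 - 2.0 * beta[m] @ k[m] + beta[m] @ K[np.ix_(m, m)] @ beta[m]
              if res < best_res:
                  best, best_res = c, res
          return best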

  9. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Corn hardness is an important property for dry- and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...

  10. Neutron scattering kernel for solid deuterium

    NASA Astrophysics Data System (ADS)

    Granada, J. R.

    2009-06-01

    A new scattering kernel to describe the interaction of slow neutrons with solid deuterium was developed. The main characteristics of that system are contained in the formalism, including the lattice's density of states, the Young-Koppel quantum treatment of the rotations, and the internal molecular vibrations. The elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects. The results from the new model are compared with the best available experimental data, showing very good agreement.

  11. Oil point pressure of Indian almond kernels

    NASA Astrophysics Data System (ADS)

    Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.

    2012-07-01

    The effect of preprocessing conditions such as moisture content, heating temperature, heating time and particle size on the oil point pressure of Indian almond kernels was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while oil point pressure decreased with increasing moisture content for fine particles.

  12. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V.

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  13. Lattice analysis for the energy scale of QCD phenomena.

    PubMed

    Yamamoto, Arata; Suganuma, Hideo

    2008-12-12

    We formulate a new framework in lattice QCD to study the relevant energy scale of QCD phenomena. By considering the Fourier transformation of link variable, we can investigate the intrinsic energy scale of a physical quantity nonperturbatively. This framework is broadly available for all lattice QCD calculations. We apply this framework for the quark-antiquark potential and meson masses in quenched lattice QCD. The gluonic energy scale relevant for the confinement is found to be less than 1 GeV in the Landau or Coulomb gauge.

  14. Nucleon QCD sum rules in the instanton medium

    SciTech Connect

    Ryskin, M. G.; Drukarev, E. G.; Sadovnikova, V. A.

    2015-09-15

    We try to find grounds for the standard nucleon QCD sum rules, based on a more detailed description of the QCD vacuum. We calculate the polarization operator of the nucleon current in the instanton medium. The medium (QCD vacuum) is assumed to be a composition of the small-size instantons and some long-wave gluon fluctuations. We solve the corresponding QCD sum rule equations and demonstrate that there is a solution with the value of the nucleon mass close to the physical one if the fraction of the small-size instantons contribution is w_s ≈ 2/3.

  15. QCD and Light-Front Holography

    SciTech Connect

    Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U.

    2010-10-27

    The soft-wall AdS/QCD model, modified by a positive-sign dilaton metric, leads to a remarkable one-parameter description of nonperturbative hadron dynamics. The model predicts a zero-mass pion for zero-mass quarks and a Regge spectrum of linear trajectories with the same slope in the leading orbital angular momentum L of hadrons and the radial quantum number N. Light-Front Holography maps the amplitudes, which are functions of the fifth dimension variable z of anti-de Sitter space, to a corresponding hadron theory quantized on the light front. The resulting Lorentz-invariant relativistic light-front wave equations are functions of an invariant impact variable ζ which measures the separation of the quark and gluonic constituents within the hadron at equal light-front time. The result is a semi-classical, frame-independent first approximation to the spectra and light-front wavefunctions of meson and baryon light-quark bound states, which in turn predict the behavior of the pion and nucleon form factors. The theory implements chiral symmetry in a novel way: the effects of chiral symmetry breaking increase as one goes toward large interquark separation, consistent with spectroscopic data, and the hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. The soft-wall model also predicts the form of the non-perturbative effective coupling α_s^AdS(Q) and its β-function, which agrees with the effective coupling α_g1 extracted from the Bjorken sum rule. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian or by applying the Lippmann-Schwinger method in order to systematically include the QCD interaction terms. A new perspective on quark and gluon condensates is also reviewed.

  16. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.
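
    The following NumPy sketch illustrates the general idea of first-level kernel learning under stated assumptions (an ARD Gaussian kernel, an LS-SVM-style ridge solve for the expansion coefficients, a quadratic penalty on the log length-scales, and a finite-difference outer gradient); it is not the authors' exact training criterion or optimiser.

      import numpy as np

      def ard_kernel(X, Y, log_ls):
          ls = np.exp(log_ls)                               # per-feature length-scales (ARD)
          d2 = (((X[:, None, :] - Y[None, :, :]) / ls) ** 2).sum(-1)
          return np.exp(-0.5 * d2)

      def fit_alpha(K, y, lam):
          """First-level LS-SVM-style solution for the expansion coefficients."""
          return np.linalg.solve(K + lam * np.eye(len(y)), y)

      def objective(X, y, log_ls, lam, nu):
          """Regularised training criterion: data fit + penalty on kernel parameters."""
          K = ard_kernel(X, X, log_ls)
          alpha = fit_alpha(K, y, lam)
          resid = y - K @ alpha
          return resid @ resid + lam * alpha @ K @ alpha + nu * log_ls @ log_ls

      def learn_kernel(X, y, lam=1e-2, nu=1e-1, lr=0.05, steps=100, eps=1e-4):
          log_ls = np.zeros(X.shape[1])
          for _ in range(steps):
              grad = np.zeros_like(log_ls)                  # finite-difference gradient
              for j in range(len(log_ls)):
                  e = np.zeros_like(log_ls)
                  e[j] = eps
                  grad[j] = (objective(X, y, log_ls + e, lam, nu)
                             - objective(X, y, log_ls - e, lam, nu)) / (2 * eps)
              log_ls -= lr * grad
          K = ard_kernel(X, X, log_ls)
          return log_ls, fit_alpha(K, y, lam)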

  17. Analysis of maize (Zea mays) kernel density and volume using micro-computed tomography and single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...

  18. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
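
    A simplified Python sketch of the GIE idea (the grid, kernel shape and bandwidth rule are illustrative assumptions): each species contributes a Gaussian surface centred on the centroid of its occurrences, with a bandwidth tied to the distance from the centroid to its farthest record, and the per-species surfaces are summed so that regions where many distributions overlap stand out as candidate areas of endemism.

      import numpy as np

      def endemism_surface(species_occurrences, grid_x, grid_y):
          """Sum of per-species Gaussian kernels centred on distribution centroids.

          species_occurrences : list of (n_i, 2) arrays of (x, y) occurrence points.
          Returns a 2-D surface; high values mark overlap of many species ranges,
          i.e. candidate areas of endemism.
          """
          gx, gy = np.meshgrid(grid_x, grid_y)
          surface = np.zeros_like(gx, dtype=float)
          for pts in species_occurrences:
              centroid = pts.mean(axis=0)
              # bandwidth from the centroid-to-farthest-occurrence distance
              radius = np.max(np.linalg.norm(pts - centroid, axis=1)) + 1e-9
              d2 = (gx - centroid[0]) ** 2 + (gy - centroid[1]) ** 2
              surface += np.exp(-d2 / (2 * radius ** 2))
          return surface

      # toy usage: two species with overlapping ranges
      rng = np.random.default_rng(1)
      occ = [rng.normal([0, 0], 0.5, (20, 2)), rng.normal([0.5, 0.2], 0.4, (15, 2))]
      surf = endemism_surface(occ, np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
      print(surf.max())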

  19. Bergman kernel, balanced metrics and black holes

    NASA Astrophysics Data System (ADS)

    Klevtsov, Semyon

    In this thesis we explore the connections between the Kahler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kahler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kahler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, the balanced metrics and Kahler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.

  20. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  1. Generalized Langevin equation with tempered memory kernel

    NASA Astrophysics Data System (ADS)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

    We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that in the presence of truncation, the particle turns from subdiffusive behavior in the short time limit to normal diffusion in the long time limit. The case of the harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By considering an external time-dependent periodic force we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle motion. Additionally, the double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter is found to have a strong influence on the behavior of these quantities, and it is shown how the truncation parameter changes the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All the results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. Such truncated Langevin equation motion can be of high relevance for the description of lateral diffusion of lipids and proteins in cell membranes.
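
    For orientation, one standard way to write such a model (a minimal sketch; the precise kernel and normalisation used by the authors may differ) is the free-particle generalized Langevin equation with an exponentially truncated power-law memory kernel,

      m\,\dot{v}(t) = -\int_0^{t} \gamma(t-t')\, v(t')\, dt' + \xi(t),
      \qquad
      \gamma(t) = \frac{\gamma_0}{\Gamma(1-\alpha)}\, e^{-t/\tau}\, t^{-\alpha}, \quad 0 < \alpha < 1,

    with the noise obeying the fluctuation-dissipation relation \langle \xi(t)\,\xi(t') \rangle = k_B T\, \gamma(|t-t'|). For t \ll \tau the kernel is effectively an untempered power law and the mean squared displacement grows subdiffusively, while for t \gg \tau the truncation makes \int_0^{\infty} \gamma(t)\, dt finite and the motion crosses over to normal diffusion, as stated in the abstract.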

  2. Transcriptome analysis of Ginkgo biloba kernels

    PubMed Central

    He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an

    2015-01-01

    Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08-Gb clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein database. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data can remarkably expand the existing transcriptome resources of Ginkgo, and provide a valuable platform to reveal more on developmental and metabolic mechanisms of this species. PMID:26500663

  3. Characterization of factors underlying the metabolic shifts in developing kernels of colored maize

    PubMed Central

    Hu, Chaoyang; Li, Quanlin; Shen, Xuefang; Quan, Sheng; Lin, Hong; Duan, Lei; Wang, Yifa; Luo, Qian; Qu, Guorun; Han, Qing; Lu, Yuan; Zhang, Dabing; Yuan, Zheng; Shi, Jianxin

    2016-01-01

    Elucidation of the metabolic pathways determining pigmentation and their underlying regulatory mechanisms in maize kernels is of high importance in attempts to improve the nutritional composition of our food. In this study, we compared dynamics in the transcriptome and metabolome between colored SW93 and white SW48 by integrating RNA-Seq and non-targeted metabolomics. Our data revealed that expression of enzyme coding genes and levels of primary metabolites decreased gradually from 11 to 21 DAP, corresponding well with the physiological change of developing maize kernels from differentiation through reserve accumulation to maturation, which was cultivar independent. A remarkable up-regulation of anthocyanin and phlobaphene pathway distinguished SW93 from SW48, in which anthocyanin regulating transcriptional factors (R1 and C1), enzyme encoding genes involved in both pathways and corresponding metabolic intermediates were up-regulated concurrently in SW93 but not in SW48. The shift from the shikimate pathway of primary metabolism to the flavonoid pathway of secondary metabolism, however, appears to be under posttranscriptional regulation. This study revealed the link between primary metabolism and kernel coloration, which facilitate further study to explore fundamental questions regarding the evolution of seed metabolic capabilities as well as their potential applications in maize improvement regarding both staple and functional foods. PMID:27739524

  4. Biochemical and molecular characterization of Avena indolines and their role in kernel texture.

    PubMed

    Gazza, Laura; Taddei, Federica; Conti, Salvatore; Gazzelloni, Gloria; Muccilli, Vera; Janni, Michela; D'Ovidio, Renato; Alfieri, Michela; Redaelli, Rita; Pogna, Norberto E

    2015-02-01

    Among cereals, Avena sativa is characterized by an extremely soft endosperm texture, which leads to some negative agronomic and technological traits. On the basis of the well-known softening effect of puroindolines in wheat kernel texture, in this study, indolines and their encoding genes are investigated in Avena species at different ploidy levels. Three novel 14 kDa proteins, showing a central hydrophobic domain with four tryptophan residues and here named vromindoline (VIN)-1,2 and 3, were identified. Each VIN protein in diploid oat species was found to be synthesized by a single Vin gene whereas, in hexaploid A. sativa, three Vin-1, three Vin-2 and two Vin-3 genes coding for VIN-1, VIN-2 and VIN-3, respectively, were described and assigned to the A, C or D genomes based on similarity to their counterparts in diploid species. Expression of oat vromindoline transgenes in the extra-hard durum wheat led to accumulation of vromindolines in the endosperm and caused an approximate 50 % reduction of grain hardness, suggesting a central role for vromindolines in causing the extra-soft texture of oat grain. Further, hexaploid oats showed three orthologous genes coding for avenoindolines A and B, with five or three tryptophan residues, respectively, but very low amounts of avenoindolines were found in mature kernels. The present results identify a novel protein family affecting cereal kernel texture and would further elucidate the phylogenetic evolution of Avena genus.

  5. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In feature space, we design a linear classifier as a human model to capture user preference knowledge, which cannot be represented linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050

  6. Kernel method based human model for enhancing interactive evolutionary optimization.

    PubMed

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In feature space, we design a linear classifier as a human model to capture user preference knowledge, which cannot be represented linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly.
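
    To illustrate the kind of surrogate described above, the sketch below uses a kernel perceptron as a bivalent (like/dislike) user model and uses it to pre-screen candidates before they are shown to the user; the classifier, kernel and control rule are assumptions for illustration rather than the authors' exact formulation.

      import numpy as np

      def rbf(x, y, sigma=1.0):
          return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

      class PreferenceModel:
          """Kernel perceptron acting as a bivalent (like / dislike) user model."""

          def __init__(self, kernel=rbf):
              self.kernel, self.sv, self.alpha = kernel, [], []

          def score(self, x):
              return sum(a * self.kernel(s, x) for s, a in zip(self.sv, self.alpha))

          def observe(self, x, label):
              """label = +1 if the user liked individual x, -1 otherwise."""
              if label * self.score(x) <= 0:        # mistake-driven perceptron update
                  self.sv.append(x)
                  self.alpha.append(float(label))

          def prescreen(self, candidates, n_keep):
              """Show the user only the candidates the model ranks highest."""
              ranked = sorted(candidates, key=self.score, reverse=True)
              return ranked[:n_keep]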

  7. Predicting β-Turns in Protein Using Kernel Logistic Regression

    PubMed Central

    Elbashir, Murtada Khalafallah; Sheng, Yu; Wang, Jianxin; Wu, FangXiang; Li, Min

    2013-01-01

    A β-turn is a secondary protein structure type that plays a significant role in protein configuration and function. On average, 25% of amino acids in protein structures are located in β-turns. It is very important to develop an accurate and efficient method for β-turn prediction. Most of the current successful β-turn prediction methods use support vector machines (SVMs) or neural networks (NNs). Kernel logistic regression (KLR) is a powerful classification technique that has been applied successfully to many classification problems. However, it is rarely used for β-turn classification, mainly because it is computationally expensive. In this paper, we used KLR to obtain sparse β-turn predictions in a short computation time. Secondary structure information and position-specific scoring matrices (PSSMs) are utilized as input features. We achieved a Q_total of 80.7% and an MCC of 50% on the BT426 dataset. These results show that the KLR method, with the right algorithm, can yield performance equivalent to or even better than NNs and SVMs in β-turn prediction. In addition, KLR yields a probabilistic outcome and has a well-defined extension to the multiclass case. PMID:23509793
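
    A minimal NumPy sketch of kernel logistic regression trained by plain gradient descent on a precomputed kernel matrix; the RBF kernel and the generic feature matrix stand in for the secondary-structure and PSSM features used in the paper, and the optimiser is not the authors' sparse algorithm.

      import numpy as np

      def rbf_kernel(X, Y, sigma=1.0):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      def train_klr(K, y, lam=1e-2, lr=0.1, steps=500):
          """Kernel logistic regression with f = K @ alpha and labels y in {-1, +1}.

          Minimises sum_i log(1 + exp(-y_i f_i)) + (lam/2) alpha^T K alpha
          by plain gradient descent on alpha.
          """
          alpha = np.zeros(len(y))
          for _ in range(steps):
              f = K @ alpha
              g = -y / (1.0 + np.exp(y * f))        # d loss_i / d f_i
              grad = K @ g + lam * (K @ alpha)      # chain rule through f = K alpha
              alpha -= lr * grad / len(y)
          return alpha

      def predict_proba(K_test_train, alpha):
          return 1.0 / (1.0 + np.exp(-(K_test_train @ alpha)))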

  8. Signals of the QCD phase transition in core-collapse supernovae.

    PubMed

    Sagert, I; Fischer, T; Hempel, M; Pagliara, G; Schaffner-Bielich, J; Mezzacappa, A; Thielemann, F-K; Liebendörfer, M

    2009-02-27

    We explore the implications of the QCD phase transition during the postbounce evolution of core-collapse supernovae. Using the MIT bag model for the description of quark matter, we model phase transitions that occur during the early postbounce evolution. This stage of the evolution can be simulated with general relativistic three-flavor Boltzmann neutrino transport. The phase transition produces a second shock wave that triggers a delayed supernova explosion. If such a phase transition happens in a future galactic supernova, its existence and properties should become observable as a second peak in the neutrino signal that is accompanied by significant changes in the energy of the emitted neutrinos. This second neutrino burst is dominated by the emission of antineutrinos because the electron degeneracy is reduced when the second shock passes through the previously neutronized matter.

  9. Sugar uptake into kernels of tunicate tassel-seed maize

    SciTech Connect

    Thomas, P.A.; Felker, F.C.; Crawford, C.G. )

    1990-05-01

    A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. [¹⁴C]Fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.

  10. Integral Transform Methods: A Critical Review of Various Kernels

    NASA Astrophysics Data System (ADS)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.
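
    As a reminder of the generic structure involved (conventions vary; the Lorentz kernel below is just one member of the class discussed), an integral transform of a response function has the form

      \Phi(\sigma) = \int_{\omega_{\rm th}}^{\infty} K(\sigma,\omega)\, R(\omega)\, d\omega,
      \qquad
      R(\omega) = \sum_{f} \big|\langle f|\hat{O}|0\rangle\big|^{2}\, \delta\!\left(\omega - E_f + E_0\right),

    and a kernel such as the Lorentzian

      K_L(\sigma_R,\sigma_I;\omega) = \frac{1}{(\omega-\sigma_R)^{2} + \sigma_I^{2}}

    turns the calculation of \Phi into a bound-state-like problem amenable to matrix (diagonalization) methods, after which the response R(\omega) in the continuum is recovered by numerical inversion of the transform.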

  11. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  12. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  13. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  14. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  15. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  16. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  17. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  18. High speed sorting of Fusarium-damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, a cost-effective method is not yet available to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  19. End-use quality of soft kernel durum wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat is known for its very hard texture, which influences how it is milled and for what products it is well suited. We developed soft kernel durum wheat lines via Ph1b-mediated homoeologous recombination with Dr. Leonard Joppa...

  20. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  1. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  2. Computing the roots of complex orthogonal and kernel polynomials

    SciTech Connect

    Saylor, P.E.; Smolarski, D.C.

    1988-01-01

    A method is presented to compute the roots of complex orthogonal and kernel polynomials. An important application of complex kernel polynomials is the acceleration of iterative methods for the solution of nonsymmetric linear equations. In the real case, the roots of orthogonal polynomials coincide with the eigenvalues of the Jacobi matrix, a symmetric tridiagonal matrix obtained from the defining three-term recurrence relationship for the orthogonal polynomials. In the real case kernel polynomials are orthogonal. The Stieltjes procedure is an algorithm to compute the roots of orthogonal and kernel polynomials based on these facts. In the complex case, the Jacobi matrix generalizes to a Hessenberg matrix, the eigenvalues of which are roots of either orthogonal or kernel polynomials. The resulting algorithm generalizes the Stieltjes procedure. It may not be defined in the case of kernel polynomials, a consequence of the fact that they are orthogonal with respect to a nonpositive bilinear form. (Another consequence is that kernel polynomials need not be of exact degree.) A second algorithm that is always defined is presented for kernel polynomials. Numerical examples are described.
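
    A small NumPy sketch of the root-finding idea described above, under the stated assumption that the polynomial family satisfies a Hessenberg recurrence z p_k(z) = sum_j H[j, k] p_j(z): the zeros of p_n are then the eigenvalues of the leading n-by-n block of H. The Chebyshev-like example at the end is only a sanity check, not one of the paper's numerical examples.

      import numpy as np

      def polynomial_roots_from_recurrence(H, n):
          """Zeros of p_n for a polynomial family satisfying
              z * p_k(z) = sum_{j=0}^{k+1} H[j, k] * p_j(z),   H[k+1, k] != 0,
          obtained as the eigenvalues of the leading n x n block of the
          (possibly complex) upper Hessenberg recurrence matrix H.
          """
          return np.linalg.eigvals(H[:n, :n])

      # check on monic Chebyshev-like polynomials: p_{k+1} = z p_k - 0.25 p_{k-1},
      # i.e. z p_k = 0.25 p_{k-1} + p_{k+1}  ->  tridiagonal (hence Hessenberg) H
      n = 6
      H = np.zeros((n + 1, n + 1))
      for k in range(n):
          if k >= 1:
              H[k - 1, k] = 0.25
          H[k + 1, k] = 1.0
      print(np.sort_complex(polynomial_roots_from_recurrence(H, n)))  # cos(j*pi/7), j = 1..6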

  3. Pion Form Factor in Chiral Limit of Hard-Wall AdS/QCD Model

    SciTech Connect

    Anatoly Radyushkin; Hovhannes Grigoryan

    2007-12-01

    We develop a formalism to calculate the form factor and charge density distribution of the pion in the chiral limit using the holographic dual model of QCD with a hard-wall cutoff. We introduce two conjugate pion wave functions and present analytic expressions for these functions and for the pion form factor. They allow one to relate such observables as the pion decay constant and the pion electric charge radius to the values of the chiral condensate and the hard-wall cutoff scale. The evolution of the pion form factor to large values of the momentum transfer is discussed, and the results are compared to existing experimental data.

  4. OSKI: A Library of Automatically Tuned Sparse Matrix Kernels

    SciTech Connect

    Vuduc, R; Demmel, J W; Yelick, K A

    2005-07-19

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
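
    The following is not the OSKI API; it is a plain Python sketch of the kind of computational kernel such a library tunes, namely a sparse matrix-vector multiply over the compressed sparse row (CSR) format, included only to make the data layout concrete.

      import numpy as np

      def csr_matvec(data, indices, indptr, x):
          # y = A @ x for a matrix stored in CSR format: data holds the nonzeros,
          # indices their column numbers, and indptr the start of each row.
          n_rows = len(indptr) - 1
          y = np.zeros(n_rows)
          for i in range(n_rows):
              start, end = indptr[i], indptr[i + 1]
              y[i] = np.dot(data[start:end], x[indices[start:end]])
          return y

      # 3x3 example matrix: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
      data = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
      indices = np.array([0, 2, 1, 0, 2])
      indptr = np.array([0, 2, 3, 5])
      x = np.array([1.0, 1.0, 1.0])
      print(csr_matvec(data, indices, indptr, x))   # -> [5. 2. 8.]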

  5. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to a theoretical damping kernel but not to a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  6. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
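
    A toy sketch of the kernelized EM formulation described above, assuming small random stand-ins for the system matrix P, the kernel matrix K built from anatomical features, and the measured counts y: the image is parameterized as x = K alpha and the usual ML-EM update is applied to the coefficients alpha. This illustrates the formulation only, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      n_pix, n_det = 64, 96

      P = rng.random((n_det, n_pix))            # stand-in system (projection) matrix
      K = rng.random((n_pix, n_pix))            # stand-in kernel matrix from anatomical features
      K /= K.sum(axis=1, keepdims=True)         # row-normalise the kernel matrix
      x_true = rng.random(n_pix)
      y = rng.poisson(P @ x_true * 50.0)        # noisy measured counts

      alpha = np.ones(n_pix)                    # kernel coefficients; image is x = K @ alpha
      sens = K.T @ P.T @ np.ones(n_det)         # sensitivity term K^T P^T 1
      for _ in range(50):
          ratio = y / np.maximum(P @ (K @ alpha), 1e-12)
          alpha *= (K.T @ (P.T @ ratio)) / sens # kernelised ML-EM update
      x_hat = K @ alpha                         # reconstructed image
      print(float(np.corrcoef(x_hat, x_true)[0, 1]))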

  7. A novel extended kernel recursive least squares algorithm.

    PubMed

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.
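
    As a point of reference for the quantities a KRLS-type recursion maintains (a kernel matrix and a coefficient vector), here is a naive online kernel ridge regression that simply re-solves the regularized system whenever a new sample arrives; the actual KRLS and Ex-KRLS algorithms replace this re-solve with recursive updates. The hyperparameters and test signal below are arbitrary.

      import numpy as np

      def rbf(a, b, gamma=1.0):
          return np.exp(-gamma * np.sum((a - b) ** 2))

      class NaiveOnlineKRR:
          # Online kernel regression that stores all samples and re-solves the
          # regularised system at each step (KRLS instead updates the inverse
          # kernel matrix recursively).
          def __init__(self, lam=1e-2, gamma=1.0):
              self.lam, self.gamma = lam, gamma
              self.X, self.y, self.alpha = [], [], None

          def update(self, x, y):
              self.X.append(np.asarray(x, dtype=float))
              self.y.append(float(y))
              K = np.array([[rbf(a, b, self.gamma) for b in self.X] for a in self.X])
              self.alpha = np.linalg.solve(K + self.lam * np.eye(len(self.X)), np.array(self.y))

          def predict(self, x):
              k = np.array([rbf(np.asarray(x, dtype=float), b, self.gamma) for b in self.X])
              return float(k @ self.alpha)

      # Track a noisy nonlinear signal sample by sample.
      rng = np.random.default_rng(0)
      model = NaiveOnlineKRR(lam=1e-2, gamma=2.0)
      for t in np.linspace(0, 4, 60):
          model.update([t], np.sin(2 * t) + 0.05 * rng.normal())
      print(round(model.predict([2.0]), 3), round(np.sin(4.0), 3))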

  8. Quarkyonic Matter and the Revised Phase Diagram of QCD

    SciTech Connect

    McLerran,L.

    2009-03-30

    At high baryon number density, it has been proposed that a new phase of QCD matter controls the physics. This matter is confining but can have densities much larger than Λ_QCD³. Its existence is argued from large N_c approximations and model computations. It is approximately chirally symmetric.

  9. Renormalization group analysis in nonrelativistic QCD for colored scalars

    SciTech Connect

    Hoang, Andre H.; Ruiz-Femenia, Pedro

    2006-01-01

    The velocity nonrelativistic QCD Lagrangian for colored heavy scalar fields in the fundamental representation of QCD and the renormalization group analysis of the corresponding operators are presented. The results are an important ingredient for renormalization group improved computations of scalar-antiscalar bound state energies and production rates at next-to-next-to-leading-logarithmic (NNLL) order.

  10. Mechanisms of chiral symmetry breaking in QCD: A lattice perspective

    NASA Astrophysics Data System (ADS)

    Giusti, Leonardo

    2016-01-01

    I briefly review two recent studies on chiral symmetry breaking in QCD: (a) a computation of the spectral density of the Dirac operator in QCD Lite, (b) a precise determination of the topological charge distribution in the SU(3) Yang-Mills theory as defined by evolving the fundamental gauge field with the Yang-Mills gradient flow equation.

  11. Confinining properties of QCD in strong magnetic backgrounds

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; D'Elia, Massimo; Mariti, Marco; Mesiti, Michele; Negro, Francesco; Rucci, Andrea; Sanfilippo, Francesco

    2017-03-01

    Strong magnetic backgrounds are known to modify QCD properties at a nonperturbative level. We discuss recent lattice results, obtained for Nf = 2 + 1 QCD with physical quark masses, concerning in particular the modifications and the anisotropies induced at the level of the static quark-antiquark potential, both at zero and finite temperature.

  12. Lattice QCD production on commodity clusters at Fermilab

    SciTech Connect

    D. Holmgren et al.

    2003-09-30

    We describe the construction and results to date of Fermilab's three Myrinet-networked lattice QCD production clusters (an 80-node dual Pentium III cluster, a 48-node dual Xeon cluster, and a 128-node dual Xeon cluster). We examine a number of aspects of performance of the MILC lattice QCD code running on these clusters.

  13. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties geographically according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
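
    A schematic of the statistical workflow described above (an omnibus Friedman test across the three reconstructions, followed by pairwise Wilcoxon comparisons with a Bonferroni correction), using randomly generated placeholder ratings rather than the study data.

      import numpy as np
      from scipy.stats import friedmanchisquare, wilcoxon

      rng = np.random.default_rng(3)
      n_patients = 31
      # Placeholder 5-point Likert ratings of one structure for the three reconstructions.
      soft  = rng.integers(2, 6, n_patients)
      hard  = rng.integers(1, 5, n_patients)
      mixed = rng.integers(2, 6, n_patients)

      # Omnibus test across the three related samples.
      stat, p = friedmanchisquare(soft, hard, mixed)
      print(f"Friedman: chi2={stat:.2f}, p={p:.3f}")

      # Post hoc pairwise comparisons (paired Wilcoxon; Bonferroni over 3 pairs).
      for name, (a, b) in {"mixed vs soft": (mixed, soft),
                           "mixed vs hard": (mixed, hard),
                           "soft vs hard": (soft, hard)}.items():
          w, pw = wilcoxon(a, b)
          print(f"{name}: p={min(pw * 3, 1.0):.3f}")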

  14. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat and maize data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects.
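
    A minimal sketch of the Gaussian-kernel idea, assuming simulated markers and phenotypes: build a Gaussian kernel from marker distances and fit a kernel ridge regression as a stand-in for the RKHS regression models compared in the study. The bandwidth h is fixed here, whereas RKHS EB estimates it empirically and RKHS KA averages over several kernels.

      import numpy as np

      rng = np.random.default_rng(7)
      n_lines, n_markers = 120, 400
      X = rng.integers(0, 3, (n_lines, n_markers)).astype(float)   # simulated marker matrix (0/1/2)
      beta = rng.normal(0, 0.1, n_markers)
      y = X @ beta + rng.normal(0, 1.0, n_lines)                   # simulated phenotypes

      # Gaussian kernel K_ij = exp(-h * d_ij^2 / median(d^2)) built from marker distances.
      D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
      h = 1.0                                                      # fixed bandwidth for this sketch
      K = np.exp(-h * D2 / np.median(D2[np.triu_indices(n_lines, 1)]))

      # Kernel ridge regression: train on the first 90 lines, predict the rest.
      tr, te = np.arange(90), np.arange(90, n_lines)
      lam = 0.5
      alpha = np.linalg.solve(K[np.ix_(tr, tr)] + lam * np.eye(len(tr)), y[tr])
      y_hat = K[np.ix_(te, tr)] @ alpha
      print("predictive correlation:", round(float(np.corrcoef(y_hat, y[te])[0, 1]), 2))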

  15. A visualization tool for the kernel-driven model with improved ability in data analysis and kernel assessment

    NASA Astrophysics Data System (ADS)

    Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan

    2016-10-01

    The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used for many aspects of remote sensing. With the development of the kernel-driven model, there is a need to further assess the performance of newly developed kernels. The use of visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not contain a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. The visualization of inputted multi-angle measurements enhances understanding of multi-angle measurements and allows the choice of measurements with good representativeness. The visualization of modeling results facilitates the assessment of newly developed kernels. The study shows that the visualization tool MaKeMAT can promote the widespread application of the kernel-driven model.
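
    To make the underlying model concrete, here is a sketch of the linear kernel-driven BRDF fit that such a tool visualizes: reflectance is modeled as R = f_iso + f_vol·K_vol + f_geo·K_geo and the three coefficients are retrieved by linear least squares. The kernel values below are random placeholders standing in for values precomputed from the sun/view geometry (e.g. RossThick and LiSparse kernels), which are not computed here.

      import numpy as np

      def fit_kernel_driven(reflectance, k_vol, k_geo):
          # Fit R = f_iso + f_vol * K_vol + f_geo * K_geo by linear least squares.
          A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
          coeffs, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
          return coeffs                       # (f_iso, f_vol, f_geo)

      # Synthetic multi-angle observations generated from known coefficients.
      rng = np.random.default_rng(5)
      k_vol = rng.uniform(-0.1, 0.6, 25)      # placeholder volumetric kernel values
      k_geo = rng.uniform(-1.5, 0.0, 25)      # placeholder geometric kernel values
      true = np.array([0.25, 0.10, 0.05])     # f_iso, f_vol, f_geo
      refl = true[0] + true[1] * k_vol + true[2] * k_geo + 0.005 * rng.normal(size=25)
      print(fit_kernel_driven(refl, k_vol, k_geo))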

  16. LATTICE QCD AT FINITE TEMPERATURE AND DENSITY.

    SciTech Connect

    BLUM,T.; CREUTZ,M.; PETRECZKY,P.

    2004-02-24

    With the operation of the RHIC heavy ion program, the theoretical understanding of QCD at finite temperature and density has become increasingly important. Though QCD at finite temperature has been extensively studied using lattice Monte-Carlo simulations over the past twenty years, most physical questions relevant for RHIC (and future) heavy ion experiments remain open. In lattice QCD at finite temperature and density there have been at least two major advances in recent years. First, for the first time calculations of real time quantities, like meson spectral functions, have become available. Second, the lattice study of the QCD phase diagram and equation of state has been extended to finite baryon density by several groups. Both issues were extensively discussed in the course of the workshop. A real highlight was the study of the QCD phase diagram in the (T, μ) plane by Z. Fodor and S. Katz and the determination of the critical end-point for the physical value of the pion mass. This was the first time such lattice calculations at the physical pion mass had been performed. The results by Z. Fodor and S. Katz were obtained using a multi-parameter re-weighting method. Other determinations of the critical end point were also presented, in particular using a Taylor expansion around μ = 0 (Bielefeld group, Ejiri et al.) and using analytic continuation from imaginary chemical potential (Ph. de Forcrand and O. Philipsen). The result based on Taylor expansion agrees within errors with the new prediction of Z. Fodor and S. Katz, while methods based on analytic continuation still predict a higher value for the critical baryon density. Most of the thermodynamics studies in full QCD (including those presented at this workshop) have been performed using quite coarse lattices, a = 0.2-0.3 fm. Therefore one may worry about cutoff effects in different thermodynamic quantities, like the transition temperature T_tr. At the workshop U. Heller presented a study of the transition

  17. High energy hadron collisions in QCD

    NASA Astrophysics Data System (ADS)

    Levin, E. M.; Ryskin, M. G.

    1990-05-01

    In this review we present the microscopic approach to large cross section physics at high energy, based on the leading logarithmic approximation of perturbative QCD and the reggeon diagram technique. We insist that at high energy the main source of secondary hadrons is the production and fragmentation of gluon minijets with transverse momentum q_t ≈ q_0, which grows rapidly with energy, namely q_t² ≈ q_0² ≈ Λ² exp(2.5√(ln s)). Such a large value of the transverse momentum allows us to adopt perturbative QCD for high energy hadron collisions. To completely avoid the unknown confinement problem, a new scale Q̄_0 (Q̄_0 ≈ 1 GeV, α_s(Q̄_0²) < 1) is introduced in our calculations and only gluon momenta q_t > Q̄_0 are taken into account in any integration. All our results depend only slightly on the value of Q̄_0. It is shown that perturbative QCD is able to describe the main properties of hadron interactions at high energy, namely the inclusive spectra of secondary hadrons as functions of y and q_t, including small q_t ≲ 300 MeV, in a wide energy range √s = 50-900 GeV, the multiplicity distribution, the mean transverse momentum versus multiplicity, and so on. We use only three phenomenological parameters in this description of the experimental data; their values are in agreement with theoretical estimates. Our approach predicts a rapid increase of the mean transverse momentum of secondary hadrons, q_t ≈ q_0, where q_0 = 2.5 GeV at √s = 0.5 TeV and q_0 ≈ 7 GeV at √s = 40 TeV, a total multiplicity N ≈ q_0², a total cross section σ_t ≈ ln²s, and a comparatively slow increase of the diffraction dissociation cross section σ_D ≈ ln s.

  18. Transversity from First Principles in QCD

    SciTech Connect

    Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins

    2012-02-16

    Transversity observables, such as the T-odd Sivers single-spin asymmetry measured in deep inelastic lepton scattering on polarized protons and the distributions which are measured in deeply virtual Compton scattering, provide important constraints on the fundamental quark and gluon structure of the proton. In this talk I discuss the challenge of computing these observables from first principles, i.e., quantum chromodynamics itself. A key step is the determination of the frame-independent light-front wavefunctions (LFWFs) of hadrons - the QCD eigensolutions which are analogs of the Schroedinger wavefunctions of atomic physics. The lensing effects of initial-state and final-state interactions, acting on LFWFs with different orbital angular momentum, lead to T-odd transversity observables such as the Sivers, Collins, and Boer-Mulders distributions. The lensing effect also leads to leading-twist phenomena which break leading-twist factorization, such as the breakdown of the Lam-Tung relation in Drell-Yan reactions. A similar rescattering mechanism also leads to diffractive deep inelastic scattering, as well as nuclear shadowing and non-universal antishadowing. It is thus important to distinguish 'static' structure functions, the probability distributions computed from the target hadron's light-front wavefunctions, from 'dynamical' structure functions which include the effects of initial- and final-state rescattering. I also discuss related effects such as the J = 0 fixed pole contribution which appears in the real part of the virtual Compton amplitude. AdS/QCD, together with 'Light-Front Holography', provides a simple Lorentz-invariant color-confining approximation to QCD which is successful in accounting for light-quark meson and baryon spectroscopy as well as hadronic LFWFs.

  19. Quenching parameter in a holographic thermal QCD

    NASA Astrophysics Data System (ADS)

    Patra, Binoy Krishna; Arya, Bhaskar

    2017-01-01

    We have calculated the quenching parameter, q̂, in a model-independent way using the gauge-gravity duality. In earlier calculations, the geometry on the gravity side at finite temperature was usually taken as the pure AdS black hole metric, for which the dual gauge theory becomes conformally invariant, unlike QCD. Therefore we use a metric which incorporates the fundamental quarks by embedding coincident D7 branes in the Klebanov-Tseytlin background, and a finite temperature is switched on by inserting a black hole into the background, known as the OKS-BH metric. Further inclusion of an additional UV cap to the metric prepares the dual gauge theory to run similarly to thermal QCD. Moreover, q̂ is usually defined in the literature from the Glauber-model perturbative QCD evaluation of the Wilson loop, which has no reason to hold if the coupling is large and is thus against the main idea of gauge-gravity duality. Thus we use an appropriate definition of q̂: q̂ L⁻ = 1/L², where L is the separation for which the Wilson loop is equal to some specific value. The above two refinements cause q̂ to vary with the temperature as T⁴ always and to depend linearly on the light-cone time L⁻ with an additional 1/L⁻ correction term in the short-distance limit, whereas in the long-distance limit q̂ depends only linearly on L⁻ with no correction term. These observations agree with other holographic calculations directly or indirectly.

  20. Applying generalized Padé approximants in analytic QCD models

    NASA Astrophysics Data System (ADS)

    Cvetič, Gorazd; Kögerler, Reinhart

    2011-09-01

    A method of resummation of truncated perturbation series, related to diagonal Padé approximants but giving results independent of the renormalization scale, was developed more than ten years ago by us with a view to applying it in perturbative QCD. We now apply this method in analytic QCD models, i.e., models where the running coupling has no unphysical singularities, and we show that the method has attractive features, such as rapid convergence. The method can be regarded as a generalization of the scale-setting methods of Stevenson, Grunberg, and Brodsky-Lepage-Mackenzie. The method involves the fixing of various scales and weight coefficients via an auxiliary construction of diagonal Padé approximants. In low-energy QCD observables, some of these scales sometimes become low at high orders, which prevents the method from being effective in perturbative QCD, where the coupling has unphysical singularities at low spacelike momenta. There are no such problems in analytic QCD.

  1. Exploring dense and cold QCD in magnetic fields

    NASA Astrophysics Data System (ADS)

    Ferrer, E. J.; de la Incera, V.

    2016-08-01

    Strong magnetic fields are commonly generated in off-central relativistic heavy-ion collisions at the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Lab and at the Large Hadron Collider at CERN, and have been used to probe the topological configurations of the QCD vacuum. A strong magnetic field can affect the character and location of the QCD critical point, influence the QCD phases, and lead to anomalous transport of charge. To take advantage of the magnetic field as a probe of QCD at higher baryon densities, we are going to need experiments capable of scanning the lower energy region. In this context, the nuclotron-based ion collider facility (NICA) at JINR offers a unique opportunity to explore such a region and complement alternative programs at RHIC and other facilities. In this paper we discuss some relevant problems of the interplay between QCD and magnetic fields and the important role the experiments at NICA can play in tackling them.

  2. Algorithms for Disconnected Diagrams in Lattice QCD

    SciTech Connect

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Konstantinos; Yoon, Boram; Gupta, Rajan; Syritsyn, Sergey

    2016-11-01

    Computing disconnected diagrams in Lattice QCD (operator insertion into a quark loop) entails the computationally demanding problem of taking the trace of the all-to-all quark propagator. We first outline the basic algorithm used to compute a quark loop as well as improvements to this method. Then, we motivate and introduce an algorithm based on the synergy between hierarchical probing and singular value deflation. We present results for the chiral condensate using a 2+1-flavor clover ensemble and compare estimates of the nucleon charges with the basic algorithm.
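
    A toy version of the stochastic estimation that underlies such calculations (not the hierarchical-probing/deflation algorithm itself): a Hutchinson-type estimator of tr(M⁻¹) with Z₂ noise vectors, using a small well-conditioned random matrix in place of the Dirac operator.

      import numpy as np

      rng = np.random.default_rng(11)
      n = 200
      A = rng.normal(size=(n, n))
      M = A @ A.T + n * np.eye(n)             # well-conditioned stand-in for the Dirac operator

      def z2_trace_inverse(M, n_noise):
          # Hutchinson estimator: tr(M^{-1}) ~ (1/N) sum_i eta_i^T M^{-1} eta_i
          # with Z2 noise vectors eta (entries +1/-1, E[eta eta^T] = I).
          n = M.shape[0]
          est = 0.0
          for _ in range(n_noise):
              eta = rng.choice([-1.0, 1.0], size=n)
              x = np.linalg.solve(M, eta)     # in lattice QCD this would be an iterative solve
              est += eta @ x
          return est / n_noise

      print("stochastic estimate:", round(z2_trace_inverse(M, 50), 3))
      print("exact trace:        ", round(np.trace(np.linalg.inv(M)), 3))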

  3. Isoscalar meson spectroscopy from lattice QCD

    SciTech Connect

    Jozef Dudek, Robert Edwards, David Richards, Christopher Thomas, Balint Joo, Michael Peardon

    2011-06-01

    We extract to high statistical precision an excited spectrum of single-particle isoscalar mesons using lattice QCD, including states of high spin and, for the first time, light exotic JPC isoscalars. The use of a novel quark field construction has enabled us to overcome the long-standing challenge of efficiently including quark-annihilation contributions. Hidden-flavor mixing angles are extracted and while most states are found to be close to ideally flavor mixed, there are examples of large mixing in the pseudoscalar and axial sectors in line with experiment. The exotic JPC isoscalar states appear at a mass scale comparable to the exotic isovector states.

  4. Numerical approach to Coulomb gauge QCD

    SciTech Connect

    Matevosyan, Hrayr H.; Szczepaniak, Adam P.; Bowman, Patrick O.

    2008-07-01

    We calculate the ghost two-point function in Coulomb gauge QCD with a simple model vacuum gluon wave function using Monte Carlo integration. This approach extends the previous analytic studies of the ghost propagator with this ansatz, where a ladder-rainbow expansion was unavoidable for calculating the path integral over gluon field configurations. The new approach allows us to study the possible critical behavior of the coupling constant, as well as the Coulomb potential derived from the ghost dressing function. We demonstrate that IR enhancement of the ghost correlator or Coulomb form factor fails to quantitatively reproduce confinement when a Gaussian vacuum wave functional is used.

  5. The lowest Landau level in QCD

    NASA Astrophysics Data System (ADS)

    Bruckmann, Falk; Endrődi, Gergely; Giordano, Matteo; Katz, Sándor D.; Kovács, Tamás G.; Pittler, Ferenc; Wellnhofer, Jacob

    2017-03-01

    The thermodynamics of Quantum Chromodynamics (QCD) in external (electro-)magnetic fields shows some unexpected features like inverse magnetic catalysis, which have been revealed mainly through lattice studies. Many effective descriptions, on the other hand, use Landau levels or approximate the system by just the lowest Landau level (LLL). Analyzing lattice configurations we ask whether such a picture is justified. We find the LLL to be separated from the rest by a spectral gap in the two-dimensional Dirac operator and analyze the corresponding LLL signature in four dimensions. We determine to what extent the quark condensate is LLL dominated at strong magnetic fields.

  6. QCD, unification and the road to asymptopia

    SciTech Connect

    Lindenbaum, S.J.

    1980-11-01

    Attempts to describe interactions at extremely high energies are addressed. Previous beliefs that asymptopia - the theoretically promised land where all asymptotic theorems come true - was reached have always proven false. Present estimates of asymptopia range from 10^5 GeV to 10^16 GeV. In the author's opinion it is premature to believe that the universe is described by a hierarchy of nested gauge groups. The establishment of QCD as the nonabelian gauge group describing strong interactions has not yet been accomplished. 2 figures. (RWR)

  7. Nuclear correlation functions in lattice QCD

    SciTech Connect

    Detmold, William; Orginos, Konstantinos

    2013-06-01

    We consider the problem of calculating the large number of Wick contractions necessary to compute states with the quantum numbers of many baryons in lattice QCD. We consider a constructive approach and a determinant-based approach and show that these methods allow the required contractions to be performed for certain choices of interpolating operators. Examples of correlation functions computed using these techniques are shown for the quantum numbers of the light nuclei ⁴He, ⁸Be, ¹²C, ¹⁶O and ²⁸Si.

  8. Three loop cusp anomalous dimension in QCD.

    PubMed

    Grozin, Andrey; Henn, Johannes M; Korchemsky, Gregory P; Marquard, Peter

    2015-02-13

    We present the full analytic result for the three loop angle-dependent cusp anomalous dimension in QCD. With this result, infrared divergences of planar scattering processes with massive particles can be predicted to that order. Moreover, we define a closely related quantity in terms of an effective coupling defined by the lightlike cusp anomalous dimension. We find evidence that this quantity is universal for any gauge theory and use this observation to predict the nonplanar n_f-dependent terms of the four loop cusp anomalous dimension.

  9. Lattice QCD with mismatched fermi surfaces.

    PubMed

    Yamamoto, Arata

    2014-04-25

    We study two flavor fermions with mismatched chemical potentials in quenched lattice QCD. We first consider a large isospin chemical potential, where a charged pion is condensed, and then introduce a small mismatch between the chemical potentials of the up quark and the down antiquark. We find that the homogeneous pion condensate is destroyed by the mismatch of the chemical potentials. We also find that the two-point correlation function shows spatial oscillation, which indicates an inhomogeneous ground state, although it is not massless but massive in the present simulation setup.

  10. QCD on the connection machine: beyond LISP

    NASA Astrophysics Data System (ADS)

    Brickner, Ralph G.; Baillie, Clive F.; Johnsson, S. Lennart

    1991-04-01

    We report on the status of code development for a simulation of quantum chromodynamics (QCD) with dynamical Wilson fermions on the Connection Machine model CM-2. Our original code, written in Lisp, gave performance in the near-GFLOPS range. We have rewritten the most time-consuming parts of the code in the low-level programming systems CMIS, including the matrix multiply and the communication. Current versions of the code run at approximately 3.6 GFLOPS for the fermion matrix inversion, and we expect the next version to reach or exceed 5 GFLOPS.

  11. Pion electric polarizability from lattice QCD

    SciTech Connect

    Alexandru, Andrei; Lujan, Michael; Freeman, Walter; Lee, Frank

    2016-01-22

    Electromagnetic polarizabilities are important parameters for understanding the interaction between photons and hadrons. For pions these quantities are poorly constrained experimentally since they can only be measured indirectly. New experiments at CERN and Jefferson Lab are planned that will measure the polarizabilities more precisely. Lattice QCD can be used to compute these quantities directly in terms of quark and gluon degrees of freedom, using the background field method. We present results for the electric polarizability for two different quark masses, light enough to connect to chiral perturbation theory. These are currently the lightest quark masses used in polarizability studies.

  12. Extracting electric polarizabilities from lattice QCD

    SciTech Connect

    Detmold, W.; Tiburzi, B. C.; Walker-Loud, A.

    2009-05-01

    Charged and neutral, pion and kaon electric polarizabilities are extracted from lattice QCD using an ensemble of anisotropic gauge configurations with dynamical clover fermions. We utilize classical background fields to access the polarizabilities from two-point correlation functions. Uniform background fields are achieved by quantizing the electric field strength with the proper treatment of boundary flux. These external fields, however, are implemented only in the valence quark sector. A novel method to extract charged particle polarizabilities is successfully demonstrated for the first time.

  14. QCD for Postgraduates (5/5)

    ScienceCinema

    None

    2016-07-12

    Modern QCD - Lecture 5. We will introduce and discuss in some detail the two main classes of jets: cone type and sequential-recombination type. We will discuss their basic properties, as well as more advanced concepts such as jet substructure, jet filtering, ways of optimizing the jet radius, ways of defining the areas of jets, and ways of establishing the quality measure of a jet algorithm in terms of discriminating power in specific searches. Finally we will discuss applications for Higgs searches involving boosted particles.

  15. The photo-philic QCD axion

    NASA Astrophysics Data System (ADS)

    Farina, Marco; Pappadopulo, Duccio; Rompineve, Fabrizio; Tesi, Andrea

    2017-01-01

    We propose a framework in which the QCD axion has an exponentially large coupling to photons, relying on the "clockwork" mechanism. We discuss the impact of present and future axion experiments on the parameter space of the model. In addition to the axion, the model predicts a large number of pseudoscalars which can be light and observable at the LHC. In the most favorable scenario, axion Dark Matter will give a signal in multiple axion detection experiments and the pseudo-scalars will be discovered at the LHC, allowing us to determine most of the parameters of the model.

  16. QCD SPIN PHYSICS IN HADRONIC INTERACTIONS.

    SciTech Connect

    VOGELSANG,W.

    2007-06-19

    We discuss spin phenomena in high-energy hadronic scattering, with a particular emphasis on the spin physics program now underway at the first polarized proton-proton collider, RHIC. Experiments at RHIC unravel the spin structure of the nucleon in new ways. Prime goals are to determine the contribution of gluon spins to the proton spin, to elucidate the flavor structure of quark and antiquark polarizations in the nucleon, and to help clarify the origin of transverse-spin phenomena in QCD. These lectures describe some aspects of this program and of the associated physics.

  17. Supersymmetric QCD vacua and geometrical engineering

    SciTech Connect

    Tatar, Radu; Wetenhall, Ben

    2008-02-15

    We consider the geometrical engineering constructions for the N=1 supersymmetric QCD vacua recently proposed by Giveon and Kutasov. After 1 T-duality, the geometries with wrapped D5 branes become N=1 brane configurations with NS branes and D4 branes. The field theories encoded by the geometries contain extra massive adjoint fields for the flavor group. After performing a flop, the geometries contain branes, antibranes and branes wrapped on nonholomorphic cycles. The various tachyon condensations between pairs of wrapped D5 branes and anti-D5 branes together with deformations of the cycles give rise to a variety of supersymmetric and metastable nonsupersymmetric vacua.

  18. BFKL equation with running QCD coupling and HERA data

    NASA Astrophysics Data System (ADS)

    Levin, Eugene; Potashnikova, Irina

    2014-02-01

    In this paper we developed an approach based on the BFKL evolution in ln(Q²). We show that the simplest diffusion approximation with running QCD coupling is able to describe the HERA experimental data on the deep inelastic structure function with good χ²/d.o.f. ≈ 1.3. From our description of the experimental data we learned several lessons: (i) the non-perturbative physics at long distances starts to show up at Q² = 0.25 GeV²; (ii) the scattering amplitude at Q² = 0.25 GeV² cannot be written as a sum of a soft Pomeron and a secondary Reggeon, but the Pomeron interactions should be taken into account; (iii) the Pomeron interactions can be reduced to the enhanced diagrams and, therefore, we do not see any need for shadowing corrections at HERA energies; and (iv) we demonstrated that the shadowing correction could be sizable at energies higher than HERA energies without any contradiction with our initial conditions.

  19. Sivers and Boer-Mulders observables from lattice QCD.

    SciTech Connect

    B.U. Musch, Ph. Hagler, M. Engelhardt, J.W. Negele, A. Schafer

    2012-05-01

    We present a first calculation of transverse momentum dependent nucleon observables in dynamical lattice QCD employing non-local operators with staple-shaped, 'process-dependent' Wilson lines. The use of staple-shaped Wilson lines allows us to link lattice simulations to TMD effects determined from experiment, and in particular to access non-universal, naively time-reversal odd TMD observables. We present and discuss results for the generalized Sivers and Boer-Mulders transverse momentum shifts for the SIDIS and DY cases. The effect of staple-shaped Wilson lines on T-even observables is studied for the generalized tensor charge and a generalized transverse shift related to the worm-gear function g_1T. We emphasize the dependence of these observables on the staple extent and the Collins-Soper evolution parameter. Our numerical calculations use an n_f = 2+1 mixed action scheme with domain wall valence fermions on an Asqtad sea and pion masses of 369 MeV as well as 518 MeV.

  20. On the Kernelization Complexity of Colorful Motifs

    NASA Astrophysics Data System (ADS)

    Ambalath, Abhimanyu M.; Balasundaram, Radheshyam; Rao H., Chintan; Koppula, Venkata; Misra, Neeldhara; Philip, Geevarghese; Ramanujan, M. S.

    The Colorful Motif problem asks if, given a vertex-colored graph G, there exists a subset S of vertices of G such that the graph induced by G on S is connected and contains every color in the graph exactly once. The problem is motivated by applications in computational biology and is also well-studied from the theoretical point of view. In particular, it is known to be NP-complete even on trees of maximum degree three [Fellows et al, ICALP 2007]. In their pioneering paper that introduced the color-coding technique, Alon et al. [STOC 1995] show, inter alia, that the problem is FPT on general graphs. More recently, Cygan et al. [WG 2010] showed that Colorful Motif is NP-complete on comb graphs, a special subclass of the set of trees of maximum degree three. They also showed that the problem is not likely to admit polynomial kernels on forests.
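
    To make the problem definition concrete, a brute-force reference implementation is sketched below: it enumerates vertex subsets whose size equals the number of colors and checks connectivity and that each color occurs exactly once. It illustrates only the definition, not the color-coding or kernelization techniques discussed in the cited works; the example graph and colors are made up.

      from itertools import combinations
      from collections import deque

      def is_connected(vertices, adj):
          # Breadth-first search restricted to the chosen vertex set.
          vertices = set(vertices)
          start = next(iter(vertices))
          seen, queue = {start}, deque([start])
          while queue:
              v = queue.popleft()
              for u in adj[v]:
                  if u in vertices and u not in seen:
                      seen.add(u)
                      queue.append(u)
          return seen == vertices

      def find_colorful_motif(adj, color):
          # Return a vertex set inducing a connected subgraph that contains every
          # color exactly once, or None. Brute force over subsets of size = #colors.
          palette = set(color.values())
          for S in combinations(adj, len(palette)):
              if sorted(color[v] for v in S) == sorted(palette) and is_connected(S, adj):
                  return set(S)
          return None

      # Small example: a path a-b-c-d with colors red, green, blue, red.
      adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
      color = {"a": "red", "b": "green", "c": "blue", "d": "red"}
      print(find_colorful_motif(adj, color))   # a colorful connected set such as {'a', 'b', 'c'}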