Sample records for QCD evolution kernels

  1. Wilson Dslash Kernel From Lattice QCD Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high-energy physics. LQCD is traditionally one of the first applications ported to new high-performance computing architectures, and LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, which makes them ideal for illustrating several optimization techniques. In this chapter we detail our work on optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; as we show, however, the technique gives excellent performance on the regular Xeon architecture as well.

  2. QCD Evolution 2016

    NASA Astrophysics Data System (ADS)

    The QCD Evolution 2016 workshop was held at the National Institute for Subatomic Physics (Nikhef) in Amsterdam, May 30 - June 3, 2016. The workshop is a continuation of a series of workshops held during five consecutive years: in 2011, 2012, 2013 and 2015 at Jefferson Lab, and in 2014 in Santa Fe, NM. With the rapid developments in our understanding of the evolution of parton distributions, including low-x physics, TMDs, GPDs and higher-twist correlation functions, and the associated progress in perturbative QCD, lattice QCD and effective field theory techniques, we look forward to yet another exciting meeting in 2016. The program of QCD Evolution 2016 will pay special attention to topics of importance for ongoing experiments, in the full range from Jefferson Lab energies to LHC energies, and for future experiments such as an Electron-Ion Collider, recently recommended as the highest priority in the U.S. Department of Energy's 2015 Long Range Plan for Nuclear Science.

  3. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or densities read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in unpolarised deep inelastic scattering, and heavy-flavour contributions to these structure functions in the fixed flavour number scheme.
    Program summary
    Program title: QCDNUM, version 17.00
    Catalogue identifier: AEHV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public Licence
    No. of lines in distributed program, including test data, etc.: 45 736
    No. of bytes in distributed program, including test data, etc.: 911 569
    Distribution format: tar.gz
    Programming language: Fortran-77
    Computer: All
    Operating system: All
    RAM: Typically 3 Mbytes
    Classification: 11.5
    Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
    Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
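
    The grid-based evolution strategy can be illustrated with a deliberately minimal sketch: a LO non-singlet DGLAP solver that represents q(x) on a discrete x-grid, handles the plus-prescription by explicit subtraction, and Euler-steps in t = ln Q². The fixed coupling, the grid, the toy input density and the step counts are illustrative assumptions; QCDNUM itself uses spline interpolation, a running coupling and far more careful numerics.

```python
import numpy as np

CF = 4.0 / 3.0  # quark color factor

def dglap_rhs(x_grid, q, alpha_s):
    """LO non-singlet DGLAP right-hand side dq/d(ln Q^2) on an x-grid.

    Uses P_qq(z) = CF[(1+z^2)/(1-z)_+ + (3/2) delta(1-z)], with the
    plus-prescription written out as an explicit subtraction plus the
    analytic boundary term 2 ln(1-x).
    """
    rhs = np.zeros_like(q)
    for i, x in enumerate(x_grid):
        z = np.linspace(x, 1.0, 400, endpoint=False)[1:]  # avoid z = x, z = 1
        q_xz = np.interp(x / z, x_grid, q)                # q(x/z) by interpolation
        integrand = ((1 + z**2) * q_xz / z - 2.0 * q[i]) / (1.0 - z)
        integral = np.trapz(integrand, z)
        rhs[i] = alpha_s * CF / (2.0 * np.pi) * (
            integral + (2.0 * np.log(1.0 - x) + 1.5) * q[i])
    return rhs

def evolve(x_grid, q0, t0, t1, alpha_s=0.2, steps=200):
    """Euler-step the evolution in t = ln Q^2 from t0 to t1 (fixed coupling)."""
    q, dt = q0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        q += dt * dglap_rhs(x_grid, q, alpha_s)
    return q

x = np.linspace(1e-3, 0.999, 200)
q0 = x**0.5 * (1.0 - x)**3            # toy valence-like input density
q1 = evolve(x, q0, t0=np.log(4.0), t1=np.log(100.0))
```

    As expected of DGLAP evolution, the density is depleted at large x as partons radiate down to smaller momentum fractions.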

  4. QCD Evolution 2017

    NASA Astrophysics Data System (ADS)

    2017-05-01

    The QCD Evolution 2017 workshop was held at Jefferson Lab, May 22-26, 2017. The workshop is a continuation of a series of workshops held during six consecutive years: in 2011, 2012, 2013 and 2015 at Jefferson Lab, in 2014 in Santa Fe, NM, and in 2016 at the National Institute for Subatomic Physics (Nikhef) in Amsterdam. With the rapid developments in our understanding of the evolution of parton distributions, including TMDs, GPDs, low-x physics and higher-twist correlation functions, and the associated progress in perturbative QCD, lattice QCD and effective field theory techniques, we look forward to yet another exciting meeting in 2017. The program of QCD Evolution 2017 will pay special attention to topics of importance for ongoing experiments, in the full range from Jefferson Lab energies to RHIC and LHC energies, and for future experiments such as an Electron-Ion Collider, recently recommended as the highest priority in the U.S. Department of Energy's 2015 Long Range Plan for Nuclear Science.

  5. Resumming double logarithms in the QCD evolution of color dipoles

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-05-01

    The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light-cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with a modified kernel and initial conditions which resum double collinear logs to all orders. We extend this resummation to the next-to-leading-order BFKL and BK equations. The first numerical studies of the collinearly improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.

  6. Archeology and evolution of QCD

    NASA Astrophysics Data System (ADS)

    De Rújula, A.

    2017-03-01

    These are excerpts from the closing talk at the "XIIth Conference on Quark Confinement and the Hadron Spectrum", which took place last summer in Thessaloniki, an excellent place to enjoy an interest in archeology. A more complete personal view of the early days of QCD and the rest of the Standard Model is given in [1]. Here I discuss a few of the points which, to my judgment, well illustrate the QCD evolution (in time), both from a scientific and a sociological point of view.

  7. Wilson loops and QCD/string scattering amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makeenko, Yuri; Olesen, Poul (Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen Ø)

    2009-07-15

    We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large-N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills case.

  8. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel-based methods. This article presents a novel technique to learn the kernel parameters of a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique avoids some disadvantages of the traditional cross-validation method, such as its high time consumption, and can be applied to any type of data. Experiments on target detection in hyperspectral images verify the effectiveness of the proposed method.
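
    The idea can be sketched schematically with SciPy's differential evolution optimizer tuning a single RBF kernel width; the synthetic two-class data and the simple separation score below are illustrative stand-ins for the KFKT discrimination objective used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
# Synthetic two-class data (stand-in for target/background samples).
A = rng.normal(0.0, 1.0, size=(40, 5))
B = rng.normal(1.5, 1.0, size=(40, 5))

def rbf(X, Y, gamma):
    """RBF (Gaussian) kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * cdist(X, Y, "sqeuclidean"))

def objective(params):
    """Negative class-separation score (to be minimized): mean between-class
    kernel similarity minus mean within-class similarity."""
    gamma = params[0]
    within = 0.5 * (rbf(A, A, gamma).mean() + rbf(B, B, gamma).mean())
    between = rbf(A, B, gamma).mean()
    return between - within

# Differential evolution searches the kernel-parameter space globally,
# avoiding the grid scan of conventional cross-validation.
result = differential_evolution(objective, bounds=[(1e-3, 10.0)], seed=1)
best_gamma = result.x[0]
```

    At the optimum, the within-class similarity exceeds the between-class one, i.e. the kernel width discriminates the two classes.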

  9. QCD evolution of the Sivers function

    NASA Astrophysics Data System (ADS)

    Aybat, S. M.; Collins, J. C.; Qiu, J. W.; Rogers, T. C.

    2012-02-01

    We extend the Collins-Soper-Sterman (CSS) formalism to apply it to the spin dependence governed by the Sivers function, and use it to give a correct numerical QCD evolution of existing fixed-scale fits of the Sivers function. With the aid of approximations useful for the nonperturbative region, we present the results as parametrizations of a Gaussian form in transverse-momentum space, rather than in the Fourier-conjugate transverse coordinate space normally used in the CSS formalism; they are specifically valid at small transverse momentum. Since evolution has been applied, our results can be used to make predictions for Drell-Yan and semi-inclusive deep inelastic scattering at energies different from those where the original fits were made. Our evolved functions are of a form that can be used in the same parton-model factorization formulas as the original fits, but now with a predicted scale dependence in the fit parameters. We also present a method by which our evolved functions can be corrected to allow for twist-3 contributions at large parton transverse momentum.
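
    The Gaussian transverse-momentum form with a scale-dependent width can be sketched as follows; the logarithmic growth of the width and its coefficient are illustrative stand-ins for the actual CSS-evolved fit parameters, not values obtained in this paper.

```python
import numpy as np

def gaussian_tmd(kT, width2):
    """Normalized Gaussian transverse-momentum profile:
    f(kT) = exp(-kT^2 / <kT^2>) / (pi <kT^2>), so that
    integral of f over d^2 kT equals 1."""
    return np.exp(-kT**2 / width2) / (np.pi * width2)

def evolved_width2(Q, width2_0=0.25, Q0=1.0, c=0.4):
    """Toy scale dependence: the Gaussian width <kT^2> grows
    logarithmically with Q (c is an illustrative, not fitted, number)."""
    return width2_0 + c * np.log(Q / Q0)

kT = np.linspace(0.0, 2.0, 200)          # kT in GeV
low = gaussian_tmd(kT, evolved_width2(Q=2.0))
high = gaussian_tmd(kT, evolved_width2(Q=20.0))
```

    Evolving to larger Q broadens the distribution and lowers its peak, which is the qualitative effect the evolved parametrizations encode through the predicted scale dependence of the fit parameters.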

  10. Decoupling the NLO-coupled QED⊗QCD, DGLAP evolution equations, using Laplace transform method

    NASA Astrophysics Data System (ADS)

    Mottaghizadeh, Marzieh; Eslami, Parvin; Taghavi-Shahri, Fatemeh

    2017-05-01

    We analytically solve the QED⊗QCD-coupled DGLAP evolution equations at leading-order (LO) quantum electrodynamics (QED) and next-to-leading-order (NLO) quantum chromodynamics (QCD) accuracy using the Laplace transform method, and then compute the proton structure function in terms of the unpolarized parton distribution functions. Our analytical solutions for the parton densities are in good agreement with the CT14QED global parametrizations (1.295² < Q² < 10¹⁰) (Ref. 6) and with APFEL (A PDF Evolution Library) (2 < Q² < 10⁸) (Ref. 4). We also compared the proton structure function, F₂ᵖ(x,Q²), with the experimental data released by the ZEUS and H1 collaborations at HERA, finding good agreement over the full range of low and high x and Q².
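
    The essence of the Laplace method can be shown on a toy model: in the variable v = ln(1/x) the Mellin convolution of a DGLAP-type equation becomes a Laplace convolution, which the transform turns into an ordinary product, so the evolution is solved algebraically in s-space. The kernel e^{-v} and initial condition e^{-2v} below are illustrative choices, not the actual QED⊗QCD splitting functions.

```python
import sympy as sp

v, s, t = sp.symbols('v s t', positive=True)

# Toy kernel and initial distribution in v = ln(1/x).
K = sp.exp(-v)
F0 = sp.exp(-2 * v)

# The Laplace transform maps the convolution (K * F)(v) to k(s) * Fhat(s),
# so dF/dt = K * F becomes dFhat/dt = k(s) * Fhat, with the algebraic
# solution Fhat(s, t) = Fhat(s, 0) * exp(k(s) * t).
k_s = sp.laplace_transform(K, v, s)[0]     # -> 1/(s + 1)
F0_hat = sp.laplace_transform(F0, v, s)[0]  # -> 1/(s + 2)
F_hat = F0_hat * sp.exp(k_s * t)
```

    Inverting the transform (numerically, in practice) then returns the evolved density in v; the point of the method is that the t-dependence is obtained in closed form.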

  11. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are basic functions that contain information about the quark momentum; they are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points, while straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve; afterwards, only one or two iterations are needed to get precise results.
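
    The slow endpoint convergence that motivates the method is easy to exhibit numerically: expanding a flat DA φ(x) = 1 in the eigenfunctions x(1-x)C_n^{3/2}(2x-1) gives partial sums that vanish identically at the endpoints no matter how many terms are kept. The sketch below computes the normalizations by quadrature and uses the fact that the integral of C_n^{3/2}(2x-1) over [0,1] equals 1 for even n (and 0 for odd n).

```python
import numpy as np
from scipy.special import eval_gegenbauer
from scipy.integrate import quad

def partial_sum(x, N):
    """Truncated Gegenbauer expansion of the flat DA phi(x) = 1 in the
    ERBL eigenbasis psi_n(x) = x(1-x) C_n^{3/2}(2x-1)."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for n in range(0, N + 1, 2):             # odd coefficients vanish
        # normalization <psi_n, psi_n> with weight 1/(x(1-x))
        h_n, _ = quad(lambda u: u * (1 - u) * eval_gegenbauer(n, 1.5, 2*u - 1)**2,
                      0.0, 1.0)
        # projection <1, psi_n> with the same weight = int_0^1 C_n dx = 1
        c_n = 1.0 / h_n
        total += c_n * x * (1 - x) * eval_gegenbauer(n, 1.5, 2*x - 1)
    return total

x = np.array([0.0, 0.25, 0.5])
s20 = partial_sum(x, 20)   # 11 even terms, n = 0, 2, ..., 20
```

    Even with 11 terms, the partial sum still oscillates visibly around 1 in the interior and is exactly 0 at x = 0, illustrating why the abstract's resummed treatment of the endpoint singularities is preferable.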

  12. Transverse momentum dependent parton distribution and fragmentation functions with QCD evolution

    NASA Astrophysics Data System (ADS)

    Aybat, S. Mert; Rogers, Ted C.

    2011-06-01

    We assess the current phenomenological status of transverse momentum dependent (TMD) parton distribution functions (PDFs) and fragmentation functions (FFs) and study the effect of consistently including perturbative QCD (pQCD) evolution. Our goal is to initiate the process of establishing reliable, QCD-evolved parametrizations for the TMD PDFs and TMD FFs that can be used both to test TMD factorization and to search for evidence of the breakdown of TMD factorization that is expected for certain processes. In this article, we focus on spin-independent processes because they provide the simplest illustration of the basic steps and can already be used in direct tests of TMD factorization. Our calculations are based on the Collins-Soper-Sterman (CSS) formalism, supplemented by recent theoretical developments which have clarified the precise definitions of the TMD PDFs and TMD FFs needed for a valid TMD-factorization theorem. Starting with these definitions, we numerically generate evolved TMD PDFs and TMD FFs using as input existing parametrizations for the collinear PDFs, collinear FFs, nonperturbative factors in the CSS factorization formalism, and recent fixed-scale fits. We confirm that evolution has important consequences, both qualitatively and quantitatively, and argue that it should be included in future phenomenological studies of TMD functions. Our analysis is also suggestive of extensions to processes that involve spin-dependent functions such as the Boer-Mulders, Sivers, or Collins functions, which we intend to pursue in future publications. At our website [http://projects.hepforge.org/tmd/], we have made available the tables and calculations needed to obtain the TMD parametrizations presented herein.

  13. Exclusive QCD processes, quark-hadron duality, and the transition to perturbative QCD

    NASA Astrophysics Data System (ADS)

    Corianò, Claudio; Li, Hsiang-nan; Savkli, Cetin

    1998-07-01

    Experiments at CEBAF will scan the intermediate-energy region of QCD dynamics for the nucleon form factors and for Compton scattering. These experiments will definitively clarify the role of resummed perturbation theory and of quark-hadron duality (QCD sum rules) in this regime. With this perspective in mind, we review the factorization theorem of perturbative QCD for exclusive processes at intermediate energy scales, which embodies the transverse degrees of freedom of a parton and the Sudakov resummation of the corresponding large logarithms. We concentrate on the pion and proton electromagnetic form factors and on pion Compton scattering. New ingredients, such as the evolution of the pion wave function and the complete two-loop expression of the Sudakov factor, are included. The sensitivity of our predictions to the infrared cutoff for the Sudakov evolution is discussed. We also elaborate on QCD sum rule methods for Compton scattering, which provide an alternative description of this process. We show that, by comparing the local duality analysis to resummed perturbation theory, it is possible to describe the transition of exclusive processes to perturbative QCD.

  14. Linear vs non-linear QCD evolution in the neutrino-nucleon cross section

    NASA Astrophysics Data System (ADS)

    Albacete, Javier L.; Illana, José I.; Soto-Ontoso, Alba

    2016-03-01

    Evidence for an extraterrestrial flux of ultra-high-energy neutrinos, of the order of a PeV, has opened a new era in neutrino astronomy. An essential ingredient for the determination of neutrino fluxes from the number of observed events is precise knowledge of the neutrino-nucleon cross section. In this work, based on [1], we present a quantitative study of σνN in the neutrino energy range 10⁴ < Eν < 10¹⁴ GeV within two complementary QCD approaches: NLO DGLAP evolution using different sets of PDFs, and BK small-x evolution with running coupling and kinematical corrections. We then translate this theoretical uncertainty into upper bounds on the ultra-high-energy neutrino flux for different experiments.

  15. Real-time evolution of non-Gaussian cumulants in the QCD critical regime

    NASA Astrophysics Data System (ADS)

    Mukherjee, Swagato; Venugopalan, Raju; Yin, Yi

    2015-09-01

    We derive a coupled set of equations that describe the nonequilibrium evolution of cumulants of critical fluctuations for spacetime trajectories on the crossover side of the QCD phase diagram. In particular, novel expressions are obtained for the nonequilibrium evolution of non-Gaussian skewness and kurtosis cumulants. By utilizing a simple model of the spacetime evolution of a heavy-ion collision, we demonstrate that, depending on the relaxation rate of critical fluctuations, skewness and kurtosis can differ significantly in magnitude as well as in sign from equilibrium expectations. Memory effects are important and shown to persist even for trajectories that skirt the edge of the critical regime. We use phenomenologically motivated parametrizations of freeze-out curves and of the beam-energy dependence of the net baryon chemical potential to explore the implications of our model study for the critical-point search in heavy-ion collisions.
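
    The memory effects described here can be captured qualitatively by a single relaxation-time sketch, in which a cumulant lags behind its equilibrium value along the trajectory; the Gaussian equilibrium profile, the relaxation times and the Euler stepping are illustrative assumptions, not the paper's coupled cumulant equations.

```python
import numpy as np

def evolve_cumulant(tau, kappa_eq, tau_rel, kappa0):
    """Relaxation-type evolution d(kappa)/d(tau) = -(kappa - kappa_eq)/tau_rel,
    integrated with Euler steps; the lag behind kappa_eq encodes memory."""
    kappa = np.empty_like(tau)
    kappa[0] = kappa0
    for i in range(1, len(tau)):
        dt = tau[i] - tau[i - 1]
        kappa[i] = kappa[i - 1] - dt * (kappa[i - 1] - kappa_eq[i - 1]) / tau_rel
    return kappa

tau = np.linspace(0.0, 10.0, 1000)
# Toy equilibrium cumulant along a trajectory passing near the critical region.
kappa_eq = 1.0 + 4.0 * np.exp(-(tau - 5.0)**2)
fast = evolve_cumulant(tau, kappa_eq, tau_rel=0.1, kappa0=1.0)
slow = evolve_cumulant(tau, kappa_eq, tau_rel=2.0, kappa0=1.0)
```

    The slowly relaxing cumulant peaks later and lower than its equilibrium curve, the sketch's analogue of the statement that observed non-Gaussian cumulants can differ significantly from equilibrium expectations.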

  16. Limits on transverse momentum dependent evolution from semi-inclusive deep inelastic scattering at moderate Q

    NASA Astrophysics Data System (ADS)

    Aidala, C. A.; Field, B.; Gamberg, L. P.; Rogers, T. C.

    2014-05-01

    In the QCD evolution of transverse momentum dependent parton distribution and fragmentation functions, the Collins-Soper evolution kernel includes both a perturbative short-distance contribution and a large-distance nonperturbative, but strongly universal, contribution. In the past, global fits, based mainly on larger Q Drell-Yan-like processes, have found substantial contributions from nonperturbative regions in the Collins-Soper evolution kernel. In this article, we investigate semi-inclusive deep inelastic scattering measurements in the region of relatively small Q, of the order of a few GeV, where sensitivity to nonperturbative transverse momentum dependence may become more important or even dominate the evolution. Using recently available deep inelastic scattering data from the COMPASS experiment, we provide estimates of the regions of coordinate space that dominate in transverse momentum dependent (TMD) processes when the hard scale is of the order of only a few GeV. We find that distance scales that are much larger than those commonly probed in large Q measurements become important, suggesting that the details of nonperturbative effects in TMD evolution are especially significant in the region of intermediate Q. We highlight the strongly universal nature of the nonperturbative component of evolution and its potential to be tightly constrained by fits from a wide variety of observables that include both large and moderate Q. On this basis, we recommend detailed treatments of the nonperturbative component of the Collins-Soper evolution kernel for future TMD studies.
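
    The interplay between the perturbative short-distance part and the nonperturbative large-b part is conventionally organized through the b* prescription, which can be sketched in a few lines; the value of b_max below is an illustrative choice, not a fitted one.

```python
import numpy as np

def b_star(b, b_max=1.5):
    """CSS b* prescription: b* tracks b at small b but saturates at b_max
    (distances in GeV^-1), so the perturbative part of the Collins-Soper
    kernel is only ever evaluated at short distances."""
    b = np.asarray(b, dtype=float)
    return b / np.sqrt(1.0 + (b / b_max)**2)

b = np.linspace(0.01, 10.0, 500)
bs = b_star(b)
```

    Since b* never exceeds b_max, everything beyond that distance must be described by the nonperturbative, strongly universal component of the kernel; at moderate Q, where large b dominates the transforms, that component controls the evolution, as the abstract argues.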

  17. Real time evolution of non-Gaussian cumulants in the QCD critical regime

    DOE PAGES

    Mukherjee, Swagato; Venugopalan, Raju; Yin, Yi

    2015-09-23

    In this study, we derive a coupled set of equations that describe the nonequilibrium evolution of cumulants of critical fluctuations for spacetime trajectories on the crossover side of the QCD phase diagram. In particular, novel expressions are obtained for the nonequilibrium evolution of non-Gaussian skewness and kurtosis cumulants. By utilizing a simple model of the spacetime evolution of a heavy-ion collision, we demonstrate that, depending on the relaxation rate of critical fluctuations, skewness and kurtosis can differ significantly in magnitude as well as in sign from equilibrium expectations. Memory effects are important and shown to persist even for trajectories that skirt the edge of the critical regime. We use phenomenologically motivated parametrizations of freeze-out curves and of the beam-energy dependence of the net baryon chemical potential to explore the implications of our model study for the critical-point search in heavy-ion collisions.

  18. Markovian Monte Carlo program EvolFMC v.2 for solving QCD evolution equations

    NASA Astrophysics Data System (ADS)

    Jadach, S.; Płaczek, W.; Skrzypek, M.; Stokłosa, P.

    2010-02-01

    We present the program EvolFMC v.2, which solves the QCD evolution equations for the parton momentum distributions by means of a Monte Carlo technique based on a Markovian process. The program solves DGLAP-type evolution as well as modified-DGLAP variants; in both cases the evolution can be performed in the LO or NLO approximation. The quarks are treated as massless. The overall technical precision of the code has been established at 5×10. In this way, for the first time, we demonstrate that the Monte Carlo method can solve the evolution equations with precision comparable to other numerical methods.
    New version program summary
    Program title: EvolFMC v.2
    Catalogue identifier: AEFN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including binary test data, etc.: 66 456 (7407 lines of C++ code)
    No. of bytes in distributed program, including test data, etc.: 412 752
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC, Mac
    Operating system: Linux, Mac OS X
    RAM: Less than 256 MB
    Classification: 11.5
    External routines: ROOT (http://root.cern.ch/drupal/)
    Nature of problem: Solution of the QCD evolution equations for the parton momentum distributions, of the DGLAP and modified-DGLAP type, in the LO and NLO approximations.
    Solution method: Monte Carlo simulation of the Markovian process of multiple emission of partons.
    Restrictions: Limited to the case of massless partons; implemented in the LO and NLO approximations only; weighted events only.
    Unusual features: Modified-DGLAP evolutions included up to the NLO level.
    Additional comments: Technical precision established at 5×10.
    Running time: for 10⁶ events at 100 GeV: DGLAP NLO, 27 s; C-type modified DGLAP NLO, 150 s (MacBook Pro with Mac OS X v.10
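
    The Markovian technique itself can be illustrated with a toy shower: emission points in t = ln Q² are generated from a Sudakov-type no-emission probability, and each branching multiplies the parton's momentum fraction by a sampled z. The constant branching rate and the uniform z below are illustrative stand-ins for the actual LO/NLO splitting kernels.

```python
import numpy as np

rng = np.random.default_rng(42)

def markov_evolve(t0, t1, lam=1.0):
    """Toy Markovian evolution: the gap to the next emission in t = ln Q^2
    is drawn from the constant-rate Sudakov factor exp(-lam * dt); at each
    emission the parton keeps a momentum fraction z (here sampled uniformly
    as a stand-in for sampling the splitting function)."""
    t, x, n = t0, 1.0, 0
    while True:
        t += rng.exponential(1.0 / lam)   # next emission point from the Sudakov
        if t > t1:
            return x, n                   # evolution interval exhausted
        x *= rng.uniform(0.5, 1.0)        # surviving momentum fraction
        n += 1

results = [markov_evolve(0.0, 5.0) for _ in range(2000)]
mean_emissions = np.mean([n for _, n in results])
```

    With a constant rate lam over an interval of length 5, the emission multiplicity is Poisson-distributed with mean 5, which the sample average reproduces; the real program replaces the constant rate and uniform z by the evolution kernels and keeps event weights.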

  19. Two-baryon systems from HAL QCD method and the mirage in the temporal correlation of the direct method

    NASA Astrophysics Data System (ADS)

    Iritani, Takumi

    2018-03-01

    Both the direct method and the HAL QCD method are currently used to study hadron interactions in lattice QCD. In the direct method, the eigen-energy of the two-particle system is measured from the temporal correlation. Due to the contamination of excited states, however, the direct method suffers from a fake eigen-energy problem, which we call the "mirage problem", while the HAL QCD method can extract information from all elastic states by using the spatial correlation. In this work, we further investigate systematic uncertainties of the HAL QCD method, such as the quark source operator dependence, the convergence of the derivative expansion of the non-local interaction kernel, and the single-baryon saturation, which are found to be well controlled. We also confirm the consistency between the HAL QCD method and Lüscher's finite-volume formula. Based on the HAL QCD potential, we quantitatively confirm that the mirage plateau in the direct method is indeed caused by the contamination of excited states.

  20. The Boer-Mulders Transverse Momentum Distribution in the Pion and its Evolution in Lattice QCD

    NASA Astrophysics Data System (ADS)

    Engelhardt, M.; Musch, B.; Hägler, P.; Schäfer, A.; Negele, J.

    2015-02-01

    Starting from a definition of transverse momentum-dependent parton distributions (TMDs) in terms of hadronic matrix elements of a quark bilocal operator containing a staple-shaped gauge link, selected TMD observables can be evaluated within Lattice QCD. A TMD ratio describing the Boer-Mulders effect in the pion is investigated, with a particular emphasis on its evolution as a function of a Collins-Soper-type parameter which quantifies the proximity of the staple-shaped gauge links to the light cone.

  1. Exclusive, hard diffraction in QCD

    NASA Astrophysics Data System (ADS)

    Freund, Andreas

    In the first chapter we give an introduction to hard diffractive scattering in QCD to introduce basic concepts and terminology, thus setting the stage for the following chapters. In the second chapter we make predictions for nondiagonal parton distributions in a proton in the LLA. We calculate the DGLAP-type evolution kernels in the LLA, solve the nondiagonal GLAP evolution equations with a modified version of the CTEQ package and comment on the range of applicability of the LLA in the asymmetric regime. We show that the nondiagonal gluon distribution g(x₁,x₂,t,μ²) can be well approximated at small x by the conventional gluon density xG(x,μ²). In the third chapter, we discuss the algorithms used in the LO evolution program for nondiagonal parton distributions in the DGLAP region and discuss the stability of the code. Furthermore, we demonstrate that we can reproduce the case of the LO diagonal evolution within less than 0.5% of the original code as developed by the CTEQ collaboration. In chapter 4, we show that factorization holds for the deeply virtual Compton scattering amplitude in QCD, up to power-suppressed terms, to all orders in perturbation theory. Furthermore, we show that the virtuality of the produced photon does not influence the general theorem. In chapter 5, we demonstrate that perturbative QCD allows one to calculate the absolute cross section of diffractive exclusive production of photons at large Q² at HERA, while the aligned jet model allows one to estimate the cross section for intermediate Q² ~ 2 GeV². Furthermore, we find that the imaginary part of the amplitude for the production of real photons is larger than the imaginary part of the corresponding DIS amplitude, leading to predictions of a significant counting rate for the current generation of experiments at HERA. We also find a large azimuthal angle asymmetry in ep scattering for HERA kinematics which allows one to directly measure the real part of the DVCS amplitude and hence the

  2. Transverse momentum dependent evolution: Matching semi-inclusive deep inelastic scattering processes to Drell-Yan and W/Z boson production

    NASA Astrophysics Data System (ADS)

    Sun, Peng; Yuan, Feng

    2013-12-01

    We examine the QCD evolution of transverse momentum dependent observables in hard processes of semi-inclusive hadron production in deep inelastic scattering and Drell-Yan lepton pair production in pp collisions, including the spin-averaged cross sections and the Sivers single transverse spin asymmetries. We show that the evolution equations derived by a direct integral of the Collins-Soper-Sterman evolution kernel from low to high Q describe well the transverse momentum distributions of the unpolarized cross sections in the Q² range from 2 to 100 GeV². In addition, a matching is established between our evolution and the Collins-Soper-Sterman resummation with the b* prescription and the Konychev-Nadolsky parametrization of the nonperturbative form factors, which are formulated to describe Drell-Yan lepton pair and W/Z boson production in hadronic collisions. With these results, we present predictions for the Sivers single transverse spin asymmetries in Drell-Yan lepton pair production and W± boson production in polarized pp and π⁻p collisions for several proposed experiments. We emphasize that these experiments will not only provide a crucial test of the sign change of the Sivers asymmetry but also important opportunities to study QCD evolution effects.

  3. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction, we introduce a class of kernels for structured data that leverages a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained by estimating the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in predicting the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are their adaptivity to the input domain and their ability to deal with structured data interpreted through a graphical model representation.
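
    Of the three kernels, the probability product kernel is the easiest to sketch. The version below fits an independent smoothed Bernoulli model to each binary phylogenetic profile and compares the models; the flat (non-hierarchical) model and the smoothing constant are simplifications relative to the paper's tree-structured setting.

```python
import numpy as np

def probability_product_kernel(x, y, rho=0.5, a=0.5):
    """Probability product kernel between smoothed Bernoulli models of two
    binary phylogenetic profiles (gene present/absent across genomes):
    k(p, q) = prod_i [ p_i^rho q_i^rho + (1-p_i)^rho (1-q_i)^rho ].
    With rho = 0.5 this is the Bhattacharyya kernel."""
    p = (np.asarray(x, dtype=float) + a) / (1.0 + 2.0 * a)  # add-a smoothing
    q = (np.asarray(y, dtype=float) + a) / (1.0 + 2.0 * a)
    return float(np.prod(p**rho * q**rho + (1 - p)**rho * (1 - q)**rho))

prof_a = np.array([1, 1, 0, 1, 0])   # toy presence/absence profiles
prof_b = np.array([1, 1, 0, 0, 0])
k_ab = probability_product_kernel(prof_a, prof_b)
k_aa = probability_product_kernel(prof_a, prof_a)
```

    Identical profiles yield k = 1 while each disagreeing position discounts the similarity, so the kernel can be fed directly to an SVM; the paper's adaptive versions additionally propagate the profile through a tree-structured model before comparison.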

  4. Anisotropic hydrodynamics with a scalar collisional kernel

    NASA Astrophysics Data System (ADS)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion, and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  5. Nuclear reactions from lattice QCD

    DOE PAGES

    Briceño, Raúl A.; Davoudi, Zohreh; Luu, Thomas C.

    2015-01-13

    One of the overarching goals of nuclear physics is to rigorously compute properties of hadronic systems directly from the fundamental theory of strong interactions, Quantum Chromodynamics (QCD). In particular, the hope is to perform reliable calculations of nuclear reactions which will impact our understanding of environments that occur during big bang nucleosynthesis, the evolution of stars and supernovae, and within nuclear reactors and high energy/density facilities. Such calculations, being truly ab initio, would include all two-nucleon and three-nucleon (and higher) interactions in a consistent manner. Currently, lattice QCD provides the only reliable option for performing calculations of some of the low-energy hadronic observables. With the aim of bridging the gap between lattice QCD and nuclear many-body physics, the Institute for Nuclear Theory held a workshop on Nuclear Reactions from Lattice QCD in March 2013. In this review article, we report on the topics discussed in this workshop and the path planned to move forward in the upcoming years.

  6. Heavy quarkonium production at collider energies: Factorization and evolution

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo; Ma, Yan-Qing; Qiu, Jian-Wei; Sterman, George

    2014-08-01

    We present a perturbative QCD factorization formalism for inclusive production of heavy quarkonia at large transverse momentum pT at collider energies, including both leading power (LP) and next-to-leading power (NLP) behavior in pT. We demonstrate that both LP and NLP contributions can be factorized in terms of perturbatively calculable short-distance partonic coefficient functions and universal nonperturbative fragmentation functions, and derive the evolution equations that are implied by the factorization. We identify projection operators for all channels of the factorized LP and NLP infrared-safe short-distance partonic hard parts, and corresponding operator definitions of fragmentation functions. At NLP, we focus on the contributions involving the production of a heavy quark pair, a necessary condition for producing a heavy quarkonium. We evaluate the first nontrivial order of evolution kernels for all relevant fragmentation functions, and discuss the role of NLP contributions.

  7. Identifying QCD Transition Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhou, Kai; Pang, Long-gang; Su, Nan; Petersen, Hannah; Stoecker, Horst; Wang, Xin-Nian

    2018-02-01

    In this proceeding we review our recent work using supervised learning with a deep convolutional neural network (CNN) to identify the QCD equation of state (EoS) employed in hydrodynamic modeling of heavy-ion collisions given only final-state particle spectra ρ(pT, V). We showed that the dynamical information from the phase structure (EoS) leaves a traceable imprint that survives the evolution and persists in the final snapshot, enabling the trained CNN to act as an effective "EoS-meter" for detecting the nature of the QCD transition.

  8. Light meson gas in the QCD vacuum and oscillating universe

    NASA Astrophysics Data System (ADS)

    Prokhorov, George; Pasechnik, Roman

    2018-01-01

    We have developed a phenomenological effective quantum-field theoretical model describing the "hadron gas" of the lightest pseudoscalar mesons, the scalar σ-meson and the σ-vacuum, i.e. the expectation value of the σ-field, at finite temperatures. The corresponding thermodynamic approach was formulated in terms of the generating functional derived from the effective Lagrangian providing the basic thermodynamic information about the "meson plasma + QCD condensate" system. This formalism enables us to study the QCD transition from the hadron phase with direct implications for cosmological evolution. Using the hypothesis of a positive-definite QCD vacuum contribution stochastically produced in the early universe, we show that the universe could undergo a series of oscillations during the QCD epoch before resuming unbounded expansion.

  9. Relating quark confinement and chiral symmetry breaking in QCD

    NASA Astrophysics Data System (ADS)

    Suganuma, Hideo; Doi, Takahiro M.; Redlich, Krzysztof; Sasaki, Chihiro

    2017-12-01

    We study the relation between quark confinement and chiral symmetry breaking in QCD. Using the lattice QCD formalism, we analytically express the various 'confinement indicators', such as the Polyakov loop, its fluctuations, the Wilson loop, the inter-quark potential and the string tension, in terms of the Dirac eigenmodes. In the Dirac spectral representation, there appears a power of the Dirac eigenvalue λ_n, namely λ_n^{Nt-1}, which behaves as a reduction factor for small λ_n. Consequently, since this reduction factor cannot be cancelled, the low-lying Dirac eigenmodes give a negligibly small contribution to the confinement quantities, while they are essential for chiral symmetry breaking. These relations indicate that there is no direct one-to-one correspondence between confinement and chiral symmetry breaking in QCD. In other words, quark confinement is to some extent independent of chiral symmetry breaking, which can generally lead to different transition temperatures/densities for deconfinement and chiral restoration. We also investigate the Polyakov loop in terms of the eigenmodes of the Wilson, the clover and the domain-wall fermion kernels, and find similar results. The independence of quark confinement from chiral symmetry breaking seems natural, because confinement is realized independently of quark masses and heavy quarks are confined even without chiral symmetry.

  10. Instanton liquid properties from lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenodorou, A.; Boucaud, Philippe; De Soto, F.

    Here, we examined the instanton contribution to the QCD configurations generated from lattice QCD for N_F = 0, N_F = 2 + 1 and N_F = 2 + 1 + 1 dynamical quark flavors using two different and complementary approaches. First, via the Gradient flow, we computed instanton liquid properties using an algorithm to localize instantons in the gauge field configurations and studied their evolution with flow time. Second, the analysis of the running of gluon Green's functions at low momenta serves as an independent confirmation of the instanton density, which can also be derived without the use of the Gradient flow.

  11. Instanton liquid properties from lattice QCD

    DOE PAGES

    Athenodorou, A.; Boucaud, Philippe; De Soto, F.; ...

    2018-02-22

    Here, we examined the instanton contribution to the QCD configurations generated from lattice QCD for N_F = 0, N_F = 2 + 1 and N_F = 2 + 1 + 1 dynamical quark flavors using two different and complementary approaches. First, via the Gradient flow, we computed instanton liquid properties using an algorithm to localize instantons in the gauge field configurations and studied their evolution with flow time. Second, the analysis of the running of gluon Green's functions at low momenta serves as an independent confirmation of the instanton density, which can also be derived without the use of the Gradient flow.

  12. QCD for Postgraduates (3/5)

    ScienceCinema

    Zanderighi, Giulia

    2018-04-27

    Modern QCD - Lecture 3 We will introduce processes with initial-state hadrons and discuss parton distributions, sum rules, as well as the need for a factorization scale once radiative corrections are taken into account. We will then discuss the DGLAP equation, the evolution of parton densities, as well as ways in which parton densities are extracted from data.
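    The DGLAP evolution of parton densities described in this lecture can be sketched numerically. Below is a minimal Euler step for the leading-order non-singlet equation, with the plus-prescription in the quark splitting function written out as an explicit subtraction; the grid, interpolation, and step size are illustrative choices, not part of any standard evolution code.

    ```python
    import numpy as np

    def dglap_nonsinglet_step(x, f, alpha_s=0.3, dlnQ2=0.5):
        """One Euler step of LO non-singlet DGLAP evolution,
          df(x)/dlnQ2 = (alpha_s / 2pi) C_F [ int_x^1 dz (1+z^2)/(1-z)
                         ( f(x/z)/z - f(x) )  +  (3/2 + 2 ln(1-x)) f(x) ],
        where the subtraction implements the plus-prescription.
        x: increasing grid in (0, 1); f: PDF values on that grid."""
        CF = 4.0 / 3.0
        df = np.zeros_like(f)
        for i, xi in enumerate(x[:-1]):           # endpoint x ~ 1 left untouched
            z = np.linspace(xi, 1.0, 400)[:-1]    # open at z = 1
            fz = np.interp(xi / z, x, f)          # linear interpolation of f
            g = (1.0 + z ** 2) / (1.0 - z) * (fz / z - f[i])
            integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z))  # trapezoid
            df[i] = integral + (1.5 + 2.0 * np.log(1.0 - xi)) * f[i]
        return f + (alpha_s / (2.0 * np.pi)) * CF * dlnQ2 * df
    ```

    One step upward in ln Q² depletes the distribution at large x and feeds it at small x, which is the qualitative behavior of the evolved parton densities discussed in the lecture.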

  13. Understanding the large-distance behavior of transverse-momentum-dependent parton densities and the Collins-Soper evolution kernel

    NASA Astrophysics Data System (ADS)

    Collins, John; Rogers, Ted

    2015-04-01

    There is considerable controversy about the size and importance of nonperturbative contributions to the evolution of transverse-momentum-dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that, when taken to lower Q, is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that nonperturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and nonperturbative. We make a motivated proposal for the parametrization of the nonperturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical nonperturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.
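    The contrast between parametrizations of the nonperturbative evolution kernel can be made concrete with a toy comparison: a quadratic (Gaussian-type) g_K that grows without bound versus one that saturates at large bT. Everything below, including the functional forms, the constants, and the omission of the perturbative part of the kernel, is an illustrative assumption and not the authors' A(bT) construction.

    ```python
    import numpy as np

    def bstar(bT, bmax=1.5):
        # Standard b* regulator: bstar -> bT at small bT, -> bmax at large bT.
        return bT / np.sqrt(1.0 + (bT / bmax) ** 2)

    def gK_quadratic(bT, g2=0.2):
        # Traditional Gaussian-type ansatz: grows without bound in bT.
        return 0.5 * g2 * bT ** 2

    def gK_saturating(bT, g0=0.3, b0=1.0):
        # Saturating ansatz: approaches a constant at large bT, in the spirit
        # of general large-distance constraints on correlation functions.
        return g0 * (1.0 - np.exp(-bT ** 2 / b0 ** 2))

    def tmd_evolution_factor(bT, Q, Q0, gK):
        # Toy CSS-like evolution factor exp(-gK(bT) ln(Q/Q0)); the
        # perturbatively calculable part of the kernel is omitted here.
        return np.exp(-gK(bT) * np.log(Q / Q0))
    ```

    At large bT the saturating form suppresses the cross section far less than the quadratic one, which is why the large-distance behavior of the kernel matters for fits spanning very different Q.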

  14. Understanding the large-distance behavior of transverse-momentum-dependent parton densities and the Collins-Soper evolution kernel

    DOE PAGES

    Collins, John; Rogers, Ted

    2015-04-01

    There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that, when taken to lower Q, is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.

  15. Understanding the large-distance behavior of transverse-momentum-dependent parton densities and the Collins-Soper evolution kernel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, John; Rogers, Ted

    There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that, when taken to lower Q, is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.

  16. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)

  17. On the interface between perturbative and nonperturbative QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deur, Alexandre; Brodsky, Stanley J.; de Teramond, Guy F.

    2016-04-04

    The QCD running coupling α_s(Q²) sets the strength of the interactions of quarks and gluons as a function of the momentum transfer Q. The Q² dependence of the coupling is required to describe hadronic interactions at both large and short distances. In this article we adopt the light-front holographic approach to strongly-coupled QCD, a formalism which incorporates confinement, predicts the spectroscopy of hadrons composed of light quarks, and describes the low-Q² analytic behavior of the strong coupling α_s(Q²). The high-Q² dependence of the coupling α_s(Q²) is specified by perturbative QCD and its renormalization group equation. The matching of the high and low Q² regimes of α_s(Q²) then determines the scale Q0 which sets the interface between perturbative and nonperturbative hadron dynamics. The value of Q0 can be used to set the factorization scale for DGLAP evolution of hadronic structure functions and the ERBL evolution of distribution amplitudes. We discuss the scheme dependence of the value of Q0 and the infrared fixed point of the QCD coupling. Our analysis is carried out for the MS-bar, g1, MOM and V renormalization schemes. Our results show that the discrepancies in the value of α_s at large distance seen in the literature can be explained by different choices of renormalization schemes. Lastly, we also provide the formulae to compute α_s(Q²) over the entire range of space-like momentum transfer for the different renormalization schemes discussed in this article.
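    The matching procedure can be sketched numerically: take the Gaussian form of the holographic coupling at low Q² and the one-loop perturbative form at high Q², and locate the scale where they cross. The parameter values (κ, Λ, nf), the one-loop truncation, and matching by value only (the article matches value and slope, scheme by scheme) are all illustrative assumptions.

    ```python
    import math

    def alpha_holo(Q2, alpha0=math.pi, kappa=0.5):
        # Light-front holographic form: alpha(Q^2) = alpha(0) exp(-Q^2 / 4 kappa^2).
        return alpha0 * math.exp(-Q2 / (4.0 * kappa ** 2))

    def alpha_pqcd(Q2, Lambda=0.34, nf=3):
        # One-loop running coupling; only meaningful well above Lambda.
        beta0 = 11.0 - 2.0 * nf / 3.0
        return 4.0 * math.pi / (beta0 * math.log(Q2 / Lambda ** 2))

    def match_Q0(lo=0.5, hi=3.0, tol=1e-9):
        """Bisect for the scale Q0 (GeV) where the two couplings agree;
        the bracket [lo, hi] is chosen so the holographic coupling is
        the larger one at lo and the smaller one at hi."""
        f = lambda Q: alpha_holo(Q * Q) - alpha_pqcd(Q * Q)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    With these toy inputs the crossing lands at Q0 of order 1 GeV, illustrating how a transition scale between nonperturbative and perturbative dynamics emerges from the matching.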

  18. Effective holographic models for QCD: Glueball spectrum and trace anomaly

    NASA Astrophysics Data System (ADS)

    Ballon-Bayona, Alfonso; Boschi-Filho, Henrique; Mamani, Luis A. H.; Miranda, Alex S.; Zanchin, Vilson T.

    2018-02-01

    We investigate effective holographic models for QCD arising from five-dimensional dilaton gravity. The models are characterized by a dilaton with a mass term in the UV, dual to a CFT deformation by a relevant operator, and quadratic in the IR. The UV constraint leads to the explicit breaking of conformal symmetry, whereas the IR constraint guarantees linear confinement. We propose semianalytic interpolations between the UV and the IR and obtain a spectrum for scalar and tensor glueballs consistent with lattice QCD data. We use the glueball spectrum as a physical constraint to find the evolution of the model parameters as the mass term goes to 0. Finally, we reproduce the universal result for the trace anomaly of deformed CFTs and propose a dictionary between this result and the QCD trace anomaly. A nontrivial consequence of this dictionary is the emergence of a β function similar to the two-loop perturbative QCD result.

  19. Continuous Advances in QCD 2008

    NASA Astrophysics Data System (ADS)

    Peloso, Marco M.

    2008-12-01

    1. High-order calculations in QCD and in general gauge theories. NLO evolution of color dipoles / I. Balitsky. Recent perturbative results on heavy quark decays / J. H. Piclum, M. Dowling, A. Pak. Leading and non-leading singularities in gauge theory hard scattering / G. Sterman. The space-cone gauge, Lorentz invariance and on-shell recursion for one-loop Yang-Mills amplitudes / D. Vaman, Y.-P. Yao -- 2. Heavy flavor physics. Exotic cc̄ mesons / E. Braaten. Search for new physics in B[symbol]-mixing / A. J. Lenz. Implications of D[symbol]-D[symbol] mixing for new physics / A. A. Petrov. Precise determinations of the charm quark mass / M. Steinhauser -- 3. Quark-gluon dynamics at high density and/or high temperature. Crystalline condensate in the chiral Gross-Neveu model / G. V. Dunne, G. Basar. The strong coupling constant at low and high energies / J. H. Kühn. Quarkyonic matter and the phase diagram of QCD / L. McLerran. Statistical QCD with non-positive measure / J. C. Osborn, K. Splittorff, J. J. M. Verbaarschot. From equilibrium to transport properties of strongly correlated fermi liquids / T. Schäfer. Lessons from random matrix theory for QCD at finite density / K. Splittorff, J. J. M. Verbaarschot -- 4. Methods and models of holographic correspondence. Soft-wall dynamics in AdS/QCD / B. Batell. Holographic QCD / N. Evans, E. Threlfall. QCD glueball sum rules and vacuum topology / H. Forkel. The pion form factor in AdS/QCD / H. J. Kwee, R. F. Lebed. The fast life of holographic mesons / R. C. Myers, A. Sinha. Properties of Baryons from D-branes and instantons / S. Sugimoto. The master space of N = 1 quiver gauge theories: counting BPS operators / A. Zaffaroni. Topological field configurations. Skyrmions in theories with massless adjoint quarks / R. Auzzi. Domain walls, localization and confinement: what binds strings inside walls / S. Bolognesi. Static interactions of non-abelian vortices / M. Eto. Vortices which do not abelianize dynamically: semi

  20. Phenomenological consequences of enhanced bulk viscosity near the QCD critical point

    DOE PAGES

    Monnai, Akihiko; Mukherjee, Swagato; Yin, Yi

    2017-03-06

    In the proximity of the QCD critical point the bulk viscosity of quark-gluon matter is expected to be proportional to nearly the third power of the critical correlation length, and to become significantly enhanced. This work is the first attempt to study the phenomenological consequences of enhanced bulk viscosity near the QCD critical point. For this purpose, we implement the expected critical behavior of the bulk viscosity within a non-boost-invariant, longitudinally expanding 1 + 1 dimensional causal relativistic hydrodynamical evolution at nonzero baryon density. We demonstrate that the critically enhanced bulk viscosity induces a substantial nonequilibrium pressure, effectively softening the equation of state, and leads to sizable effects in the flow velocity and single-particle distributions at freeze-out. The observable effects that may arise due to the enhanced bulk viscosity in the vicinity of the QCD critical point can be used as complementary information to facilitate searches for the QCD critical point.
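    The mechanism can be sketched with a much simpler setup than the paper's 1+1 dimensional evolution: a boost-invariant expansion with an Israel-Stewart bulk pressure and a ζ/s that is critically enhanced (∝ ξ³) near a toy Tc. The correlation-length profile, equation of state, and all coefficients below are illustrative assumptions, not the paper's inputs.

    ```python
    import numpy as np

    def xi_corr(T, Tc=0.16, width=0.02):
        # Toy critical correlation length (GeV units), peaked at T = Tc.
        return 0.5 * (1.0 + 4.0 * np.exp(-((T - Tc) / width) ** 2))

    def zeta_over_s(T, Tc=0.16):
        # Critical enhancement zeta/s ~ xi^3 near Tc on a small background.
        return 0.002 + 0.02 * (xi_corr(T) / xi_corr(Tc)) ** 3

    def bjorken_bulk(tau0=0.5, tau_end=12.0, T0=0.30, n=40000):
        """Boost-invariant expansion with an Israel-Stewart bulk pressure Pi
        relaxing toward its Navier-Stokes value -zeta/tau; toy EoS e = a T^4.
        Returns the minimum of (p + Pi)/p, i.e. the strongest softening of
        the effective pressure encountered while cooling through Tc."""
        a = 13.0
        taus = np.linspace(tau0, tau_end, n)
        dt = taus[1] - taus[0]
        e, Pi = a * T0 ** 4, 0.0
        min_ratio = 1.0
        for tau in taus:
            T = (e / a) ** 0.25
            s = 4.0 * a * T ** 3
            zeta = zeta_over_s(T) * s
            tau_Pi = max(0.5, 0.3 / T)       # crude relaxation time
            p = e / 3.0
            min_ratio = min(min_ratio, (p + Pi) / p)
            e += dt * (-(e + p + Pi) / tau)
            Pi += dt * (-(Pi + zeta / tau) / tau_Pi)
        return min_ratio
    ```

    As the system cools through Tc the negative bulk pressure dips the effective pressure below its equilibrium value, which is the "softening of the equation of state" the paper traces into flow and particle spectra.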

  1. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori, since this depends on the unknown underlying trait architecture, and selecting the kernel that gives the lowest p-value can lead to inflated type I error. We therefore propose practical strategies for KM testing when multiple candidate kernels are present, based on constructing composite kernels and on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels, with only modest differences in power versus using the best candidate kernel. PMID:23471868
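    The composite-kernel idea can be sketched in a few lines: build two candidate genotype kernels, combine them with a weight, and form a score-type statistic from the phenotype residuals. This is a generic illustration, not the paper's SKAT-style machinery; the kernels, the fixed weight `w`, and the omission of covariate adjustment and the perturbation-based null distribution are all simplifying assumptions.

    ```python
    import numpy as np

    def linear_kernel(G):
        # Genotype matrix G: n subjects x p SNPs coded 0/1/2.
        return G @ G.T

    def ibs_kernel(G):
        # Identity-by-state kernel: average allele sharing between subjects.
        n, p = G.shape
        K = np.zeros((n, n))
        for i in range(n):
            K[i] = (2.0 * p - np.abs(G - G[i]).sum(axis=1)) / (2.0 * p)
        return K

    def composite_score(G, y, w=0.5):
        """Score-type statistic Q = r' K r with a composite kernel
        K = w * K_linear + (1 - w) * K_IBS and residuals r = y - mean(y).
        (P-values via the perturbation procedure are omitted here.)"""
        K = w * linear_kernel(G) + (1.0 - w) * ibs_kernel(G)
        r = y - y.mean()
        return float(r @ K @ r)
    ```

    Large Q indicates that phenotype similarity tracks genotype similarity under the composite kernel; in the paper, significance is then assessed in a way that accounts for having considered multiple candidate kernels.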

  2. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy, and possibly conflicting data sets are among the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, and information extraction from nonlinear dynamic systems. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
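    The core iterative-filtering loop is easy to sketch: repeatedly subtract a low-pass trend from the signal until only the fastest oscillatory mode remains, then peel modes off one by one. The moving-average filter below is a crude stand-in for the LSEK low pass, and the window widths and iteration count are illustrative assumptions.

    ```python
    import numpy as np

    def lowpass(x, w):
        # Moving-average low-pass filter: a crude stand-in for the LSEK.
        return np.convolve(x, np.ones(w) / w, mode="same")

    def sift_mode(x, w, n_iter=5):
        # Iterative filtering: repeatedly remove the low-pass trend; fast
        # oscillatory content survives, slow content is driven out.
        mode = x.copy()
        for _ in range(n_iter):
            mode = mode - lowpass(mode, w)
        return mode

    def ifd(x, widths):
        """Iterative filtering decomposition: one mode per filter width
        (fast to slow), plus the final low-frequency residual."""
        modes, resid = [], np.asarray(x, dtype=float).copy()
        for w in widths:
            m = sift_mode(resid, w)
            modes.append(m)
            resid = resid - m
        modes.append(resid)
        return modes
    ```

    On a signal composed of a fast and a slow sinusoid, a single width matched to the fast period separates the two components cleanly away from the boundaries.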

  3. The site, size, spatial stability, and energetics of an X-ray flare kernel

    NASA Technical Reports Server (NTRS)

    Petrasso, R.; Gerassimenko, M.; Nolte, J.

    1979-01-01

    The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise and somewhat during its peak are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona rather than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2 × 10^24 erg/s is minimally required.

  4. Cosmological abundance of the QCD axion coupled to hidden photons

    NASA Astrophysics Data System (ADS)

    Kitajima, Naoya; Sekiguchi, Toyokazu; Takahashi, Fuminobu

    2018-06-01

    We study the cosmological evolution of the QCD axion coupled to hidden photons. For a moderately strong coupling, the motion of the axion field leads to an explosive production of hidden photons by tachyonic instability. We use lattice simulations to evaluate the cosmological abundance of the QCD axion. In doing so, we incorporate the backreaction of the produced hidden photons on the axion dynamics, which becomes significant in the non-linear regime. We find that the axion abundance is suppressed by at most O(10^2) for the decay constant f_a = 10^16 GeV, compared to the case without the coupling. For a sufficiently large coupling, the motion of the QCD axion becomes strongly damped, and as a result, the axion abundance is enhanced. Our results show that the cosmological upper bound on the axion decay constant can be relaxed by a factor of a few hundred for a certain range of the coupling to hidden photons.

  5. QCD In Extreme Conditions

    NASA Astrophysics Data System (ADS)

    Wilczek, Frank

    Introduction -- Symmetry and the Phenomena of QCD: Apparent and Actual Symmetries / Asymptotic Freedom / Confinement / Chiral Symmetry Breaking / Chiral Anomalies and Instantons -- High Temperature QCD: Asymptotic Properties: Significance of High Temperature QCD / Numerical Indications for Quasi-Free Behavior / Ideas About Quark-Gluon Plasma / Screening Versus Confinement / Models of Chiral Symmetry Breaking / More Refined Numerical Experiments -- High-Temperature QCD: Phase Transitions: Yoga of Phase Transitions and Order Parameters / Application to Glue Theories / Application to Chiral Transitions / Close Up on Two Flavors / A Genuine Critical Point! (?) -- High-Density QCD: Methods: Hopes, Doubts, and Fruition / Another Renormalization Group / Pairing Theory / Taming the Magnetic Singularity -- High-Density QCD: Color-Flavor Locking and Quark-Hadron Continuity: Gauge Symmetry (Non)Breaking / Symmetry Accounting / Elementary Excitations / A Modified Photon / Quark-Hadron Continuity / Remembrance of Things Past / More Quarks / Fewer Quarks and Reality

  6. Lattice QCD in rotating frames.

    PubMed

    Yamamoto, Arata; Hirono, Yuji

    2013-08-23

    We formulate lattice QCD in rotating frames to study the physics of QCD matter under rotation. We construct the lattice QCD action with the rotational metric and apply it to the Monte Carlo simulation. As the first application, we calculate the angular momenta of gluons and quarks in the rotating QCD vacuum. This new framework is useful to analyze various rotation-related phenomena in QCD.

  7. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  8. NLO Hierarchy of Wilson Lines Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balitsky, Ian

    2015-03-01

    The high-energy behavior of QCD amplitudes can be described in terms of the rapidity evolution of Wilson lines. I present the hierarchy of evolution equations for Wilson lines in the next-to-leading order.

  9. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.
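    The subspace-via-sampling idea can be illustrated with a generic sketch: approximate the kernel feature map from a few sampled landmarks (a Nystrom-style construction), then run winner-take-all competitive updates in that low-dimensional space, so the full n x n kernel matrix is never formed. This is not the paper's AKCL/PAKCL algorithm; the RBF kernel, landmark count, learning rate, and farthest-point seeding are all illustrative assumptions.

    ```python
    import numpy as np

    def rbf(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def nystrom_features(X, m=15, gamma=0.5, seed=0):
        """Approximate kernel feature map from m sampled landmarks, so that
        phi(x) . phi(y) ~ k(x, y) without forming the full n x n kernel."""
        rng = np.random.default_rng(seed)
        L = X[rng.choice(len(X), size=m, replace=False)]
        vals, vecs = np.linalg.eigh(rbf(L, L, gamma))
        keep = vals > 1e-8
        return rbf(X, L, gamma) @ (vecs[:, keep] / np.sqrt(vals[keep]))

    def competitive_learn(Phi, k=2, lr=0.1, epochs=15, seed=0):
        # Winner-take-all (competitive) updates in the approximate feature
        # space; centers seeded by a farthest-point sweep to avoid dead units.
        centers = np.empty((k, Phi.shape[1]))
        centers[0] = Phi[0]
        for j in range(1, k):
            d = ((Phi[:, None, :] - centers[None, :j]) ** 2).sum(-1).min(axis=1)
            centers[j] = Phi[np.argmax(d)]
        rng = np.random.default_rng(seed)
        for _ in range(epochs):
            for i in rng.permutation(len(Phi)):
                w = np.argmin(((centers - Phi[i]) ** 2).sum(axis=1))  # winner
                centers[w] += lr * (Phi[i] - centers[w])
        return np.argmin(((Phi[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    ```

    The cost is dominated by the n x m landmark kernel instead of the n x n matrix, which is the kind of reduction the AKCL analysis formalizes.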

  10. The gluon density of the proton at low x from a QCD analysis of F2

    NASA Astrophysics Data System (ADS)

    Aid, S.; Andreev, V.; Andrieu, B.; Appuhn, R.-D.; Arpagaus, M.; Babaev, A.; Baehr, J.; Bán, J.; Ban, Y.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Barth, M.; Bassler, U.; Beck, H. P.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bernet, R.; Bertrand-Coremans, G.; Besançon, M.; Beyer, R.; Biddulph, P.; Bispham, P.; Bizot, J. C.; Blobel, V.; Borras, K.; Botterweck, F.; Boudry, V.; Braemer, A.; Brasse, F.; Braunschweig, W.; Brisson, V.; Bruncko, D.; Brune, C.; Buchholz, R.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Burton, M.; Buschhorn, G.; Campbell, A. J.; Carli, T.; Charles, F.; Charlet, M.; Clarke, D.; Clegg, A. B.; Clerbaux, B.; Colombo, M.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Courau, A.; Coutures, Ch.; Cozzika, G.; Criegee, L.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; Delcourt, B.; Del Buono, L.; De Roeck, A.; De Wolf, E. A.; Di Nezza, P.; Dollfus, C.; Dowell, J. D.; Dreis, H. B.; Droutskoi, A.; Duboc, J.; Düllmann, D.; Dünger, O.; Duhm, H.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Ehrlichmann, H.; Eichenberger, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Ellison, R. J.; Elsen, E.; Erdmann, M.; Erdmann, W.; Evrard, E.; Favart, L.; Fedotov, A.; Feeken, D.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Fominykh, B.; Forbush, M.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Gabathuler, E.; Gabathuler, K.; Gamerdinger, K.; Garvey, J.; Gayler, J.; Gebauer, M.; Gellrich, A.; Genzel, H.; Gerhards, R.; Goerlach, U.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goldner, D.; Gonzalez-Pineiro, B.; Gorelov, I.; Goritchev, P.; Grab, C.; Grässler, H.; Grässler, R.; Greenshaw, T.; Grindhammer, G.; Gruber, A.; Gruber, C.; Haack, J.; Haidt, D.; Hajduk, L.; Hamon, O.; Hampel, M.; Hanlon, E. M.; Hapke, M.; Haynes, W. 
J.; Heatherington, J.; Heinzelmann, G.; Henderson, R. C. W.; Henschel, H.; Herynek, I.; Hess, M. F.; Hildesheim, W.; Hill, P.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Hoeger, K. C.; Höppner, M.; Horisberger, R.; Hudgson, V. L.; Huet, Ph.; Hütte, M.; Hufnagel, H.; Ibbotson, M.; Itterbeck, H.; Jabiol, M.-A.; Jacholkowska, A.; Jacobsson, C.; Jaffre, M.; Janoth, J.; Jansen, T.; Jönsson, L.; Johnson, D. P.; Johnson, L.; Jung, H.; Kalmus, P. I. P.; Kant, D.; Kaschowitz, R.; Kasselmann, P.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Ko, W.; Köhler, T.; Köhne, J. H.; Kolanoski, H.; Kole, F.; Kolya, S. D.; Korbel, V.; Korn, M.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Krüger, U.; Krüner-Marquis, U.; Kubenka, J. P.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Kuznik, B.; Lacour, D.; Lamarche, F.; Lander, R.; Landon, M. P. J.; Lange, W.; Lanius, P.; Laporte, J.-F.; Lebedev, A.; Leverenz, C.; Levonian, S.; Ley, Ch.; Lindner, A.; Lindström, G.; Link, J.; Linsel, F.; Lipinski, J.; List, B.; Lobo, G.; Loch, P.; Lohmander, H.; Lomas, J.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Magnussen, N.; Malinovski, E.; Mani, S.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Masson, S.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Mercer, D.; Merz, T.; Meyer, C. A.; Meyer, H.; Meyer, J.; Migliori, A.; Mikocki, S.; Milstead, D.; Moreau, F.; Morris, J. V.; Mroczko, E.; Müller, G.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Newman, P. R.; Newton, D.; Neyret, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Nisius, R.; Nowak, G.; Noyes, G. W.; Nyberg-Werther, M.; Oakden, M.; Oberlack, H.; Obrock, U.; Olsson, J. E.; Ozerov, D.; Panaro, E.; Panitch, A.; Pascaud, C.; Patel, G. 
D.; Peppel, E.; Perez, E.; Phillips, J. P.; Pichler, Ch.; Pieuchot, A.; Pitzl, D.; Pope, G.; Prell, S.; Prosi, R.; Rabbertz, K.; Rädel, G.; Raupach, F.; Reimer, P.; Reinshagen, S.; Ribarics, P.; Rick, H.; Riech, V.; Riedlberger, J.; Riess, S.; Rietz, M.; Rizvi, E.; Robertson, S. M.; Robmann, P.; Roloff, H. E.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Rylko, R.; Sahlmann, N.; Sanchez, E.; Sankey, D. P. C.; Schacht, P.; Schiek, S.; Schleper, P.; von Schlippe, W.; Schmidt, C.; Schmidt, D.; Schmidt, G.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Schwind, A.; Sefkow, F.; Seidel, M.; Sell, R.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shooshtari, H.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Smirnov, P.; Smith, J. R.; Solochenko, V.; Soloviev, Y.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Starosta, R.; Steenbock, M.; Steffen, P.; Steinberg, R.; Stella, B.; Stephens, K.; Stier, J.; Stiewe, J.; Stösslein, U.; Stolze, K.; Strachota, J.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Tchernyshov, V.; Thiebaux, C.; Thompson, G.; Truöl, P.; Turnau, J.; Tutas, J.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; Van Esch, P.; Van Mechelen, P.; Vartapetian, A.; Vazdik, Y.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Walker, I. W.; Walther, A.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wellisch, H. P.; West, L. R.; Willard, S.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wright, A. E.; Wünsch, E.; Wulff, N.; Yiou, T. P.; Žáček, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zimmer, M.; Zimmermann, W.; Zomer, F.; Zuber, K.; H1 Collaboration

    1995-02-01

We present a QCD analysis of the proton structure function F2 measured by the H1 experiment at HERA, combined with data from previous fixed target experiments. The gluon density is extracted from the scaling violations of F2 in the range 2×10⁻⁴ < x < 3×10⁻² and compared with an approximate solution of the QCD evolution equations. The gluon density is found to rise steeply with decreasing x.
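
    The "steep rise" of the gluon at low x is commonly summarized by a power-like behavior xg(x) ∝ x^(−λ). As a hedged illustration (not the H1 analysis itself, and with purely toy numbers), the slope λ can be recovered from (x, xg) points by a linear fit in log-log space:

```python
import numpy as np

# Hypothetical illustration: parametrize the low-x gluon as xg(x) = A * x**(-lam)
# and recover (A, lam) from sampled points via a log-log linear fit.
def fit_low_x_rise(x, xg):
    """Fit xg(x) = A * x**(-lam); returns (A, lam)."""
    slope, intercept = np.polyfit(np.log(x), np.log(xg), 1)
    return np.exp(intercept), -slope

# Toy data in the HERA x-range quoted above, generated with A = 1.2, lam = 0.3.
x = np.logspace(-4, -2, 20)
xg = 1.2 * x**-0.3
A, lam = fit_low_x_rise(x, xg)
```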

  11. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods remain applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves performance similar to or better than that of the radial basis function kernel with its parameter tuned by cross validation, suggesting that the TL1 kernel is a promising nonlinear kernel for classification tasks.
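
    A minimal sketch of a truncated ℓ1-distance kernel of the kind described above, assuming the form K(x, y) = max(ρ − ‖x − y‖₁, 0) with truncation radius ρ (an assumption for illustration, not the paper's exact parametrization). Beyond distance ρ the kernel vanishes, so each sample only interacts with a local neighbourhood, which is the source of the "linear in each subregion" behaviour:

```python
import numpy as np

# TL1-style kernel sketch: K(x, y) = max(rho - ||x - y||_1, 0).
def tl1_kernel(X, Y, rho):
    """Gram matrix of the truncated l1-distance kernel between rows of X and Y."""
    d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)  # pairwise l1 distances
    return np.maximum(rho - d, 0.0)                        # truncate at radius rho

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
K = tl1_kernel(X, X, rho=1.0)
```

    Because this Gram matrix is in general indefinite, it would be passed to toolboxes via their "precomputed kernel" interface, as the abstract suggests.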

  12. QCD for Postgraduates (1/5)

    ScienceCinema

    Zanderighi, Giulia

    2018-04-26

    Modern QCD - Lecture 1 Starting from the QCD Lagrangian we will revisit some basic QCD concepts and derive fundamental properties like gauge invariance and isospin symmetry and will discuss the Feynman rules of the theory. We will then focus on the gauge group of QCD and derive the Casimirs CF and CA and some useful color identities.
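
    The Casimirs mentioned in the lecture outline can be checked numerically. A small sketch (not course material), building the Gell-Mann matrices, taking T^a = λ^a/2, and verifying the fundamental Casimir Σ_a T^a T^a = C_F·1 with C_F = (N²−1)/(2N) = 4/3 and the adjoint Casimir Σ_{c,d} f^{acd} f^{bcd} = C_A·δ^{ab} with C_A = N = 3:

```python
import numpy as np

i = 1j
# The eight Gell-Mann matrices of SU(3).
L = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -i, 0], [i, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -i], [0, 0, 0], [i, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -i], [0, i, 0]],
    np.diag([1, 1, -2]) / np.sqrt(3),
])
T = L / 2.0  # generators in the fundamental representation

# Fundamental Casimir: sum_a T^a T^a should equal (4/3) * identity.
CF = sum(t @ t for t in T)

# Structure constants f^{abc} = -2i Tr([T^a, T^b] T^c) (real for SU(3)).
f = np.array([[[(-2j * np.trace((T[a] @ T[b] - T[b] @ T[a]) @ T[c])).real
                for c in range(8)] for b in range(8)] for a in range(8)])
# Adjoint Casimir: sum_{c,d} f^{acd} f^{bcd} should equal 3 * identity.
CA = np.einsum('acd,bcd->ab', f, f)
```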

  13. QCD evolution of (un)polarized gluon TMDPDFs and the Higgs qT-distribution

    NASA Astrophysics Data System (ADS)

    Echevarria, Miguel G.; Kasemets, Tomas; Mulders, Piet J.; Pisano, Cristian

    2015-07-01

We provide the proper definition of all the leading-twist (un)polarized gluon transverse momentum dependent parton distribution functions (TMDPDFs), by considering the Higgs boson transverse momentum distribution in hadron-hadron collisions and deriving the factorization theorem in terms of them. We show that the evolution of all the (un)polarized gluon TMDPDFs is driven by a universal evolution kernel, which can be resummed up to next-to-next-to-leading-logarithmic accuracy. Considering the proper definition of gluon TMDPDFs, we perform an explicit next-to-leading-order calculation of the unpolarized (f1g), linearly polarized (h1⊥g) and helicity (g1Lg) gluon TMDPDFs, and show that, as expected, they are free from rapidity divergences. As a byproduct, we obtain the Wilson coefficients of the refactorization of these TMDPDFs at large transverse momentum. In particular, the coefficient of g1Lg, which has never been calculated before, constitutes a new and necessary ingredient for a reliable phenomenological extraction of this quantity, for instance at RHIC or the future AFTER@LHC or Electron-Ion Collider. The coefficients of f1g and h1⊥g have never been calculated in the present formalism, although they could be obtained by carefully collecting and recasting previous results in the new TMD formalism. We apply these results to analyze the contribution of linearly polarized gluons at different scales, relevant, for instance, for the inclusive production of the Higgs boson and the C-even pseudoscalar bottomonium state ηb. Applying our resummation scheme we finally provide predictions for the Higgs boson qT-distribution at the LHC.

  14. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    NASA Astrophysics Data System (ADS)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  15. Integrated Model of Multiple Kernel Learning and Differential Evolution for EUR/USD Trading

    PubMed Central

    Deng, Shangkun; Sakurai, Akito

    2014-01-01

    Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI), while it is also combined with MKL as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence. Next, a combined trading signal is optimized by DE using the inputs from the prediction model and technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits. PMID:25097891
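
    A hedged sketch of the relative strength index (RSI) used above as a trading input. This is the textbook definition with a simple average of gains and losses over the lookback window; the paper may use Wilder's exponential smoothing instead, so treat this as an illustration rather than the authors' implementation:

```python
import numpy as np

def rsi(prices, period=14):
    """Relative strength index over the last `period` price changes."""
    deltas = np.diff(prices)
    window = deltas[-period:]                 # most recent `period` changes
    avg_gain = np.clip(window, 0, None).mean()
    avg_loss = -np.clip(window, None, 0).mean()
    if avg_loss == 0:                         # no losses -> maximally overbought
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

    By construction the index lies in [0, 100]: a monotonically rising series gives 100, a monotonically falling one gives 0, and balanced gains and losses give 50.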

  17. APFEL: A PDF evolution library with QED corrections

    NASA Astrophysics Data System (ADS)

    Bertone, Valerio; Carrazza, Stefano; Rojo, Juan

    2014-06-01

Quantum electrodynamics and electroweak corrections are important ingredients for many theoretical predictions at the LHC. This paper documents APFEL, a new PDF evolution package that makes it possible, for the first time, to perform DGLAP evolution up to NNLO in QCD and to LO in QED, in the variable-flavor-number scheme and with either pole or MSbar heavy quark masses. APFEL consistently accounts for the QED corrections to the evolution of quark and gluon PDFs and for the contribution from the photon PDF in the proton. The coupled QCD ⊗ QED equations are solved in x-space by means of higher order interpolation, followed by Runge-Kutta solution of the resulting discretized evolution equations. APFEL is based on an innovative and flexible methodology for the sequential solution of the QCD and QED evolution equations and their combination. In addition to PDF evolution, APFEL provides a module that computes Deep-Inelastic Scattering structure functions in the FONLL general-mass variable-flavor-number scheme up to O(αs²). All the functionalities of APFEL can be accessed via a Graphical User Interface, supplemented with a variety of plotting tools for PDFs, parton luminosities and structure functions. Written in FORTRAN 77, APFEL can also be used via the C/C++ and Python interfaces, and is publicly available from the HepForge repository.
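
    As a rough illustration of the x-space strategy described above (interpolation on an x-grid plus Runge-Kutta in ln Q²), here is a toy leading-order non-singlet DGLAP solver. The grid, input distribution, quadrature and one-loop coupling are all illustrative choices and bear no relation to APFEL's actual grids or accuracy; the plus-prescription of P_qq(z) = C_F[(1+z²)/(1−z)₊ + (3/2)δ(1−z)] is made explicit in the convolution:

```python
import numpy as np

CF, NF = 4.0 / 3.0, 4
B0 = (33 - 2 * NF) / (12 * np.pi)      # one-loop beta coefficient
LAMBDA2 = 0.2**2                       # toy Lambda_QCD^2 in GeV^2

def alpha_s(t):                        # t = ln Q^2, one-loop running
    return 1.0 / (B0 * (t - np.log(LAMBDA2)))

xs = np.logspace(-3, -1e-4, 80)        # x-grid (stops just below x = 1)

def convolution(f):
    """(P_qq (x) f)(x) on the grid, plus-distribution written out explicitly."""
    out = np.empty_like(f)
    for i, x in enumerate(xs):
        u = (np.arange(400) + 0.5) / 400           # midpoint rule on z in (x, 1)
        z = x + (1 - x) * u
        fxz = np.interp(np.log(x / z), np.log(xs), f)   # f(x/z) by interpolation
        integrand = ((1 + z**2) * fxz / z - 2 * f[i]) / (1 - z)
        integral = (1 - x) * integrand.mean()
        out[i] = CF * (integral + f[i] * (2 * np.log(1 - x) + 1.5))
    return out

def evolve(f, q2_0, q2_1, steps=20):
    """RK4 in t = ln Q^2 for df/dt = alpha_s(t)/(2 pi) * (P (x) f)."""
    t, h = np.log(q2_0), (np.log(q2_1) - np.log(q2_0)) / steps
    rhs = lambda t, f: alpha_s(t) / (2 * np.pi) * convolution(f)
    for _ in range(steps):
        k1 = rhs(t, f)
        k2 = rhs(t + h / 2, f + h / 2 * k1)
        k3 = rhs(t + h / 2, f + h / 2 * k2)
        k4 = rhs(t + h, f + h * k3)
        f = f + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return f

f0 = np.sqrt(xs) * (1 - xs) ** 3       # toy valence-like input at Q^2 = 4 GeV^2
f1 = evolve(f0.copy(), 4.0, 100.0)     # evolve to Q^2 = 100 GeV^2
```

    The qualitative DGLAP signatures survive even in this crude setup: evolution depletes the distribution at large x and the non-singlet momentum fraction decreases.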

  19. The QCD running coupling

    DOE PAGES

    Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.

    2016-05-09

Here, we review present knowledge on αs, the Quantum Chromodynamics (QCD) running coupling. The dependence of αs(Q2) on momentum transfer Q encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We survey our present theoretical and empirical knowledge of αs(Q2), including constraints at high Q2 predicted by perturbative QCD, and constraints at small Q2 based on models of nonperturbative dynamics. In the first, introductory, part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss αs(Q2) in the high momentum transfer domain of QCD. We review how αs is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization scale ambiguity. We also report recent important experimental measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality", which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of the gauge and renormalization scheme. In the last part of the review, we discuss αs(Q2) in the low momentum transfer domain, where there has been no consensus on how to define αs(Q2) or its analytic behavior. We discuss the various approaches used for low energy calculations, among them the light-front holographic approach to QCD in the strongly coupled
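
    A one-loop sketch of the running discussed above: integrate the renormalization group equation dαs/d ln Q² = −b0 αs² numerically (RK4) and compare with the closed-form one-loop solution αs(Q²) = 1/(b0 ln(Q²/Λ²)). The values of nf and Λ are toy choices for illustration, not a fit to data:

```python
import numpy as np

NF = 5
B0 = (33 - 2 * NF) / (12 * np.pi)      # one-loop beta coefficient
LAMBDA2 = 0.21**2                      # toy Lambda_QCD^2 in GeV^2

def alpha_analytic(q2):
    """Closed-form one-loop running coupling."""
    return 1.0 / (B0 * np.log(q2 / LAMBDA2))

def run_alpha(q2_0, q2_1, steps=200):
    """RK4 integration of d alpha/d ln Q^2 = -b0 * alpha^2 from Q^2_0 to Q^2_1."""
    beta = lambda a: -B0 * a * a
    a = alpha_analytic(q2_0)
    h = (np.log(q2_1) - np.log(q2_0)) / steps
    for _ in range(steps):
        k1 = beta(a); k2 = beta(a + h / 2 * k1)
        k3 = beta(a + h / 2 * k2); k4 = beta(a + h * k3)
        a += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return a

a_mz = run_alpha(4.0, 91.19**2)        # run from Q = 2 GeV up to Q ~ M_Z
```

    The numerical solution reproduces the analytic one, and the coupling decreases with increasing Q, which is asymptotic freedom in miniature.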

  20. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
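
    A sketch of the entropy ranking at the heart of KECA, as described above. The quadratic Rényi entropy estimate is V = (1/N²) Σ_ij K_ij; decomposing the Gram matrix K = E diag(λ) Eᵀ gives V = Σ_i λ_i (e_i·1/N)², so eigenpairs are ranked by their entropy contribution rather than by λ alone (the OKECA rotation itself is omitted here):

```python
import numpy as np

def keca_ranking(X, sigma):
    """Rank RBF-kernel eigenpairs by their Renyi-entropy contribution."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2 * sigma**2))             # Gaussian (RBF) Gram matrix
    lam, E = np.linalg.eigh(K)                   # eigenvectors are the columns of E
    contrib = lam * (E.sum(axis=0) / n) ** 2     # entropy contribution per eigenpair
    order = np.argsort(contrib)[::-1]            # most entropy-preserving first
    return K, lam, contrib, order

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
K, lam, contrib, order = keca_ranking(X, sigma=1.0)
```

    A useful sanity check is that the contributions sum exactly to the entropy estimate (1/N²) Σ_ij K_ij, i.e. the mean of the Gram matrix.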

  1. The QCD running coupling

    NASA Astrophysics Data System (ADS)

    Deur, Alexandre; Brodsky, Stanley J.; de Téramond, Guy F.

    2016-09-01

We review the present theoretical and empirical knowledge for αs, the fundamental coupling underlying the interactions of quarks and gluons in Quantum Chromodynamics (QCD). The dependence of αs(Q2) on momentum transfer Q encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We review constraints on αs(Q2) at high Q2, as predicted by perturbative QCD, and its analytic behavior at small Q2, based on models of nonperturbative dynamics. In the introductory part of this review, we explain the phenomenological meaning of the coupling, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss the behavior of αs(Q2) in the high momentum transfer domain of QCD. We review how αs is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as "Commensurate Scale Relations" which connect the various definitions of the QCD coupling without renormalization-scale ambiguity. We also report recent significant measurements and advanced theoretical analyses which have led to precise QCD predictions at high energy. As an example of an important optimization procedure, we discuss the "Principle of Maximum Conformality", which enhances QCD's predictive power by removing the dependence of the predictions for physical observables on the choice of theoretical conventions such as the renormalization scheme. In the last part of the review, we discuss the challenge of understanding the analytic behavior of αs(Q2) in the low momentum transfer domain. We survey various theoretical models for the nonperturbative strongly coupled regime, such as the light-front holographic approach to QCD. This new framework predicts the form of the quark-confinement potential underlying hadron spectroscopy and

  2. Carbothermic Synthesis of ~820-μm UN Kernels. Investigation of Process Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindemer, Terrence; Silva, Chinthaka M; Henry, Jr, John James

    2015-06-01

This report details the continued investigation of process variables involved in converting sol-gel-derived urania-carbon microspheres to ~820-μm-dia. UN fuel kernels in flow-through, vertical refractory-metal crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO3-H2O-C microspheres in Ar and H2-containing gases, conversion of the resulting UO2-C kernels to dense UO2:2UC in the same gases and vacuum, and its conversion in N2 to UC1-xNx. The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing the precursor UO2:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC1-xNx at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Decreasing the UC1-xNx kernel carbide component via HCN evolution was shown to be quantitatively consistent with present and past experiments and the only useful application of H2 in the entire process.

  3. Cosmological evolution of the Higgs boson's vacuum expectation value

    NASA Astrophysics Data System (ADS)

    Calmet, Xavier

    2017-11-01

We point out that the expansion of the universe leads to a cosmological time evolution of the vacuum expectation value of the Higgs boson. Within the standard model of particle physics, this evolution leads to a cosmological time evolution of the masses of the fermions and of the electroweak gauge bosons, while the scale of Quantum Chromodynamics (QCD) remains constant. Precise measurements of the cosmological time evolution of μ = m_e/m_p, where m_e and m_p are, respectively, the electron and proton mass (the latter essentially determined by the QCD scale), therefore provide a test of the standard models of particle physics and of cosmology. This ratio can be measured using modern atomic clocks.

  4. TEMPORAL EVOLUTION AND SPATIAL DISTRIBUTION OF WHITE-LIGHT FLARE KERNELS IN A SOLAR FLARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawate, T.; Ishii, T. T.; Nakatani, Y.

    2016-12-10

On 2011 September 6, we observed an X2.1-class flare in continuum and Hα with a frame rate of about 30 Hz. After processing images of the event using a speckle-masking image reconstruction, we identified white-light (WL) flare ribbons on opposite sides of the magnetic neutral line. We derive the light-curve decay times of the WL flare kernels at each resolution element by assuming that the kernels consist of one or two components that decay exponentially, starting from the peak time. As a result, 42% of the pixels have two decay-time components, with average decay times of 15.6 and 587 s, whereas the average decay time is 254 s for WL kernels with only one decay-time component. The peak intensities of the shorter decay-time component exhibit good spatial correlation with the WL intensity, whereas the peak intensities of the longer decay-time components tend to be larger in the early phase of the flare at the inner part of the flare ribbons, close to the magnetic neutral line. The average intensity of the longer decay-time components is 1.78 times higher than that of the shorter decay-time components. If the shorter decay time is determined by either the chromospheric cooling time or the nonthermal ionization timescale, and the longer decay time is attributed to the coronal cooling time, this result suggests that WL sources from both regions appear in 42% of the WL kernels and that WL emission of coronal origin is sometimes stronger than that of chromospheric origin.
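
    A hedged sketch of a two-component exponential decay fit like the one described above, using classical "curve peeling": fit the slow component on the late-time tail in log space, subtract it, then fit the fast component on the early times. The amplitudes and decay times below are toy values chosen to match the numbers quoted in the abstract, not the authors' actual fitting procedure:

```python
import numpy as np

def peel_two_exponentials(t, y, t_late, t_early):
    """Estimate (tau_fast, tau_slow) of y = A*exp(-t/tau_fast) + B*exp(-t/tau_slow)."""
    late = t >= t_late
    s_slow, b_slow = np.polyfit(t[late], np.log(y[late]), 1)   # slow tail, log-linear
    tau_slow, a_slow = -1.0 / s_slow, np.exp(b_slow)
    resid = y - a_slow * np.exp(-t / tau_slow)                 # strip slow component
    early = (t <= t_early) & (resid > 0)
    s_fast, _ = np.polyfit(t[early], np.log(resid[early]), 1)  # fast part, log-linear
    return -1.0 / s_fast, tau_slow

t = np.arange(0.0, 2000.0, 1.0)                                # seconds
y = 1.0 * np.exp(-t / 15.6) + 1.78 * np.exp(-t / 587.0)        # toy light curve
tau_fast, tau_slow = peel_two_exponentials(t, y, t_late=1200.0, t_early=30.0)
```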

  5. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  7. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  8. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  9. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

Having an understanding of UNICOS kernel internals is valuable; however, that knowledge is only half the value. The other half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be exploited. This paper discusses the intricacies of developing utilities that draw on kernel information, and presents algorithms, logic, and code for accessing it. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can make use of kernel information are also discussed.

  10. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.

    PubMed

    Poon, Art F Y

    2015-09-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
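
    A minimal sketch of a subset-tree kernel of the Collins-Duffy type, similar in spirit to the tree-shape kernel described above (the phylodynamic version compares unlabeled tree shapes from computational-linguistics machinery; here labels are kept for simplicity, and the recursion and the decay factor λ are the standard textbook ones, not the paper's exact kernel). Trees are nested tuples `(label, child, child, ...)`:

```python
def cd_kernel(t1, t2, lam=0.5):
    """Collins-Duffy-style subset tree kernel; lam downweights larger fragments."""
    def nodes(t):
        out = [t]
        for c in t[1:]:
            out += nodes(c)
        return out

    def production(t):                   # node label plus the labels of its children
        return (t[0], tuple(c[0] for c in t[1:]))

    def C(n1, n2):                       # number of common fragments rooted here
        if production(n1) != production(n2):
            return 0.0
        if len(n1) == 1:                 # matching leaves
            return lam
        prod = lam
        for c1, c2 in zip(n1[1:], n2[1:]):
            prod *= 1.0 + C(c1, c2)
        return prod

    return sum(C(a, b) for a in nodes(t1) for b in nodes(t2))

ta = ("S", ("A",), ("B",))
tb = ("S", ("A",), ("B",))
tc = ("S", ("A",), ("C",))
```

    Identical trees share every fragment, so k(ta, tb) is maximal, while ta and tc only share the leaf A.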

  11. Hard QCD processes in the nuclear medium

    NASA Astrophysics Data System (ADS)

    Freese, Adam

The environment inside the atomic nucleus is one of the most fascinating arenas for the study of quantum chromodynamics (QCD). The strongly-interacting nature of the nuclear medium affects the nature of both QCD processes and the quark-gluon structure of hadrons, allowing several unique aspects of the strong nuclear force to be investigated in reactions involving nuclear targets. The research presented in this dissertation explores two aspects of nuclear QCD: firstly, the partonic structure of the nucleus itself; and secondly, the use of the nucleus as a micro-laboratory in which QCD processes can be studied. The partonic structure of the nucleus is calculated in this work by deriving and utilizing a convolution formula. The hadronic structure of the nucleus and the quark-gluon structure of its constituent nucleons are taken together to determine the nuclear partonic structure. Light cone descriptions of short range correlations, in terms of both hadronic and partonic structure, are derived and taken into account. Medium modifications of the bound nucleons are accounted for using the color screening model, and QCD evolution is used to connect nuclear partonic structure at vastly different energy scales. The formalism developed for calculating nuclear partonic structure is applied to inclusive dijet production from proton-nucleus collisions at LHC kinematics, and novel predictions are calculated and presented for the dijet cross section. The nucleus is investigated as a micro-laboratory in vector meson photoproduction reactions. In particular, the deuteron is studied in the break-up reaction γd → Vpn, for both the φ(1020) and J/ψ vector mesons. The generalized eikonal approximation is utilized, allowing unambiguous separation of the impulse approximation and final state interactions (FSIs). Two peaks or valleys are seen in the angular distribution of the reaction cross section, each of which is due to an FSI between either the proton and neutron, or the
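
    A toy version of the convolution idea mentioned above: the nuclear structure function as a smearing of the nucleon one over the nucleon light-cone momentum fraction y, F2_A(x) = ∫ dy f_{N/A}(y) F2_N(x/y). The narrow Gaussian f_{N/A} and the F2_N parametrization are illustrative assumptions, not the dissertation's nuclear model:

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature."""
    return float(((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2).sum())

def f2_nucleon(x):
    """Toy nucleon structure function, zero outside 0 < x < 1."""
    return np.where((x > 0) & (x < 1), x**-0.2 * (1 - np.clip(x, 0, 1))**3, 0.0)

def f2_nuclear(x, width=0.04, ny=4000):
    """F2_A(x) = int dy f_{N/A}(y) F2_N(x/y) with a narrow Gaussian smearing."""
    y = np.linspace(1e-3, 1.3, ny)          # y can slightly exceed 1 (Fermi motion)
    fy = np.exp(-((y - 1.0) ** 2) / (2 * width**2))
    fy /= trapz(fy, y)                      # normalize to unit baryon number
    return trapz(fy * f2_nucleon(x / y), y)

ratio = f2_nuclear(0.3) / f2_nucleon(0.3)   # nuclear-to-nucleon ratio at x = 0.3
```

    With a smearing function this narrow, the ratio stays close to unity at moderate x; realistic light-cone distributions, short-range correlations and medium modifications shift it away from 1.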

  12. QCD as a Theory of Hadrons

    NASA Astrophysics Data System (ADS)

    Narison, Stephan

    2004-05-01

    About Stephan Narison; Outline of the book; Preface; Acknowledgements; Part I. General Introduction: 1. A short flash on particle physics; 2. The pre-QCD era; 3. The QCD story; 4. Field theory ingredients; Part II. QCD Gauge Theory: 5. Lagrangian and gauge invariance; 6. Quantization using path integral; 7. QCD and its global invariance; Part III. MS scheme for QCD and QED: Introduction; 8. Dimensional regularization; 9. The MS renormalization scheme; 10. Renormalization of operators using the background field method; 11. The renormalization group; 12. Other renormalization schemes; 13. MS scheme for QED; 14. High-precision low-energy QED tests; Part IV. Deep Inelastic Scattering at Hadron Colliders: 15. OPE for deep inelastic scattering; 16. Unpolarized lepton-hadron scattering; 17. The Altarelli-Parisi equation; 18. More on unpolarized deep inelastic scatterings; 19. Polarized deep-inelastic processes; 20. Drell-Yan process; 21. One 'prompt photon' inclusive production; Part V. Hard Processes in e+e- Collisions: Introduction; 22. One hadron inclusive production; 23. gg scatterings and the 'spin' of the photon; 24. QCD jets; 25. Total inclusive hadron productions; Part VI. Summary of QCD Tests and as Measurements; Part VII. Power Corrections in QCD: 26. Introduction; 27. The SVZ expansion; 28. Technologies for evaluating Wilson coefficients; 29. Renormalons; 30. Beyond the SVZ expansion; Part VIII. QCD Two-Point Functions: 31. References guide to original works; 32. (Pseudo)scalar correlators; 33. (Axial-)vector two-point functions; 34. Tensor-quark correlator; 35. Baryonic correlators; 36. Four-quark correlators; 37. Gluonia correlators; 38. Hybrid correlators; 39. Correlators in x-space; Part IX. QCD Non-Perturbative Methods: 40. Introduction; 41. Lattice gauge theory; 42. Chiral perturbation theory; 43. Models of the QCD effective action; 44. Heavy quark effective theory; 45. Potential approaches to quarkonia; 46. On monopole and confinement; Part X. QCD

  13. QCD as a Theory of Hadrons

    NASA Astrophysics Data System (ADS)

    Narison, Stephan

    2007-07-01

    About Stephan Narison; Outline of the book; Preface; Acknowledgements; Part I. General Introduction: 1. A short flash on particle physics; 2. The pre-QCD era; 3. The QCD story; 4. Field theory ingredients; Part II. QCD Gauge Theory: 5. Lagrangian and gauge invariance; 6. Quantization using path integral; 7. QCD and its global invariance; Part III. MS scheme for QCD and QED: Introduction; 8. Dimensional regularization; 9. The MS renormalization scheme; 10. Renormalization of operators using the background field method; 11. The renormalization group; 12. Other renormalization schemes; 13. MS scheme for QED; 14. High-precision low-energy QED tests; Part IV. Deep Inelastic Scattering at Hadron Colliders: 15. OPE for deep inelastic scattering; 16. Unpolarized lepton-hadron scattering; 17. The Altarelli-Parisi equation; 18. More on unpolarized deep inelastic scatterings; 19. Polarized deep-inelastic processes; 20. Drell-Yan process; 21. One 'prompt photon' inclusive production; Part V. Hard Processes in e+e- Collisions: Introduction; 22. One hadron inclusive production; 23. gg scatterings and the 'spin' of the photon; 24. QCD jets; 25. Total inclusive hadron productions; Part VI. Summary of QCD Tests and as Measurements; Part VII. Power Corrections in QCD: 26. Introduction; 27. The SVZ expansion; 28. Technologies for evaluating Wilson coefficients; 29. Renormalons; 30. Beyond the SVZ expansion; Part VIII. QCD Two-Point Functions: 31. References guide to original works; 32. (Pseudo)scalar correlators; 33. (Axial-)vector two-point functions; 34. Tensor-quark correlator; 35. Baryonic correlators; 36. Four-quark correlators; 37. Gluonia correlators; 38. Hybrid correlators; 39. Correlators in x-space; Part IX. QCD Non-Perturbative Methods: 40. Introduction; 41. Lattice gauge theory; 42. Chiral perturbation theory; 43. Models of the QCD effective action; 44. Heavy quark effective theory; 45. Potential approaches to quarkonia; 46. On monopole and confinement; Part X. QCD

  14. QCD for Postgraduates (2/5)

    ScienceCinema

    Zanderighi, Giulia

    2018-05-21

    Modern QCD - Lecture 2. We will start by discussing the matter content of the theory and revisit the experimental measurements that led to the discovery of quarks. We will then consider a classic QCD observable, the R-ratio, and use it to illustrate the appearance of UV divergences and the need to renormalize the coupling constant of QCD. We will then discuss asymptotic freedom and confinement. Finally, we will examine a case where soft and collinear infrared divergences appear, discuss the soft approximation in QCD, and introduce the concept of infrared-safe jets.

  15. Renormalization of Extended QCD2

    NASA Astrophysics Data System (ADS)

    Fukaya, Hidenori; Yamamura, Ryo

    2015-10-01

    Extended QCD (XQCD), proposed by Kaplan [D. B. Kaplan, arXiv:1306.5818], is an interesting reformulation of QCD with additional bosonic auxiliary fields. While its partition function is kept exactly the same as that of original QCD, XQCD naturally contains properties of low-energy hadronic models. We analyze the renormalization group flow of 2D (X)QCD, which is solvable in the limit of a large number of colors N_c, to understand what kind of roles the auxiliary degrees of freedom play and how the hadronic picture emerges in the low-energy region.

  16. FOREWORD: Extreme QCD 2012 (xQCD)

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Bazavov, Alexei; Liu, Keh-Fei

    2013-04-01

    The Extreme QCD 2012 conference, held at the George Washington University in August 2012, celebrated the 10th event in the series. It has been held annually since 2003 at different locations: San Carlos (2011), Bad Honnef (2010), Seoul (2009), Raleigh (2008), Rome (2007), Brookhaven (2006), Swansea (2005), Argonne (2004), and Nara (2003). As usual, it was a very productive and inspiring meeting that brought together experts in the field of finite-temperature QCD, both theoretical and experimental. On the experimental side, we heard about recent results from major experiments, such as PHENIX and STAR at Brookhaven National Laboratory, ALICE and CMS at CERN, and also about the constraints on the QCD phase diagram coming from astronomical observations of one of the largest laboratories one can imagine, neutron stars. The theoretical contributions covered a wide range of topics, including QCD thermodynamics at zero and finite chemical potential, new ideas to overcome the sign problem in the latter case, fluctuations of conserved charges and how they allow one to connect calculations in lattice QCD with experimentally measured quantities, finite-temperature behavior of theories with many flavors of fermions, properties and the fate of heavy quarkonium states in the quark-gluon plasma, and many others. The participants took the time to write up and revise their contributions and submit them for publication in these proceedings. Thanks to their efforts, we now have a good record of the ideas presented and discussed during the workshop. We hope that this will serve both as a reminder and as a reference for the participants and for other researchers interested in the physics of nuclear matter at high temperatures and densities. To preserve the atmosphere of the event the contributions are ordered in the same way as the talks at the conference. We are honored to have helped organize the 10th meeting in this series, a milestone that reflects the lasting interest in this

  17. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension-reduction and classification algorithm based on the nonlinear kernel trick, which can be used to preprocess high-dimensional, complex biological data before classification tasks such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter remains a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that, for suitable kernel parameters, the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized. Experiments with various standard protein subcellular localization data sets show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter has a great impact on efficiency, and the proposed method produces an optimum parameter that makes the new algorithm perform as effectively as traditional ones while reducing computational time and thus improving efficiency.
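    The scale parameter that the paper's selection method tunes enters the Gaussian kernel as k(x, y) = exp(-||x - y||^2 / (2 sigma^2)). A minimal NumPy sketch (function and variable names are illustrative, not taken from the paper) shows how sigma controls the similarity structure that KDA then operates on:

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, sigma):
    """Pairwise Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma**2))

# A smaller sigma makes the kernel more "local": off-diagonal similarities shrink.
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
K_wide = gaussian_kernel_matrix(X, X, sigma=2.0)
K_narrow = gaussian_kernel_matrix(X, X, sigma=0.2)
```

    In KDA this matrix replaces the inner products of linear discriminant analysis; the paper's contribution is choosing sigma automatically from reconstruction errors of edge versus interior samples, rather than by grid search.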

  18. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  19. Two-color QCD at high density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boz, Tamer; Skullerud, Jon-Ivar; Centre for the Subatomic Structure of Matter, Adelaide University, Adelaide, SA 5005

    2016-01-22

    QCD at high chemical potential has interesting properties such as deconfinement of quarks. Two-color QCD, which enables numerical simulations on the lattice, constitutes a laboratory for studying QCD at high chemical potential. Among the interesting properties of two-color QCD at high density is diquark condensation, for which we present recent results obtained on a finer lattice than in previous studies. The quark propagator in two-color QCD at non-zero chemical potential is referred to as the Gor’kov propagator. We express the Gor’kov propagator in terms of form factors and present recent lattice simulation results.

  20. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were studied extensively, including qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins, with a view to using them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels owing to low food consumption caused by their bitterness; there was, however, no loss in weight. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be of higher biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be of higher biological value than both the Hamawy and Amar kernels; the Net Protein Ratio values for the latter two were nearly equal.

  1. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  2. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  3. Forward and small-x QCD physics results from CMS experiment at LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerci, Deniz Sunar, E-mail: deniz.sunar.cerci@cern.ch

    2016-03-25

    The Compact Muon Solenoid (CMS) is one of the two large, multi-purpose experiments at the Large Hadron Collider (LHC) at CERN. During Run I a large pp collision dataset was collected, which the CMS collaboration has explored in a broad range of measurements. Forward and small-x quantum chromodynamics (QCD) physics measurements with the CMS experiment cover a wide range of physics subjects. Highlights are presented in terms of tests of very low-x QCD, underlying-event and multiple-interaction characteristics, photon-mediated processes, jets with large rapidity separation at high pseudo-rapidities, and the inelastic proton-proton cross section dominated by diffractive interactions. Results are compared to Monte Carlo (MC) models with different parameter tunes for the description of the underlying event and to perturbative QCD calculations. The prominent role of multi-parton interactions has been confirmed in the semihard sector, but no clear deviation from standard DGLAP parton evolution due to BFKL effects has been observed. An outlook on the prospects at 13 TeV is given.

  4. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  5. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric, positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, in contrast to the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big-data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernels, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free, open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics online.
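    The properties the abstract quotes (symmetric, positive, self-similarity exactly 1.0) can be reproduced by a schematic compression-based kernel: run a basic LZW pass over each sequence, count the emitted code words, and take the cosine similarity of the count vectors. This is a hedged sketch of the general idea, not the authors' exact formulation from the paper:

```python
import math
from collections import Counter

def lzw_codewords(seq):
    """Code words emitted by one pass of a basic LZW compressor over seq."""
    dictionary = set(seq)          # seed with the single characters of seq
    words, w = [], ""
    for c in seq:
        wc = w + c
        if wc in dictionary:
            w = wc                 # extend the current phrase
        else:
            words.append(w)        # emit the longest known phrase
            dictionary.add(wc)     # learn the new phrase
            w = c
    if w:
        words.append(w)
    return words

def lzw_kernel(s, t):
    """Cosine similarity between LZW code-word count vectors: symmetric,
    non-negative, and exactly 1.0 for self-similarity."""
    a, b = Counter(lzw_codewords(s)), Counter(lzw_codewords(t))
    dot = sum(a[w] * b[w] for w in a)
    return dot / (math.sqrt(sum(v * v for v in a.values()))
                  * math.sqrt(sum(v * v for v in b.values())))
```

    Because the kernel is a normalized inner product over explicit feature counts, it is positive semi-definite by construction and can be fed directly to an SVM, which is the practical advantage over NCD that the abstract emphasizes.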

  6. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an EM algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
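    The core idea of the "partial map" — invert the blur only at Fourier entries where the estimated kernel is reliable, and leave the rest untouched — can be illustrated with a toy 1-D sketch. Here a simple magnitude threshold stands in for the paper's reliability detection, and the function name is illustrative:

```python
import numpy as np

def masked_deconvolve(blurred, kernel_est, threshold=0.1):
    """Toy 'partial' deconvolution: invert the blur only at frequencies where
    the estimated kernel's Fourier magnitude exceeds a reliability threshold."""
    n = blurred.size
    B = np.fft.fft(blurred)
    K = np.fft.fft(kernel_est, n=n)
    trusted = np.abs(K) > threshold          # crude stand-in for the partial map
    K_safe = np.where(trusted, K, 1.0)       # avoid division by near-zero entries
    X = np.where(trusted, B / K_safe, B)     # untrusted frequencies left as-is
    return np.real(np.fft.ifft(X))
```

    Plain inverse filtering divides by K everywhere and blows up wherever the (mis)estimated kernel spectrum is small; masking those frequencies is what suppresses the ringing artifacts the abstract describes.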

  7. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  8. Nucleon-nucleon interactions via Lattice QCD: Methodology. HAL QCD approach to extract hadronic interactions in lattice QCD

    NASA Astrophysics Data System (ADS)

    Aoki, Sinya

    2013-07-01

    We review the potential method in lattice QCD, which has recently been proposed to extract nucleon-nucleon interactions via numerical simulations. We focus on the methodology of this approach, emphasizing the strategy of the potential method, the theoretical foundation behind it, and the special numerical techniques it requires. We compare the potential method with the standard finite-volume method in lattice QCD to make the pros and cons of the approach clear. We also present several numerical results for nucleon-nucleon potentials.

  9. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  10. Oecophylla longinoda (Hymenoptera: Formicidae) Lead to Increased Cashew Kernel Size and Kernel Quality.

    PubMed

    Anato, F M; Sinzogan, A A C; Offenberg, J; Adandonon, A; Wargui, R B; Deguenon, J M; Ayelo, P M; Vayssières, J-F; Kossou, D K

    2017-06-01

    Weaver ants, Oecophylla spp., are known to positively affect cashew, Anacardium occidentale L., raw nut yield, but their effects on the kernels have not been reported. We compared nut size and the proportion of marketable kernels between raw nuts collected from trees with and without ants. Raw nuts collected from trees with weaver ants were 2.9% larger than nuts from control trees (i.e., without weaver ants), leading to a 14% higher proportion of marketable kernels. On trees with ants, the kernel:raw-nut ratio from nuts damaged by formic acid was 4.8% lower than that of nondamaged nuts from the same trees. Weaver ants thus provided three benefits to cashew production: higher yields, larger nuts, and greater proportions of marketable kernel mass. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  12. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection and develop two approximate kernel selection algorithms that exploit the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. We then prove an approximation error bound that measures the effect on the hypothesis of approximating kernel matrices by multilevel circulant matrices, and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
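    The computational virtue of (multilevel) circulant matrices that these algorithms exploit is that circulant matrix-vector products diagonalize under the discrete Fourier transform, reducing the cost from O(n^2) to O(n log n). A one-level sketch of that fact:

```python
import numpy as np

def circulant_matvec(first_col, x):
    """Multiply the circulant matrix whose first column is first_col by x.
    Circulant multiplication is circular convolution, which the DFT
    diagonalizes: C x = ifft(fft(c) * fft(x)), costing O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))
```

    Replacing a dense kernel matrix by a (multilevel) circulant approximation therefore makes every matrix-vector product in the selection criterion cheap, which is where the quasi-linear complexity quoted in the abstract comes from.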

  13. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    PubMed

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  15. Extension of the HAL QCD approach to inelastic and multi-particle scatterings in lattice QCD

    NASA Astrophysics Data System (ADS)

    Aoki, S.

    We extend the HAL QCD approach, with which potentials between two hadrons can be obtained in QCD at energies below inelastic thresholds, to inelastic and multi-particle scatterings. We first derive the asymptotic behavior of the Nambu-Bethe-Salpeter (NBS) wave function at large space separations for systems with more than two particles, in terms of the on-shell $T$-matrix constrained by the unitarity of quantum field theories. We show that its asymptotic behavior contains the phase shifts and mixing angles of $n$-particle scatterings. This property is one of the essential ingredients of the HAL QCD scheme for defining a "potential" from the NBS wave function in quantum field theories such as QCD. We next construct energy-independent but non-local potentials above inelastic thresholds, in terms of these NBS wave functions. We demonstrate the existence of energy-independent coupled-channel potentials in a non-relativistic approximation, where the momenta of all particles are small compared with their masses. Combining these two results, we can employ the HAL QCD approach to investigate inelastic and multi-particle scatterings as well.

  16. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis as examples of successful kernel-based learning methods. We first give a short background on Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios, including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
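    Of the methods this survey covers, kernel principal component analysis is the most compact to sketch: the kernel trick replaces eigendecomposition of a covariance matrix with eigendecomposition of the doubly centered Gram matrix, so only pairwise kernel values are ever needed. An illustrative NumPy sketch (not code from the paper):

```python
import numpy as np

def kernel_pca_projections(K, n_components=2):
    """Project training points onto the top principal components in feature
    space, given only the Gram (kernel) matrix K -- the kernel trick."""
    n = K.shape[0]
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J        # double centering in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)     # ascending eigenvalue order
    idx = np.argsort(eigvals)[::-1][:n_components]
    eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
    return eigvecs * np.sqrt(np.maximum(eigvals, 0.0))
```

    With a linear kernel K = X X^T this reduces to ordinary PCA; swapping in a Gaussian or polynomial kernel yields the nonlinear feature-space variant, which is exactly the substitution the paper's framework formalizes.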

  17. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.

  18. Assessing the role of the Kelvin-Helmholtz instability at the QCD cosmological transition

    NASA Astrophysics Data System (ADS)

    Mourão Roque, V. R. C.; Lugones, G.

    2018-03-01

    We performed numerical simulations with the PLUTO code in order to analyze the non-linear behavior of the Kelvin-Helmholtz instability in non-magnetized relativistic fluids. The relevance of the instability at the cosmological QCD phase transition was explored using an equation of state based on lattice QCD results with the addition of leptons. The results of the simulations were compared with the theoretical predictions of the linearized theory. For small Mach numbers up to Ms ~ 0.1 we find that both results are in good agreement. However, for higher Mach numbers, non-linear effects are significant. In particular, many initial conditions that look stable according to the linear analysis are shown to be unstable according to the full calculation. Since according to lattice calculations the cosmological QCD transition is a smooth crossover, violent fluid motions are not expected. Thus, in order to assess the role of the Kelvin-Helmholtz instability at the QCD epoch, we focus on simulations with low shear velocity and use monochromatic as well as random perturbations to trigger the instability. We find that the Kelvin-Helmholtz instability can strongly amplify turbulence in the primordial plasma and as a consequence it may increase the amount of primordial gravitational radiation. Such turbulence may be relevant for the evolution of the Universe at later stages and may have an impact in the stochastic gravitational wave background.

  19. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. This indicates that some of the sucrose taken up by the cob tissue was cleaved to fructose and glucose in the cob. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose. These results indicate that starch synthesis in the endosperm is strongly inhibited in kernels induced to abort by high temperature even though there is an adequate supply of sugar. PMID:16664847

  20. Baryon interactions in lattice QCD: the direct method vs. the HAL QCD potential method

    NASA Astrophysics Data System (ADS)

    Iritani, T.; HAL QCD Collaboration

    We make a detailed comparison between the direct method and the HAL QCD potential method for baryon-baryon interactions, taking the $\Xi\Xi$ system at $m_\pi = 0.51$ GeV in 2+1 flavor QCD and using both smeared and wall quark sources. The energy shift $\Delta E_\mathrm{eff}(t)$ in the direct method shows a strong dependence on the choice of quark source operators, which means that the results obtained with at least one of the sources (or both) must be unreliable. The time-dependent HAL QCD method, on the other hand, gives a source-independent $\Xi\Xi$ potential, thanks to the derivative expansion of the potential, which absorbs the source dependence into the next-to-leading-order correction. The HAL QCD potential predicts the absence of a bound state in the $\Xi\Xi$($^1$S$_0$) channel at $m_\pi = 0.51$ GeV, which is also confirmed by the volume dependence of the finite-volume energy derived from the potential. We also demonstrate that the origin of the fake plateau in the effective energy shift $\Delta E_\mathrm{eff}(t)$ at $t \sim 1$ fm can be clarified by a few low-lying eigenfunctions and eigenvalues on the finite volume derived from the HAL QCD potential, which implies that ground-state saturation of $\Xi\Xi$($^1$S$_0$) requires $t \sim 10$ fm in the direct method for the smeared source on a $(4.3\ \mathrm{fm})^3$ lattice, while the HAL QCD method does not suffer from such a problem.

  1. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  2. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the most relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
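
    The class-conditional density view of kernel discriminant analysis described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the bandwidth, priors, function names, and toy data are all assumptions, and it uses the Epanechnikov kernel, one of the density kernels the abstract lists.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) for |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kde(x, samples, h):
    """Kernel density estimate at points x from 1-D samples, bandwidth h."""
    u = (x[:, None] - samples[None, :]) / h
    return epanechnikov(u).mean(axis=1) / h

def kernel_discriminant(x, good, bad, prior_good=0.5):
    """Assign 1 (creditworthy) where the 'good' class posterior dominates."""
    p_good = prior_good * kde(x, good, h=0.5)
    p_bad = (1.0 - prior_good) * kde(x, bad, h=0.5)
    return (p_good > p_bad).astype(int)

# toy credit scores: 'good' applicants score higher on average (assumption)
rng = np.random.default_rng(0)
good = rng.normal(1.0, 0.7, 200)
bad = rng.normal(-1.0, 0.7, 200)
print(kernel_discriminant(np.array([1.5, -1.5]), good, bad))  # -> [1 0]
```

    The same construction generalizes to multivariate applicant features by replacing the 1-D kernel with a product kernel.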

  3. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database, and the Banana dataset validate the effectiveness of the proposed method.
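
    The recursive construction described in the abstract, a nonlinear activation applied to a combination of elementary kernels, can be sketched as follows. This is a hedged toy version, not the authors' networks: the weights are fixed rather than learned, and exp is chosen as the activation because its nonnegative power series keeps the resulting kernel positive semi-definite (by the Schur product theorem).

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2):
    return (X @ Y.T + 1.0) ** degree

def deep_kernel(X, Y, w=(0.6, 0.4)):
    # layer 1: convex combination of elementary kernels (both PSD)
    K1 = w[0] * rbf(X, Y) + w[1] * poly(X, Y)
    # layer 2: nonlinear activation; exp has a nonnegative power series,
    # so the entrywise exp of a PSD kernel is again PSD
    return np.exp(K1)

rng = np.random.default_rng(1)
X = 0.5 * rng.normal(size=(30, 4))
K = deep_kernel(X, X)
print(np.linalg.eigvalsh(K).min() >= -1e-8 * np.linalg.eigvalsh(K).max())  # -> True
```

    Learning the weights, as in the paper's supervised or semi-supervised frameworks, would replace the fixed tuple `w` with parameters optimized against a classification objective.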

  4. The Emergence of Hadrons from QCD Color

    NASA Astrophysics Data System (ADS)

    Brooks, William; Color Dynamics in Cold Matter (CDCM) Collaboration

    2015-10-01

    The formation of hadrons from energetic quarks, the dynamical enforcement of QCD confinement, is not well understood at a fundamental level. In Deep Inelastic Scattering, modifications of the distributions of identified hadrons emerging from nuclei of different sizes reveal a rich variety of spatial and temporal characteristics of the hadronization process, including its dependence on spin, flavor, energy, and hadron mass and structure. The EIC will feature a wide range of kinematics, allowing a complete investigation of medium-induced gluon bremsstrahlung by the propagating quarks, leading to partonic energy loss. This fundamental process, which is also at the heart of jet quenching in heavy ion collisions, can be studied for light and heavy quarks at the EIC through observables quantifying hadron ``attenuation'' for a variety of hadron species. Transverse momentum broadening of hadrons, which is sensitive to the nuclear gluonic field, will also be accessible, and can be used to test our understanding from pQCD of how this quantity evolves with pathlength, as well as its connection to partonic energy loss. The evolution of the forming hadrons in the medium will shed new light on the dynamical origins of the forces between hadrons, and thus ultimately on the nuclear force. Supported by the Comision Nacional de Investigacion Cientifica y Tecnologica (CONICYT) of Chile.

  5. Aligning Biomolecular Networks Using Modular Graph Kernels

    NASA Astrophysics Data System (ADS)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  6. Some New/Old Approaches to QCD

    DOE R&D Accomplishments Database

    Gross, D. J.

    1992-11-01

    In this lecture I shall discuss some recent attempts to revive some old ideas to address the problem of solving QCD. I believe that it is timely to return to this problem, which has been woefully neglected for the last decade. QCD is a permanent part of the theoretical landscape, and eventually we will have to develop analytic tools for dealing with the theory in the infra-red. Lattice techniques are useful, but they have not yet lived up to their promise. Even if one manages to derive the hadronic spectrum numerically, to an accuracy of 10% or even 1%, we will not be truly satisfied unless we have some analytic understanding of the results. Also, lattice Monte-Carlo methods can only be used to answer a small set of questions. Many issues of great conceptual and practical interest, in particular the calculation of scattering amplitudes, are thus far beyond lattice control. Any progress in controlling QCD in an explicit, analytic fashion would be of great conceptual value. It would also be of great practical aid to experimentalists, who must use rather ad-hoc and primitive models of QCD scattering amplitudes to estimate the backgrounds to interesting new physics. I will discuss an attempt to derive a string representation of QCD and a revival of the large N approach to QCD. Both of these ideas have a long history; many theorist-years have been devoted to their pursuit, so far with little success. I believe that it is time to try again. In part this is because of the progress in the last few years in string theory. Our increased understanding of string theory should make the attempt to discover a stringy representation of QCD easier, and the methods explored in matrix models might be employed to study the large N limit of QCD.

  7. Graph Kernels for Molecular Similarity.

    PubMed

    Rupp, Matthias; Schneider, Gisbert

    2010-04-12

    Molecular similarity measures are important for many cheminformatics applications like ligand-based virtual screening and quantitative structure-property relationships. Graph kernels are formal similarity measures defined directly on graphs, such as the (annotated) molecular structure graph. Graph kernels are positive semi-definite functions, i.e., they correspond to inner products. This property makes them suitable for use with kernel-based machine learning algorithms such as support vector machines and Gaussian processes. We review the major types of kernels between graphs (based on random walks, subgraphs, and optimal assignments, respectively), and discuss their advantages, limitations, and successful applications in cheminformatics.
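
    As a concrete instance of the random-walk family mentioned in the review, the geometric random-walk kernel counts matching walks in the direct product of two graphs. The sketch below is illustrative (unlabeled graphs, a hypothetical decay parameter `lam`); real cheminformatics kernels additionally match vertex and edge labels such as atom and bond types.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk graph kernel: sums lam^k * (number of matching
    walks of length k) via the direct (tensor) product graph.  The series
    converges when lam is below 1 / spectral_radius(A1 kron A2)."""
    Ax = np.kron(A1, A2)                        # adjacency of the product graph
    n = Ax.shape[0]
    M = np.linalg.inv(np.eye(n) - lam * Ax)     # = sum_k (lam * Ax)^k
    return float(M.sum())

# toy 'molecular' graphs: a 3-ring and a 3-atom chain (unlabeled)
ring3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
chain3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)

print(round(random_walk_kernel(ring3, chain3), 3))
```

    Because the kernel is an inner product in a walk-count feature space, it is symmetric in its arguments and can be fed directly to an SVM or Gaussian process.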

  8. Kenneth Wilson and Lattice QCD

    NASA Astrophysics Data System (ADS)

    Ukawa, Akira

    2015-09-01

    We discuss the physics and computation of lattice QCD, a space-time lattice formulation of quantum chromodynamics, and Kenneth Wilson's seminal role in its development. We start with the fundamental issue of confinement of quarks in the theory of the strong interactions, and discuss how lattice QCD provides a framework for understanding this phenomenon. A conceptual issue with lattice QCD is a conflict of space-time lattice with chiral symmetry of quarks. We discuss how this problem is resolved. Since lattice QCD is a non-linear quantum dynamical system with infinite degrees of freedom, quantities which are analytically calculable are limited. On the other hand, it provides an ideal case of massively parallel numerical computations. We review the long and distinguished history of parallel-architecture supercomputers designed and built for lattice QCD. We discuss algorithmic developments, in particular the difficulties posed by the fermionic nature of quarks, and their resolution. The triad of efforts toward better understanding of physics, better algorithms, and more powerful supercomputers has produced major breakthroughs in our understanding of the strong interactions. We review the salient results of this effort in understanding the hadron spectrum, the Cabibbo-Kobayashi-Maskawa matrix elements and CP violation, and quark-gluon plasma at high temperatures. We conclude with a brief summary and a future perspective.

  9. Conformal Aspects of QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, S

    2003-11-19

    Theoretical and phenomenological evidence is now accumulating that the QCD coupling becomes constant at small virtuality; i.e., α_s(Q²) develops an infrared fixed point, in contradiction to the usual assumption of singular growth in the infrared. For example, the hadronic decays of the τ lepton can be used to determine the effective charge α_τ(m_τ′²) for a hypothetical τ lepton with mass in the range 0 < m_τ′ < m_τ. The τ decay data at low mass scales indicate that the effective charge freezes at a value of s = m_τ′² of order 1 GeV² with a magnitude α_τ ≈ 0.9 ± 0.1. The near-constant behavior of effective couplings suggests that QCD can be approximated as a conformal theory even at relatively small momentum transfer, and explains why there are no significant running-coupling corrections to quark counting rules for exclusive processes. The AdS/CFT correspondence of large-N_c supergravity theory in higher-dimensional anti-de Sitter space with supersymmetric QCD in 4-dimensional space-time also has interesting implications for hadron phenomenology in the conformal limit, including an all-orders demonstration of counting rules for exclusive processes and light-front wavefunctions. The utility of light-front quantization and light-front Fock wavefunctions for analyzing nonperturbative QCD and representing the dynamics of QCD bound states is also discussed.

  10. Hadronic and nuclear interactions in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Despite the evidence that QCD - or something close to it - gives a correct description of the structure of hadrons and their interactions, it seems paradoxical that the theory has thus far had very little impact in nuclear physics. One reason for this is that the application of QCD to distances larger than 1 fm involves coherent, non-perturbative dynamics which is beyond present calculational techniques. For example, in QCD the nuclear force can evidently be ascribed to quark interchange and gluon exchange processes. These, however, are as complicated to analyze from a fundamental point of view as is the analogous covalent bond in molecular physics. Since a detailed description of quark-quark interactions and the structure of hadronic wavefunctions is not yet well-understood in QCD, it is evident that a quantitative first-principles description of the nuclear force will require a great deal of theoretical effort. Another reason for the limited impact of QCD in nuclear physics has been the conventional assumption that nuclear interactions can for the most part be analyzed in terms of an effective meson-nucleon field theory or potential model, in isolation from the details of the short-distance quark and gluon structure of hadrons. These lectures argue that this view is untenable: in fact, there is no correspondence principle which yields traditional nuclear physics as a rigorous large-distance or non-relativistic limit of QCD dynamics. On the other hand, the distinctions between standard nuclear physics dynamics and QCD at nuclear dimensions are extremely interesting and illuminating for both particle and nuclear physics.

  11. The current matrix elements from HAL QCD method

    NASA Astrophysics Data System (ADS)

    Watanabe, Kai; Ishii, Noriyoshi

    2018-03-01

    The HAL QCD method is a method to construct a potential (the HAL QCD potential) that reproduces the NN scattering phase shifts faithfully to QCD. The HAL QCD potential is obtained from QCD by eliminating the degrees of freedom of quarks and gluons and keeping only two particular hadrons. Therefore, in the effective quantum mechanics of two nucleons defined by the HAL QCD potential, the conserved current consists not only of the nucleon current but also of an extra current originating from the potential (the two-body current). Though the form of the two-body current is closely related to the potential, it is not straightforward to extract the former from the latter. In this work, we derive the current matrix element formula in the quantum mechanics defined by the HAL QCD potential. As a first step, we focus on the non-relativistic case. To give an explicit example, we consider a second-quantized non-relativistic two-channel coupling model, which we refer to as the original model. From the original model, the HAL QCD potential for the open channel is constructed by eliminating the closed channel in the elastic two-particle scattering region. The current matrix element formula is derived by demanding that the effective quantum mechanics defined by the HAL QCD potential respond to the external field in the same way as the original two-channel coupling model.

  12. Kinetic Rate Kernels via Hierarchical Liouville-Space Projection Operator Approach.

    PubMed

    Zhang, Hou-Dao; Yan, YiJing

    2016-05-19

    Kinetic rate kernels in general multisite systems are formulated on the basis of a nonperturbative quantum dissipation theory, the hierarchical equations of motion (HEOM) formalism, together with the Nakajima-Zwanzig projection operator technique. The present approach exploits the HEOM-space linear algebra. The quantum non-Markovian site-to-site transfer rate can be faithfully evaluated via projected HEOM dynamics. The developed method is exact, as evidenced by comparison to the direct HEOM evaluation results on the population evolution.

  13. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
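
    The Gaussian-kernel continuization step that distinguishes kernel equating from percentile-rank equating can be sketched as follows. This follows the commonly cited von Davier, Holland, and Thayer (2004) form; the bandwidth h and the toy score distribution are assumptions. The shrinkage factor a keeps the mean and variance of the continuized distribution equal to those of the discrete one.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def continuize(x, scores, probs, h):
    """Gaussian-kernel continuization of a discrete score distribution:
    returns the continuized CDF F_h evaluated at x."""
    mu = float(np.dot(probs, scores))
    var = float(np.dot(probs, (scores - mu) ** 2))
    a = sqrt(var / (var + h ** 2))          # moment-preserving shrinkage
    return sum(p * Phi((x - a * s - (1.0 - a) * mu) / (a * h))
               for s, p in zip(scores, probs))

scores = np.arange(5)                        # possible test scores 0..4
probs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # observed score probabilities
print(round(continuize(2.0, scores, probs, h=0.6), 3))  # -> 0.5 (symmetric distribution)
```

    Equating then proceeds by composing one test's continuized CDF with the inverse of the other's; varying h recovers the different kernels compared in the report.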

  14. Scheme variations of the QCD coupling

    NASA Astrophysics Data System (ADS)

    Boito, Diogo; Jamin, Matthias; Miravitllas, Ramon

    2017-03-01

    The Quantum Chromodynamics (QCD) coupling αs is a central parameter in the Standard Model of particle physics. However, it depends on theoretical conventions related to renormalisation and hence is not an observable quantity. In order to capture this dependence in a transparent way, a novel definition of the QCD coupling, denoted by â, is introduced, whose running is explicitly renormalisation scheme invariant. The remaining renormalisation scheme dependence is related to transformations of the QCD scale Λ, and can be parametrised by a single parameter C. Hence, we call â the C-scheme coupling. The dependence on C can be exploited to study and improve perturbative predictions of physical observables. This is demonstrated for the QCD Adler function and hadronic decays of the τ lepton.

  15. Novel QCD Phenomenology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins

    2011-08-12

    I review a number of topics where conventional wisdom in hadron physics has been challenged. For example, hadrons can be produced at large transverse momentum directly within a hard higher-twist QCD subprocess, rather than from jet fragmentation. Such 'direct' processes can explain the deviations from perturbative QCD predictions in measurements of inclusive hadron cross sections at fixed x_T = 2p_T/√s, as well as the 'baryon anomaly', the anomalously large proton-to-pion ratio seen in high-centrality heavy ion collisions. Initial-state and final-state interactions of the struck quark, the soft-gluon rescattering associated with its Wilson line, lead to Bjorken-scaling single-spin asymmetries, diffractive deep inelastic scattering, the breakdown of the Lam-Tung relation in Drell-Yan reactions, as well as nuclear shadowing and antishadowing. The Gribov-Glauber theory predicts that antishadowing of nuclear structure functions is not universal, but instead depends on the flavor quantum numbers of each quark and antiquark, thus explaining the anomalous nuclear dependence measured in deep-inelastic neutrino scattering. Since shadowing and antishadowing arise from the physics of leading-twist diffractive deep inelastic scattering, one cannot attribute such phenomena to the structure of the nucleus itself. It is thus important to distinguish 'static' structure functions, the probability distributions computed from the square of the target light-front wavefunctions, from 'dynamical' structure functions, which include the effects of the final-state rescattering of the struck quark. The importance of the J = 0 photon-quark QCD contact interaction in deeply virtual Compton scattering is also emphasized. The scheme-independent BLM method for setting the renormalization scale is discussed. Eliminating the renormalization-scale ambiguity greatly improves the precision of QCD predictions and increases the sensitivity of searches for new physics.

  16. QCD and Light-Front Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.; de Teramond, Guy F.; /SLAC /Southern Denmark U., CP3-Origins /Costa Rica U.

    2011-01-10

    AdS/QCD, the correspondence between theories in a dilaton-modified five-dimensional anti-de Sitter space and confining field theories in physical space-time, provides a remarkable semiclassical model for hadron physics. Light-front holography allows hadronic amplitudes in the AdS fifth dimension to be mapped to frame-independent light-front wavefunctions of hadrons in physical space-time. The result is a single-variable light-front Schroedinger equation which determines the eigenspectrum and the light-front wavefunctions of hadrons for general spin and orbital angular momentum. The coordinate z in AdS space is uniquely identified with a Lorentz-invariant coordinate ζ which measures the separation of the constituents within a hadron at equal light-front time and determines the off-shell dynamics of the bound-state wavefunctions as a function of the invariant mass of the constituents. The hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. Higher Fock states with extra quark-antiquark pairs also arise. The soft-wall model also predicts the form of the nonperturbative effective coupling and its β-function. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian or by applying the Lippmann-Schwinger method to systematically include QCD interaction terms. Some novel features of QCD are discussed, including the consequences of confinement for quark and gluon condensates. A method for computing the hadronization of quark and gluon jets at the amplitude level is outlined.

  17. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  18. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  19. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  20. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  1. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  2. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  3. Transverse Momentum-Dependent Parton Distributions from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Engelhardt, M.; Musch, B.; Hägler, P.; Negele, J.; Schäfer, A.

    Starting from a definition of transverse momentum-dependent parton distributions for semi-inclusive deep inelastic scattering and the Drell-Yan process, given in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection, a scheme to determine such observables in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are presented, focusing in particular on their dependence on the staple extent and the Collins-Soper evolution parameter.

  4. Transverse Momentum-Dependent Parton Distributions From Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Engelhardt, Bernhard Musch, Philipp Haegler, Andreas Schaefer

    Starting from a definition of transverse momentum-dependent parton distributions for semi-inclusive deep inelastic scattering and the Drell-Yan process, given in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection, a scheme to determine such observables in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are presented, focusing in particular on their dependence on the staple extent and the Collins-Soper evolution parameter.

  5. Scattering processes and resonances from lattice QCD

    NASA Astrophysics Data System (ADS)

    Briceño, Raúl A.; Dudek, Jozef J.; Young, Ross D.

    2018-04-01

    The vast majority of hadrons observed in nature are not stable under the strong interaction; rather they are resonances whose existence is deduced from enhancements in the energy dependence of scattering amplitudes. The study of hadron resonances offers a window into the workings of quantum chromodynamics (QCD) in the low-energy nonperturbative region, and in addition many probes of the limits of the electroweak sector of the standard model consider processes which feature hadron resonances. From a theoretical standpoint, this is a challenging field: the same dynamics that binds quarks and gluons into hadron resonances also controls their decay into lighter hadrons, so a complete approach to QCD is required. Presently, lattice QCD is the only available tool that provides the required nonperturbative evaluation of hadron observables. This article reviews progress in the study of few-hadron reactions in which resonances and bound states appear using lattice QCD techniques. The leading approach is described that takes advantage of the periodic finite spatial volume used in lattice QCD calculations to extract scattering amplitudes from the discrete spectrum of QCD eigenstates in a box. An explanation is given of how from explicit lattice QCD calculations one can rigorously garner information about a variety of resonance properties, including their masses, widths, decay couplings, and form factors. The challenges which currently limit the field are discussed along with the steps being taken to resolve them.

  6. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
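
    A generic version of the out-of-sample idea, extending a learned kernel matrix to new points by regression against a base kernel, can be sketched as below. This is an illustrative stand-in, not the paper's hyper-RKHS construction: the base RBF kernel, the ridge parameter, and the toy data are all assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def extend_kernel(K_learned, X_train, X_new, gamma=1.0, reg=1e-3):
    """Out-of-sample extension by regression: fit each column of the learned
    kernel matrix with kernel ridge regression over a base RBF kernel on the
    training points, then evaluate the fitted functions at the new points."""
    K_base = rbf(X_train, X_train, gamma)
    n = K_base.shape[0]
    coef = np.linalg.solve(K_base + reg * np.eye(n), K_learned)
    return rbf(X_new, X_train, gamma) @ coef    # (n_new, n_train) extension block

rng = np.random.default_rng(2)
X_train = rng.normal(size=(40, 3))
K_learned = rbf(X_train, X_train, gamma=0.7)    # stand-in for a nonparametric learned kernel
X_new = rng.normal(size=(5, 3))
K_ext = extend_kernel(K_learned, X_train, X_new, gamma=0.7)
print(K_ext.shape)  # -> (5, 40)
```

    The extended block supplies the train-versus-new kernel values an inductive predictor needs, without re-solving the nonparametric kernel learning problem for each new point.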

  7. Calculation of TMD Evolution for Transverse Single Spin Asymmetry Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mert Aybat, Ted Rogers, Alexey Prokudin

    In this letter, we show that it is necessary to include the full treatment of QCD evolution of transverse-momentum-dependent (TMD) parton densities to explain discrepancies between HERMES data and recent COMPASS data on a proton target for the Sivers transverse single-spin asymmetry in semi-inclusive deep inelastic scattering (SIDIS). Calculations based on existing fits to TMDs in SIDIS, including evolution within the Collins-Soper-Sterman formalism with properly defined TMD PDFs, are shown to provide a good explanation for the discrepancy. The non-perturbative input needed for the implementation of evolution is taken from earlier analyses of unpolarized Drell-Yan (DY) scattering at high energy. Its success in describing the Sivers function in SIDIS data at much lower energies is strong evidence in support of the unifying aspect of the QCD TMD-factorization formalism.

  8. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement offered by the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with a sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, with both medium and sharp kernels applied. Image noise and the stent diameter were investigated. Image noise was measured both in the background vessel and in the in-stent lumen as objective image evaluation; an image noise score and a stent score were used for subjective image evaluation. The CTCA images reconstructed with IRIS showed significant noise reduction compared with the FBP images in both the background vessel and the in-stent lumen (background noise decreased by approximately 25.4% ± 8.2% with the medium kernel, with significant reductions for the sharp kernel as well). Images reconstructed with the sharp kernel showed better visualization of the stent struts and in-stent lumen than those with the medium kernel. Iterative reconstruction in image space can effectively reduce image noise and improve image quality. The sharp-kernel images reconstructed with iterative reconstruction are considered the optimal images for observing coronary stents in this study.

  9. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
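
The random Fourier features approach mentioned in this abstract can be sketched in a few lines. The sketch below is illustrative (function names and parameters are our own, not the paper's): it builds an explicit map z(x) whose inner products approximate a Gaussian kernel, after which a linear ranker can stand in for kernel RankSVM without ever forming the kernel matrix.

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Approximate the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    with an explicit map z so that z(x) . z(y) ~ k(x, y)."""
    rng = np.random.default_rng(seed)
    # Bochner's theorem: frequencies are drawn from the kernel's
    # spectral density, here N(0, 2 * gamma * I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

A linear RankSVM trained on feature differences z(x_i) - z(x_j) then mimics the nonlinear ranker at linear-model cost.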

  10. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  11. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis, wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both choices depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pairwise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared with the best-suited kernel for a particular scenario, but has much greater power than poor kernel choices.
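
The "choosing a test is choosing a kernel" view can be made concrete with a toy version of the SKAT-style score statistic. This is a sketch under simplifying assumptions (intercept-only null model, weighted linear kernel; the names are ours), not the MK-SKAT implementation, and it omits the p-value computation for the quadratic form.

```python
import numpy as np

def skat_score(y, G, weights=None):
    """Kernel association score in the spirit of SKAT: compares pairwise
    trait similarity to genotype similarity through a kernel K = G W G^T.
    A larger Q suggests association; significance would come from the
    distribution of this quadratic form (not implemented here)."""
    n, p = G.shape
    if weights is None:
        weights = np.ones(p)
    resid = y - y.mean()                  # null model: intercept only
    K = (G * weights) @ G.T               # weighted linear kernel
    return float(resid @ K @ resid)
```

Swapping in a different `weights` vector, or a different construction of `K`, is exactly the kernel choice the abstract describes.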

  12. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America
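
In our notation, the construction described in this abstract amounts to swapping the Fourier kernel of the traditional Wigner function for a Laplace transform kernel; the paper's exact normalization and sign conventions may differ:

```latex
% Traditional Wigner function: real spatial-frequency variable u
W_f(x,u) = \int f\!\left(x+\tfrac{x'}{2}\right) f^{*}\!\left(x-\tfrac{x'}{2}\right)
           e^{-i 2\pi u x'}\, \mathrm{d}x'

% Laplace kernel Wigner function: complex "momentum" variable s,
% enabling representation of evanescent (exponentially decaying) fields
W_f^{L}(x,s) = \int f\!\left(x+\tfrac{x'}{2}\right) f^{*}\!\left(x-\tfrac{x'}{2}\right)
               e^{-s x'}\, \mathrm{d}x', \qquad s \in \mathbb{C}
```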

  13. Metabolic network prediction through pairwise rational kernels.

    PubMed

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes, which propagates error accumulation when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing, and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy

  14. Polyakov loop modeling for hot QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukushima, Kenji; Skokov, Vladimir

    Here, we review theoretical aspects of quantum chromodynamics (QCD) at finite temperature. The most important physical variable characterizing hot QCD is the Polyakov loop, which is an approximate order parameter for quark deconfinement in a hot gluonic medium. In addition to its role as an order parameter, the Polyakov loop has rich physical content in both the perturbative and non-perturbative sectors. This review covers a wide range of subjects associated with the Polyakov loop, from topological defects in hot QCD to model building with coupling to the Polyakov loop.

  15. Polyakov loop modeling for hot QCD

    DOE PAGES

    Fukushima, Kenji; Skokov, Vladimir

    2017-06-19

    Here, we review theoretical aspects of quantum chromodynamics (QCD) at finite temperature. The most important physical variable characterizing hot QCD is the Polyakov loop, which is an approximate order parameter for quark deconfinement in a hot gluonic medium. In addition to its role as an order parameter, the Polyakov loop has rich physical content in both the perturbative and non-perturbative sectors. This review covers a wide range of subjects associated with the Polyakov loop, from topological defects in hot QCD to model building with coupling to the Polyakov loop.

  16. Scattering processes and resonances from lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briceno, Raul A.; Dudek, Jozef J.; Young, Ross D.

    The vast majority of hadrons observed in nature are not stable under the strong interaction; rather they are resonances whose existence is deduced from enhancements in the energy dependence of scattering amplitudes. The study of hadron resonances offers a window into the workings of quantum chromodynamics (QCD) in the low-energy nonperturbative region, and in addition many probes of the limits of the electroweak sector of the standard model consider processes which feature hadron resonances. From a theoretical standpoint, this is a challenging field: the same dynamics that binds quarks and gluons into hadron resonances also controls their decay into lighter hadrons, so a complete approach to QCD is required. Presently, lattice QCD is the only available tool that provides the required nonperturbative evaluation of hadron observables. This paper reviews progress in the study of few-hadron reactions in which resonances and bound states appear using lattice QCD techniques. The leading approach is described that takes advantage of the periodic finite spatial volume used in lattice QCD calculations to extract scattering amplitudes from the discrete spectrum of QCD eigenstates in a box. An explanation is given of how from explicit lattice QCD calculations one can rigorously garner information about a variety of resonance properties, including their masses, widths, decay couplings, and form factors. Finally, the challenges which currently limit the field are discussed along with the steps being taken to resolve them.

  17. Scattering processes and resonances from lattice QCD

    DOE PAGES

    Briceno, Raul A.; Dudek, Jozef J.; Young, Ross D.

    2018-04-18

    The vast majority of hadrons observed in nature are not stable under the strong interaction; rather they are resonances whose existence is deduced from enhancements in the energy dependence of scattering amplitudes. The study of hadron resonances offers a window into the workings of quantum chromodynamics (QCD) in the low-energy nonperturbative region, and in addition many probes of the limits of the electroweak sector of the standard model consider processes which feature hadron resonances. From a theoretical standpoint, this is a challenging field: the same dynamics that binds quarks and gluons into hadron resonances also controls their decay into lighter hadrons, so a complete approach to QCD is required. Presently, lattice QCD is the only available tool that provides the required nonperturbative evaluation of hadron observables. This paper reviews progress in the study of few-hadron reactions in which resonances and bound states appear using lattice QCD techniques. The leading approach is described that takes advantage of the periodic finite spatial volume used in lattice QCD calculations to extract scattering amplitudes from the discrete spectrum of QCD eigenstates in a box. An explanation is given of how from explicit lattice QCD calculations one can rigorously garner information about a variety of resonance properties, including their masses, widths, decay couplings, and form factors. Finally, the challenges which currently limit the field are discussed along with the steps being taken to resolve them.

  18. Progress on Complex Langevin simulations of a finite density matrix model for QCD

    NASA Astrophysics Data System (ADS)

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus; Zafeiropoulos, Savvas

    2018-03-01

    We study the Stephanov model, which is an RMT model for QCD at finite density, using the Complex Langevin algorithm. A naive implementation of the algorithm shows convergence towards the phase-quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generates a substantial improvement on the complex Langevin evolution, and the final results still do not agree with the analytical predictions.

  19. Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks.

    PubMed

    Oh, S June; Joung, Je-Gun; Chang, Jeong-Ho; Zhang, Byoung-Tak

    2006-06-06

    information. This method may yield further information about biological evolution, such as the history of horizontal transfer of each gene, by studying the detailed structure of the phylogenetic tree constructed by the kernel-based method.

  20. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
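
The "linear function of the kernel matrix" idea can be illustrated with a minimal sketch, assuming the simplest ideal-kernel term λ·yyᵀ for labels y ∈ {-1, +1} (our simplification; the paper's formulation is more general). Adding this term pulls the kernel toward the ideal kernel yyᵀ, which can be measured through kernel-target alignment.

```python
import numpy as np

def kernel_alignment(K, y):
    """Cosine similarity between a kernel matrix K and the ideal kernel y y^T."""
    return (y @ K @ y) / (np.linalg.norm(K, "fro") * (y @ y))

def ideal_regularize(K, y, lam=1.0):
    """Add a label-derived term that is linear in the kernel matrix.
    Sketch of the idea only, not the paper's exact regularizer."""
    return K + lam * np.outer(y, y)
```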

  1. SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL

    PubMed Central

    Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan

    2013-01-01

    Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often use standard kernels. In this paper, we propose a positive definite kernel called the "structure kernel", which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) the global term, which measures the global visual similarity of two objects; 2) the part term, which measures the visual similarity of corresponding parts; 3) the spatial term, which measures the spatial similarity of the geometric configuration of parts. The contribution of this paper is to generalize the discriminant capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108

  2. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as when using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
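
A minimal sketch of the distance-constraint idea, assuming a Gaussian kernel (the function names and the trilateration shortcut are ours; the paper works with distances to neighbors of the denoised point in kernel PCA space): kernel values to anchor points are converted to input-space distances, which then pin down the pre-image by linear least squares, with no iterative optimization.

```python
import numpy as np

def preimage_from_distances(X, d2):
    """Locate a point from squared input-space distances d2 to anchors X
    (one anchor per row) by solving the linearized system
    2 (x_i - x_0)^T x = ||x_i||^2 - ||x_0||^2 - d2_i + d2_0."""
    A = 2.0 * (X[1:] - X[0])
    b = (X[1:] ** 2).sum(1) - (X[0] ** 2).sum() - d2[1:] + d2[0]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def gaussian_preimage(X, k, sigma):
    """For a Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
    kernel values k to the anchors invert to squared input distances,
    which feed the linear solver above."""
    d2 = -2.0 * sigma ** 2 * np.log(np.clip(k, 1e-12, None))
    return preimage_from_distances(X, d2)
```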

  3. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. 
In our evaluation of ASM for the PPI task, ASM

  4. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry with potentially devastating consequences to corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak shift was found to move more toward the longer wavelength in the blue region for the highly contaminated kernels and toward the shorter wavelengths for the clean kernels. Highly contaminated kernels were also found to have a lower fluorescence peak magnitude compared with the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g(-1) (parts per billion), were significantly different from each other at the 0.01 level of alpha. 
Classification accuracy under a two-class schema ranged from 0.84 to

  5. Adaptive kernel function using line transect sampling

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2018-04-01

    The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is adopted to compare the performance of the proposed estimators with the classical kernel estimators.
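
A sketch of the classical estimator being improved upon, assuming a Gaussian kernel with reflection at the x = 0 boundary (the specific adaptive kernel proposed in this talk is not reproduced here; names and constants are illustrative):

```python
import numpy as np

def f0_kernel_estimate(x, h):
    """Classical kernel estimate of f(0) from perpendicular distances x >= 0,
    using a Gaussian kernel with reflection at the boundary, which reduces
    to 2/(n h) * sum K(x_i / h) for a symmetric kernel K."""
    x = np.asarray(x, dtype=float)
    n = x.size
    K = np.exp(-0.5 * (x / h) ** 2) / np.sqrt(2.0 * np.pi)
    return 2.0 * K.sum() / (n * h)
```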

  6. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our works to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of data points in kernel space plus a constant. Thus, the k-means centers of data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian kernel and polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
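
The landmark idea can be sketched as follows, assuming an RBF kernel and a plain Lloyd's k-means run in input space (the paper's kernel k-means operates in feature space; all names here are illustrative). The Nyström approximation is K ≈ C W⁺ Cᵀ with C the kernel between data and landmarks and W the kernel among landmarks.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmeans_centers(X, m, iters=20, seed=0):
    """Plain Lloyd's algorithm; the m centers serve as Nystrom landmarks."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=m, replace=False)]
    for _ in range(iters):
        lbl = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(m):
            pts = X[lbl == j]
            if len(pts):
                C[j] = pts.mean(0)
    return C

def nystrom(X, landmarks, gamma=1.0):
    """Nystrom approximation K ~ C W^+ C^T, with C = k(X, L), W = k(L, L)."""
    C = rbf(X, landmarks, gamma)
    W = rbf(landmarks, landmarks, gamma)
    return C @ np.linalg.pinv(W) @ C.T
```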

  7. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  8. Robotic Intelligence Kernel: Driver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  9. Bell nozzle kernel analysis program

    NASA Technical Reports Server (NTRS)

    Elliot, J. J.; Stromstra, R. R.

    1969-01-01

    Bell Nozzle Kernel Analysis Program computes and analyzes the supersonic flowfield in the kernel, or initial expansion region, of a bell or conical nozzle. It analyzes both plane and axisymmetric geometries for specified gas properties, nozzle throat geometry, and input line.

  10. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  11. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance in the FMT reconstruction can improve FMT reconstruction efficiently. We have developed a kernel method to introduce the anatomical guidance into FMT robustly and easily. The kernel method is from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we have a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process. We convert the FMT reconstruction problem into the kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance can be obtained directly from the anatomical image and is included in the forward modeling. One of the advantages is that we do not need to segment the anatomical image for the targets and background.
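
A toy version of the forward-model modification described above (all matrices are synthetic; the kernel is built from made-up per-node "anatomical" feature vectors standing in for micro-CT intensities): the sensitivity matrix is multiplied by the kernel matrix, the kernel coefficient vector is solved for, and the concentration is recovered as x = K·alpha.

```python
import numpy as np

def anatomy_kernel(F, gamma=1.0):
    """Kernel matrix over per-node anatomical feature vectors F
    (n_nodes x n_features), e.g. intensities from a co-registered
    micro-CT patch around each finite element node."""
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_reconstruct(A, b, K):
    """Solve b = (A K) alpha in least squares, then map back: x = K alpha.
    The anatomical guidance enters the forward model only through K."""
    alpha, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    return K @ alpha
```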

  12. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    PubMed

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
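
The continuization step can be sketched with the Epanechnikov kernel this article introduces as an alternative. This toy version smooths a discrete score distribution directly and omits the mean- and variance-preserving rescaling used in operational kernel equating (names are ours):

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on |u| <= 1, zero outside.
    Its compact support is what limits leakage past the score boundaries."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def continuize(scores, probs, h, grid):
    """Continuize a discrete score distribution {scores, probs} by kernel
    smoothing with bandwidth h (simplified relative to kernel equating)."""
    u = (grid[:, None] - scores[None, :]) / h
    return (epanechnikov(u) * probs[None, :]).sum(1) / h
```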

  13. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall be...

  14. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  15. VINCIA for hadron colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, Nadine; Prestel, S.; Ritzmann, M.

    We present the first public implementation of antenna-based QCD initial- and final-state showers. The shower kernels are 2→3 antenna functions, which capture not only the collinear dynamics but also the leading soft (coherent) singularities of QCD matrix elements. We define the evolution measure to be inversely proportional to the leading poles; hence gluon emissions are evolved in a p⊥ measure inversely proportional to the eikonal, while processes that only contain a single pole (e.g., g → qq̄) are evolved in virtuality. Non-ordered emissions are allowed, suppressed by an additional power of 1/Q^2. Recoils and kinematics are governed by exact on-shell 2→3 phase-space factorisations. This first implementation is limited to massless QCD partons and colourless resonances. Tree-level matrix-element corrections are included for QCD up to O(α_s^4) (4 jets), and for Drell-Yan and Higgs production up to O(α_s^3) (V/H + 3 jets). Finally, the resulting algorithm has been made publicly available in Vincia 2.0.

  16. VINCIA for hadron colliders

    DOE PAGES

    Fischer, Nadine; Prestel, S.; Ritzmann, M.; ...

    2016-10-28

    We present the first public implementation of antenna-based QCD initial- and final-state showers. The shower kernels are 2→3 antenna functions, which capture not only the collinear dynamics but also the leading soft (coherent) singularities of QCD matrix elements. We define the evolution measure to be inversely proportional to the leading poles; hence gluon emissions are evolved in a p⊥ measure inversely proportional to the eikonal, while processes that only contain a single pole (e.g., g → qq̄) are evolved in virtuality. Non-ordered emissions are allowed, suppressed by an additional power of 1/Q^2. Recoils and kinematics are governed by exact on-shell 2→3 phase-space factorisations. This first implementation is limited to massless QCD partons and colourless resonances. Tree-level matrix-element corrections are included for QCD up to O(α_s^4) (4 jets), and for Drell-Yan and Higgs production up to O(α_s^3) (V/H + 3 jets). Finally, the resulting algorithm has been made publicly available in Vincia 2.0.

  17. Nucleon QCD sum rules in the instanton medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryskin, M. G.; Drukarev, E. G., E-mail: drukarev@pnpi.spb.ru; Sadovnikova, V. A.

    2015-09-15

    We try to find grounds for the standard nucleon QCD sum rules, based on a more detailed description of the QCD vacuum. We calculate the polarization operator of the nucleon current in the instanton medium. The medium (QCD vacuum) is assumed to be a composition of small-size instantons and some long-wave gluon fluctuations. We solve the corresponding QCD sum rule equations and demonstrate that there is a solution with a value of the nucleon mass close to the physical one if the fraction of the small-size instanton contribution is w_s ≈ 2/3.

  18. KITTEN Lightweight Kernel 0.1 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) applications based on MPI, PGAS, and OpenMP. Kitten provides unique capabilities such as physically contiguous application memory, transparent large-page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  19. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of INPT is shown by applying it to incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
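
    The core idea the abstract builds on can be sketched in a few lines of numpy: recover explicit sample coordinates from a kernel matrix so that any linear algorithm can then run on them. This is a hedged sketch of the batch NPT step only (the incremental INPT variant is not reproduced here); the RBF kernel and toy data are assumptions for illustration.

```python
import numpy as np

def npt_coordinates(K):
    """Batch NPT step: factor a PSD kernel matrix K as Y @ Y.T."""
    vals, vecs = np.linalg.eigh(K)
    vals = np.clip(vals, 0.0, None)      # guard against round-off negatives
    return vecs * np.sqrt(vals)          # rows are explicit sample coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))   # RBF Gram matrix
Y = npt_coordinates(K)
assert np.allclose(Y @ Y.T, K)   # linear methods on Y now act as kernel methods
```

    Any linear method applied to the rows of Y (e.g. ordinary PCA or LDA) then behaves as its kernelized counterpart.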

  20. On the small-x behavior of the orbital angular momentum distributions in QCD

    NASA Astrophysics Data System (ADS)

    Hatta, Yoshitaka; Yang, Dong-Jing

    2018-06-01

    We present the numerical solution of the leading-order QCD evolution equation for the orbital angular momentum distributions of quarks and gluons and discuss its implications for the nucleon spin sum rule. We observe that at small x, the gluon helicity and orbital angular momentum distributions are roughly of the same magnitude but with opposite signs, indicating a significant cancellation between them. A similar cancellation occurs in the quark sector. We explain analytically the reason for this cancellation.

  1. Progress on Complex Langevin simulations of a finite density matrix model for QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloch, Jacques; Glesaan, Jonas; Verbaarschot, Jacobus

    We study the Stephanov model, a random matrix model for QCD at finite density, using the complex Langevin algorithm. A naive implementation of the algorithm shows convergence towards the phase-quenched or quenched theory rather than the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of the algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and on the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications yields a substantial improvement of the complex Langevin evolution, and the final results still do not agree with the analytical predictions.

  2. New Methods in Non-Perturbative QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unsal, Mithat

    2017-01-31

    In this work, we investigate the properties of quantum chromodynamics (QCD) using newly developing mathematical and physical formalisms. Almost all of the mass in the visible universe emerges from QCD, which has a completely negligible microscopic mass content. An intimately related issue in QCD is the quark confinement problem. Answers to non-perturbative questions in QCD have remained largely elusive despite much effort over the years, and it is believed that ordinary perturbation theory is inadequate to address them. Perturbation theory gives a divergent asymptotic series (even when the theory is properly renormalized), and there are non-perturbative phenomena which never appear at any order in perturbation theory. Recently, a fascinating bridge between perturbation theory and non-perturbative effects has been found: a mathematical formalism called resurgence theory tells us that perturbative data and non-perturbative data are intimately related. Translated into the language of quantum field theory, it turns out that non-perturbative information is present in coded form in perturbation theory and can be decoded. We take advantage of this feature, which is particularly useful for understanding some unresolved mysteries of QCD from first principles. In particular, we use: a) circle compactifications, which provide a semi-classical window to study confinement and mass-gap problems, and calculable prototypes of the deconfinement phase transition; b) resurgence theory and transseries, which provide a unified framework for perturbative and non-perturbative expansions; c) analytic continuation of path integrals and Lefschetz thimbles, which may be useful to address the sign problem in QCD at finite density.

  3. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  4. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  5. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
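
    The first-level tuning idea in this abstract can be illustrated with a small numpy sketch: fit an LS-SVM for fixed kernel parameters, then choose the ARD-style per-dimension kernel parameters by minimising a training criterion with an extra regulariser acting on those parameters. The toy data, the gamma and mu values, and the coarse grid search standing in for the paper's gradient-based optimisation are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=40))   # toy binary labels (+/-1)

def rbf(X, log_ell):
    """ARD RBF kernel with one log-lengthscale per input dimension."""
    d2 = ((X[:, None, :] - X[None, :, :]) / np.exp(log_ell)) ** 2
    return np.exp(-0.5 * d2.sum(-1))

def lssvm_fit(K, y, gamma=10.0):
    """Standard LS-SVM training: one linear system in (b, alpha)."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                       # bias b, coefficients alpha

def criterion(log_ell, gamma=10.0, mu=0.1):
    K = rbf(X, log_ell)
    b, alpha = lssvm_fit(K, y, gamma)
    resid = y - (K @ alpha + b)
    # regularised training criterion + penalty on the kernel parameters
    return alpha @ K @ alpha + gamma * (resid ** 2).sum() + mu * (log_ell ** 2).sum()

# crude stand-in for gradient-based first-level optimisation:
# a coarse search over a shared log-lengthscale
best = min(np.linspace(-2.0, 2.0, 41), key=lambda t: criterion(np.full(3, t)))
```

    The point of the construction is that only gamma and mu remain to be set at the second level, however many kernel parameters there are.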

  6. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. More than 20 million literature abstracts are included in MEDLINE, the most authoritative textual database in the field of biomedicine, and the collection grows exponentially over time. This rapid expansion of the biomedical literature is difficult to absorb or analyze manually, so efficient and automated search engines based on text-mining techniques are necessary to explore it. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on four other evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two methods fusing the feature kernel and the tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively, indicating that both kernel-fusion methods outperform a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in overall performance. Experiments show that the multi-kernel method performs better than the three separate single-kernel methods and the two mutually fused dual-kernel methods used here, across five corpus sets.

  7. Remarks on the Phase Transition in QCD

    NASA Astrophysics Data System (ADS)

    Wilczek, Frank

    The significance of the question of the order of the phase transition in QCD, and recent evidence that real-world QCD is probably close to having a single second order transition as a function of temperature, is reviewed. Although this circumstance seems to remove the possibility that the QCD transition during the big bang might have had spectacular cosmological consequences, there is some good news: it allows highly non-trivial yet reliable quantitative predictions to be made for the behavior near the transition. These predictions can be tested in numerical simulations and perhaps even eventually in heavy ion collisions. The present paper is a very elementary discussion of the relevant concepts, meant to be an accessible introduction for those innocent of the renormalization group approach to critical phenomena and/or the details of QCD.

  8. Towards understanding Regge trajectories in holographic QCD

    NASA Astrophysics Data System (ADS)

    Catà, Oscar

    2007-05-01

    We reassess a work done by Migdal on the spectrum of low-energy vector mesons in QCD in the light of the anti-de Sitter (AdS)-QCD correspondence. Recently, a tantalizing parallelism was suggested between Migdal’s work and a family of holographic duals of QCD. Despite the intriguing similarities, both approaches face a major drawback: the spectrum is in conflict with well-tested Regge scaling. However, it has recently been shown that holographic duals can be modified to accommodate Regge behavior. Therefore, it is interesting to understand whether Regge behavior can also be achieved in Migdal’s approach. In this paper we investigate this issue. We find that Migdal’s approach, which is based on a modified Padé approximant, is closely related to the issue of quark-hadron duality breakdown in QCD.

  9. Dyonic Flux Tube Structure of Nonperturbative QCD Vacuum

    NASA Astrophysics Data System (ADS)

    Chandola, H. C.; Pandey, H. C.

    We study the flux tube structure of the nonperturbative QCD vacuum in terms of its dyonic excitations by using an infrared effective Lagrangian, and show that the dyonic condensation of the QCD vacuum has a close connection with the process of color confinement. Using the fiber bundle formulation of QCD, the magnetic symmetry condition is presented in a gauge-covariant form and the gauge potential is constructed in terms of the magnetic vectors on global sections. The dynamical breaking of the magnetic symmetry is shown to lead to the dyonic condensation of the QCD vacuum in the infrared energy sector. Deriving the asymptotic solutions of the field equations in the dynamically broken phase, the dyonic flux tube structure of the QCD vacuum is explored, which yields the confinement parameters in terms of the vector and scalar mass modes of the condensed vacuum. Evaluating the charge quantum numbers and energy associated with the dyonic flux tube solutions, the effect of the electric excitation of the monopole is analyzed using the Regge slope parameter (as an input parameter), and an enhancement in the dyonic pair correlations and the confining properties of the QCD vacuum in its dyonically condensed mode is demonstrated.

  10. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... generally conforms to the “light” or “light amber” classification, that color classification may be used to... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be...

  11. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... generally conforms to the “light” or “light amber” classification, that color classification may be used to... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be...

  12. QCD tests in p-pbar collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huth, John E.; Mangano, Michelangelo L.

    1993-02-01

    We review the status of QCD tests in high energy p-pbar collisions. Contents: i) Introduction ii) QCD in Hadronic Collisions iii) Jet Production iv) Heavy Flavour Production v) W and Z Production vi) Direct Photons.

  13. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  14. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel, derived from a recent technique for numerically evaluating the exact thin wire kernel. It should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel differ from those of the wire kernel itself, and the axial and radial components have different singularities; these characteristics are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the gradient of the wire kernel must be integrated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  15. Vortical susceptibility of finite-density QCD matter

    DOE PAGES

    Aristova, A.; Frenklakh, D.; Gorsky, A.; ...

    2016-10-07

    Here, the susceptibility of finite-density QCD matter to vorticity is introduced, as an analog of magnetic susceptibility. It describes the spin polarization of quarks and antiquarks in finite-density QCD matter induced by rotation. We estimate this quantity in the chirally broken phase using the mixed gauge-gravity anomaly at finite baryon density. It is proposed that the vortical susceptibility of QCD matter is responsible for the polarization of Λ and Λ̄ hyperons observed recently in heavy ion collisions at RHIC by the STAR collaboration.

  16. THERMOS. 30-Group ENDF/B Scattered Kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrosson, F.J.; Finch, D.R.

    1973-12-01

    These data are 30-group THERMOS thermal scattering kernels, for Legendre orders P0 to P5, for every temperature of every material with s(alpha,beta) data stored in the ENDF/B library. The scattering kernels were generated using the FLANGE2 computer code. To test the kernels, the integral properties of each set were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties; in general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library.

  17. The generalized scheme-independent Crewther relation in QCD

    NASA Astrophysics Data System (ADS)

    Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.

    2017-07-01

    The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively calculable QCD process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton-nucleon scattering times the Adler function, defined from the cross section for electron-positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function D_ns to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering, C_Bjp, at leading twist. A scheme-dependent ΔCSB term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of renormalization scheme and the choice of initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both D_ns and the inverse coefficient C_Bjp^(-1) have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD, α̂_d(Q) = Σ_{i≥1} α̂_{g1}^i(Q_i), which connects two effective charges at their respective physical scales. This identity is independent of the choice of renormalization scheme at any finite order, and the dependence on the choice of initial scale is negligible.

  18. AdS/QCD and Light Front Holography: A New Approximation to QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.; de Teramond, Guy

    2010-02-15

    The combination of Anti-de Sitter space (AdS) methods with light-front holography leads to a semi-classical first approximation to the spectrum and wavefunctions of meson and baryon light-quark bound states. Starting from the bound-state Hamiltonian equation of motion in QCD, we derive relativistic light-front wave equations in terms of an invariant impact variable ζ which measures the separation of the quark and gluonic constituents within the hadron at equal light-front time. These equations of motion in physical space-time are equivalent to the equations of motion which describe the propagation of spin-J modes in anti-de Sitter (AdS) space. Its eigenvalues give the hadronic spectrum, and its eigenmodes represent the probability distribution of the hadronic constituents at a given scale. Applications to the light meson and baryon spectra are presented. The predicted meson spectrum has a string-theory Regge form M^2 = 4κ^2(n + L + S/2); i.e., the square of the eigenmass is linear in both L and n, where n counts the number of nodes of the wavefunction in the radial variable ζ. The space-like pion form factor is also well reproduced. One thus obtains a remarkable connection between the description of hadronic modes in AdS space and the Hamiltonian formulation of QCD in physical space-time quantized on the light-front at fixed light-front time τ. The model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian, or by applying the Lippmann-Schwinger method to systematically include the QCD interaction terms.
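
    The quoted Regge form M^2 = 4κ^2(n + L + S/2) is simple enough to tabulate directly. The sketch below assumes a typical light-meson value κ ≈ 0.54 GeV, which is an illustrative choice, not a number taken from this record.

```python
import math

# Light-front holographic Regge form quoted in the abstract:
#   M^2 = 4 * kappa^2 * (n + L + S/2)
# kappa ~ 0.54 GeV is an assumed illustrative value.
KAPPA = 0.54  # GeV

def meson_mass(n, L, S, kappa=KAPPA):
    """Meson mass in GeV for radial excitation n, orbital L, spin S."""
    return 2.0 * kappa * math.sqrt(n + L + S / 2.0)

# The pion-like ground state (n = L = S = 0) is massless at this order,
# and M^2 grows linearly in both n and L along a trajectory.
print(meson_mass(0, 1, 0))   # first orbital excitation: 2*kappa = 1.08 GeV
```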

  19. Lattice analysis for the energy scale of QCD phenomena.

    PubMed

    Yamamoto, Arata; Suganuma, Hideo

    2008-12-12

    We formulate a new framework in lattice QCD to study the relevant energy scale of QCD phenomena. By considering the Fourier transformation of the link variable, we can investigate the intrinsic energy scale of a physical quantity nonperturbatively. This framework is broadly applicable to all lattice QCD calculations. We apply it to the quark-antiquark potential and meson masses in quenched lattice QCD. The gluonic energy scale relevant for confinement is found to be less than 1 GeV in the Landau or Coulomb gauge.

  20. θ and the η′ in large-N supersymmetric QCD

    DOE PAGES

    Dine, Michael; Draper, Patrick; Stephenson-Haskins, Laurel; ...

    2017-05-22

    Here, we study the large-N θ dependence and the η′ potential in supersymmetric QCD with small soft SUSY-breaking terms. Known exact results in SUSY QCD are found to reflect a variety of expectations from large-N perturbation theory, including the presence of branches and the behavior of theories with matter (both with N_f << N and N_f ~ N). But there are also striking departures from ordinary QCD and the conventional large-N description: instanton effects, when under control, are not exponentially suppressed at large N, and branched structure in supersymmetric QCD is always associated with approximate discrete symmetries. We suggest that these differences motivate further study of large-N QCD on the lattice.

  1. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm: it uses kernel learning, which allows it to handle data that are not linearly separable, unlike ordinary k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well, considerably better than SOM.
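
    A minimal kernel k-means can be written directly from the kernel trick: squared distances to cluster centroids in feature space depend only on the Gram matrix. The sketch below is a generic illustration (random-label initialisation, RBF kernel, toy blobs), not the authors' implementation.

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """Minimal kernel k-means sketch; K is an (n, n) kernel (Gram) matrix."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            members = labels == c
            m = members.sum()
            if m == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - mu_c||^2 = K_ii - (2/m) sum_j K_ij + (1/m^2) sum_jl K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, members].sum(1) / m
                          + K[np.ix_(members, members)].sum() / m**2)
        new = dist.argmin(1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# toy example: an RBF kernel on two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2)
labels = kernel_kmeans(K, 2)
```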

  2. Update on ɛK with lattice QCD inputs

    NASA Astrophysics Data System (ADS)

    Jang, Yong-Chull; Lee, Weonjong; Lee, Sunkyu; Leem, Jaehoon

    2018-03-01

    We report updated results for ε_K, the indirect CP violation parameter in neutral kaons, evaluated directly from the standard model with lattice QCD inputs. We use lattice QCD inputs to fix B̂_K, |V_cb|, ξ_0, ξ_2, |V_us|, and m_c(m_c). Since Lattice 2016, the UTfit group has updated the Wolfenstein parameters in the angle-only-fit method, and the HFLAV group has also updated |V_cb|. Our results show that the evaluation of ε_K with exclusive |V_cb| (lattice QCD inputs) has a 4.0σ tension with the experimental value, while that with inclusive |V_cb| (heavy quark expansion based on the OPE and QCD sum rules) shows no tension.

  3. Brain tumor image segmentation using kernel dictionary learning.

    PubMed

    Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H

    2015-08-01

    Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested on real brain magnetic resonance (MR) images of patients with high-grade glioma. The preliminary performance obtained is competitive with the state of the art. The discriminative kernel DL approach is seen to reduce the computational burden without much sacrifice in performance.

  4. Pion distribution amplitude from lattice QCD.

    PubMed

    Cloët, I C; Chang, L; Roberts, C D; Schmidt, S M; Tandy, P C

    2013-08-30

    A method is explained through which a pointwise accurate approximation to the pion's valence-quark distribution amplitude (PDA) may be obtained from a limited number of moments. In connection with the single nontrivial moment accessible in contemporary simulations of lattice-regularized QCD, the method yields a PDA that is a broad concave function whose pointwise form agrees with that predicted by Dyson-Schwinger equation analyses of the pion. Under leading-order evolution, the PDA remains broad to energy scales in excess of 100 GeV, a feature which signals persistence of the influence of dynamical chiral symmetry breaking. Consequently, the asymptotic distribution φ_π^asy(x) is a poor approximation to the pion's PDA at all such scales that are either currently accessible or foreseeable in experiments on pion elastic and transition form factors. Thus, related expectations based on φ_π^asy(x) should be revised.

  5. Development of a kernel function for clinical data.

    PubMed

    Daemen, Anneleen; De Moor, Bart

    2009-01-01

    For most diseases and examinations, clinical data such as age, gender, and medical history guide clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of the relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for such data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function than with the linear kernel function.
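
    A per-variable similarity of the kind the abstract describes can be sketched as follows: continuous variables are compared relative to their observed range, categorical variables by equality, and the overall kernel is the mean over variables. The exact form used in the paper may differ; the normalisation and the toy patients below are assumptions for illustration.

```python
import numpy as np

def clinical_kernel(x, z, ranges, is_categorical):
    """Hedged sketch of a per-variable clinical similarity, averaged over variables."""
    sims = []
    for xi, zi, r, cat in zip(x, z, ranges, is_categorical):
        if cat:
            sims.append(1.0 if xi == zi else 0.0)   # categorical: exact match
        else:
            sims.append((r - abs(xi - zi)) / r)     # continuous: in [0, 1] within range
    return float(np.mean(sims))

# toy patients: (age, gender code); age range assumed to span 60 years
a = (45.0, 1)
b = (55.0, 1)
sim = clinical_kernel(a, b, ranges=[60.0, None], is_categorical=[False, True])
# age similarity (60 - 10)/60, gender match 1.0, averaged
```

    Unlike a linear kernel on raw values, this keeps every variable's contribution bounded and comparable regardless of its units.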

  6. Towards the Geometry of Reproducing Kernels

    NASA Astrophysics Data System (ADS)

    Galé, J. E.

    2010-11-01

    It is shown here how one is naturally led to consider a category whose objects are reproducing kernels of Hilbert spaces, and how in this way a differential geometry for such kernels may be established.

  7. Deep Sequencing of RNA from Ancient Maize Kernels

    PubMed Central

    Rasmussen, Morten; Cappellini, Enrico; Romero-Navarro, J. Alberto; Wales, Nathan; Alquezar-Planas, David E.; Penfield, Steven; Brown, Terence A.; Vielle-Calzada, Jean-Philippe; Montiel, Rafael; Jørgensen, Tina; Odegaard, Nancy; Jacobs, Michael; Arriaza, Bernardo; Higham, Thomas F. G.; Ramsey, Christopher Bronk; Willerslev, Eske; Gilbert, M. Thomas P.

    2013-01-01

    The characterization of biomolecules from ancient samples can shed otherwise unobtainable insights into the past. Despite the fundamental role of transcriptomal change in evolution, the potential of ancient RNA remains unexploited – perhaps due to dogma associated with the fragility of RNA. We hypothesize that seeds offer a plausible refuge for long-term RNA survival, due to the fundamental role of RNA during seed germination. Using RNA-Seq on cDNA synthesized from nucleic acid extracts, we validate this hypothesis through demonstration of partial transcriptomal recovery from two sources of ancient maize kernels. The results suggest that ancient seed transcriptomics may offer a powerful new tool with which to study plant domestication. PMID:23326310

  8. Recent development in lattice QCD studies for three-nucleon forces

    NASA Astrophysics Data System (ADS)

    Doi, Takumi; HAL QCD Collaboration

    2014-09-01

    The direct determination of nuclear forces from QCD has been one of the most desirable challenges in nuclear physics. Recently, a first-principles lattice QCD determination has become possible through a novel theoretical method, the HAL QCD method, in which Nambu-Bethe-Salpeter (NBS) wave functions are utilized. In this talk, I focus on the study of three-nucleon forces in the HAL QCD method, presenting recent theoretical and numerical developments.

  9. Kernel-PCA data integration with enhanced interpretability

    PubMed Central

    2014-01-01

    Background: Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Kernel-based methods are among the most powerful for integrating heterogeneous data types. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
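    The two-step recipe in this abstract (choose one kernel per data set, then combine the kernels before kernel PCA) can be sketched in a few lines of numpy. The RBF kernels, bandwidths, and the unweighted average below are illustrative assumptions, not choices taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    # Two hypothetical data sources describing the same n samples
    X1 = rng.normal(size=(n, 5))   # e.g. expression-like features
    X2 = rng.normal(size=(n, 8))   # e.g. methylation-like features

    def rbf_kernel(X, gamma):
        """Gram matrix of the Gaussian (RBF) kernel."""
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
        return np.exp(-gamma * d2)

    # Step 1: choose a kernel for each data set
    K1 = rbf_kernel(X1, gamma=0.1)
    K2 = rbf_kernel(X2, gamma=0.05)

    # Step 2: combine the kernels (unweighted average here) and run kernel PCA
    K = 0.5 * (K1 + K2)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = J @ K @ J                        # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(Kc)
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]

    # Projections of the samples onto the first two kernel principal components
    scores = evecs[:, :2] * np.sqrt(np.maximum(evals[:2], 0))
    print(scores.shape)   # (30, 2)
    ```

    The paper's added value lies in projecting the input variables back onto this plot; the sketch stops at the sample scores, which is the shared starting point of any kernel PCA integration.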

  10. Gaussian mass optimization for kernel PCA parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Wang, Zulin

    2011-10-01

    This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome the current brute-force parameter optimization methods in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between the samples, the most commonly used kernel parameter, does not capture many features of the target; this motivates the Gaussian mass. The Gaussian mass defined in this paper is invariant under rotation and translation and is capable of describing edge, topology and shape information. Simulation results show that Gaussian mass provides a promising heuristic boost for kernel methods. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are proposed at the end of the paper.

  11. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the background noise texture present when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel, which combines a low-pass and a high-pass kernel into a single "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. The Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.

  12. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.

  13. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
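    The weighted eigenfunction expansion described above can be illustrated on the simplest closed manifold, the unit circle, where the Laplace-Beltrami eigenfunctions are just the Fourier modes cos(kt), sin(kt) with eigenvalues k². The band limit and diffusion time below are arbitrary assumptions for the sketch, not values from the paper:

    ```python
    import numpy as np

    # Heat-kernel regression on the unit circle: damp each eigenfunction
    # coefficient by exp(-sigma * lambda), equivalent to running isotropic
    # heat diffusion for time sigma.
    n, kmax, sigma = 200, 20, 0.05   # grid size, band limit, diffusion time (assumed)
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    y = np.sign(np.sin(3 * t)) + 0.2 * np.random.default_rng(1).normal(size=n)

    # Orthonormal design matrix of eigenfunctions with their eigenvalues
    cols, lams = [np.ones(n) / np.sqrt(n)], [0.0]
    for k in range(1, kmax + 1):
        cols += [np.cos(k * t) / np.sqrt(n / 2), np.sin(k * t) / np.sqrt(n / 2)]
        lams += [k**2, k**2]
    Phi, lam = np.column_stack(cols), np.array(lams)

    beta = Phi.T @ y                              # expansion coefficients
    smooth = Phi @ (np.exp(-sigma * lam) * beta)  # heat-kernel-weighted expansion
    ```

    On a surface mesh, Phi would instead hold numerically computed Laplace-Beltrami eigenfunctions, but the damping step is identical; since every non-constant coefficient is shrunk, the smoothed signal has strictly lower variance than the input.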

  14. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.

  15. Lattice QCD Studies of Transverse Momentum-Dependent Parton Distribution Functions

    NASA Astrophysics Data System (ADS)

    Engelhardt, M.; Musch, B.; Hägler, P.; Negele, J.; Schäfer, A.

    2015-09-01

    Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped gauge link. Such a definition opens the possibility of evaluating TMDs within lattice QCD. By parametrizing the aforementioned matrix elements in terms of invariant amplitudes, the problem can be cast in a Lorentz frame suited for the lattice calculation. Results for selected TMD observables are presented, including a particular focus on their dependence on a Collins-Soper-type evolution parameter, which quantifies proximity of the staple-shaped gauge links to the light cone.

  16. QCD on the BlueGene/L Supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004, QCD was simulated for the first time at a sustained speed exceeding 1 TeraFlops on the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD on the BlueGene/L are presented.

  17. Dimensional Transmutation by Monopole Condensation in QCD

    NASA Astrophysics Data System (ADS)

    Cho, Y. M.

    2015-01-01

    The dimensional transmutation by monopole condensation in QCD is reviewed. Using the Abelian projection of the gauge potential, which projects out the monopole potential gauge independently, we show that there are two types of gluons: the color-neutral binding gluons, which play the role of the confining agent, and the colored valence gluons, which become confined prisoners. With this we calculate the one-loop QCD effective potential and show that the monopole condensation becomes the true vacuum of QCD. We propose to test the existence of the two types of gluons experimentally by re-analyzing the existing gluon-jet data.

  18. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  19. The generalized scheme-independent Crewther relation in QCD

    DOE PAGES

    Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; ...

    2017-05-10

    The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively calculable QCD process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton–nucleon scattering times the Adler function, defined from the cross section for electron–positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function (D_ns) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering (C_Bjp) at leading twist. A scheme-dependent ΔCSB term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both D_ns and the inverse coefficient C_Bjp^{-1} have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, α̂_d(Q) = Σ_{i≥1} α̂^i_{g1}(Q_i), at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on

  1. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
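    The framework described here, decomposing a multineuron recording into per-neuron spike trains and combining all cross-neuron kernel values linearly, can be sketched as follows. The Laplacian single-neuron kernel, its time constant, and the toy spike times are assumptions for illustration; the paper's actual subclasses and parametrization may differ:

    ```python
    import numpy as np

    def spike_kernel(s, t, tau=0.1):
        """Single-neuron spike train kernel: sum of Laplacian bumps over all
        spike pairs (a standard positive-definite choice; tau is assumed)."""
        if len(s) == 0 or len(t) == 0:
            return 0.0
        return float(np.sum(np.exp(-np.abs(np.subtract.outer(s, t)) / tau)))

    def multineuron_kernel(S, T, W):
        """R-convolution linear combination kernel: evaluate the single-neuron
        kernel for every neuron pair and combine the values with a positive
        semidefinite weight matrix W."""
        K = np.array([[spike_kernel(si, tj) for tj in T] for si in S])
        return float(np.sum(W * K))

    # Two hypothetical recordings from 3 neurons (spike times in seconds)
    S = [np.array([0.1, 0.5]), np.array([0.2]), np.array([])]
    T = [np.array([0.12, 0.48]), np.array([0.6]), np.array([0.3])]
    W = np.eye(3)   # identity weights recover the simple "sum kernel"
    value = multineuron_kernel(S, T, W)
    ```

    Choosing W = I gives the sum kernel mentioned in the abstract; restricting W to a small parametric family is what keeps optimization tractable on limited data.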

  2. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, such as different versions of EM, and numeric optimization methods such as conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allow AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
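    One simple way to realize a Mercer kernel derived from a mixture density is to map each sample to its vector of posterior component responsibilities and take an inner product; a Gram matrix built this way is positive semidefinite by construction. The fixed two-component 1-D Gaussian mixture below is a toy stand-in, not the Bayesian, AUTOBAYES-generated mixture of the paper:

    ```python
    import numpy as np

    # Fixed toy mixture model (means, standard deviations, mixing weights)
    means = np.array([-2.0, 2.0])
    sds = np.array([1.0, 1.0])
    weights = np.array([0.5, 0.5])

    def responsibilities(x):
        """Posterior probability of each mixture component for each sample."""
        dens = weights * np.exp(-(x[:, None] - means) ** 2 / (2 * sds**2)) / sds
        return dens / dens.sum(axis=1, keepdims=True)

    x = np.array([-3.0, -1.5, 1.5, 3.0])
    R = responsibilities(x)
    K = R @ R.T   # mixture-density kernel: symmetric and PSD by construction
    ```

    Samples drawn from the same mixture component get responsibility vectors that nearly coincide, so the kernel assigns them high similarity, which is the sense in which the density model injects prior knowledge into the kernel.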

  3. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was < 10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to < 10^-11 and invasion time error to < 5%.
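    The cell-center versus cell-integration contrast at the heart of this abstract is easy to reproduce in one dimension: sample the Gaussian at cell centers, or assign each cell the probability mass that actually falls inside it. The grid half-width and σ values below are illustrative, not the study's:

    ```python
    import numpy as np
    from math import erf

    def gaussian_cdf(x, sigma):
        return 0.5 * (1 + erf(x / (sigma * np.sqrt(2))))

    def kernel_1d(sigma, half_width, method):
        """Discretize a 1-D Gaussian dispersal kernel on a unit grid."""
        centers = np.arange(-half_width, half_width + 1)
        if method == "center":
            w = np.exp(-centers**2 / (2 * sigma**2))   # sample at cell centers
        else:  # "integrate": probability mass inside each cell
            w = np.array([gaussian_cdf(c + 0.5, sigma) - gaussian_cdf(c - 0.5, sigma)
                          for c in centers])
        return w / w.sum()   # normalize to a probability kernel

    # For small sigma the two discretizations diverge; for large sigma they agree
    for sigma in (0.2, 2.0):
        kc = kernel_1d(sigma, 5, "center")
        ki = kernel_1d(sigma, 5, "integrate")
        print(sigma, float(np.abs(kc - ki).max()))
    ```

    With σ well below the cell size, the center method collapses toward a delta function while the integrated kernel still spreads mass correctly, which is exactly why the paper's small-σ error thresholds differ between the two methods.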

  4. Going Beyond QCD in Lattice Gauge Theory

    NASA Astrophysics Data System (ADS)

    Fleming, G. T.

    2011-01-01

    Strongly coupled gauge theories (SCGT's) have been studied theoretically for many decades using numerous techniques. The obvious motivation for these efforts stemmed from a desire to understand the source of the strong nuclear force: Quantum Chromo-dynamics (QCD). Guided by experimental results, theorists generally consider QCD to be a well-understood SCGT. Unfortunately, it is not clear how to extend the lessons learned from QCD to other SCGT's. Particularly urgent motivators for new studies of other SCGT's are the ongoing searches for physics beyond the standard model (BSM) at the Large Hadron Collider (LHC) and the Tevatron. Lattice gauge theory (LGT) is a technique for systematically-improvable calculations in many SCGT's. It has become the standard for non-perturbative calculations in QCD and it is widely believed that it may be useful for study of other SCGT's in the realm of BSM physics. We will discuss the prospects and potential pitfalls for these LGT studies, focusing primarily on the flavor dependence of SU(3) gauge theory.

  5. The Top Quark, QCD, And New Physics.

    DOE R&D Accomplishments Database

    Dawson, S.

    2002-06-01

    The role of the top quark in completing the Standard Model quark sector is reviewed, along with a discussion of production, decay, and theoretical restrictions on the top quark properties. Particular attention is paid to the top quark as a laboratory for perturbative QCD. As examples of the relevance of QCD corrections in the top quark sector, the calculation of e⁺e⁻ → tt̄ at next-to-leading-order QCD using the phase space slicing algorithm and the implications of a precision measurement of the top quark mass are discussed in detail. The associated production of a tt̄ pair and a Higgs boson in either e⁺e⁻ or hadronic collisions is presented at next-to-leading-order QCD and its importance for a measurement of the top quark Yukawa coupling is emphasized. Implications of the heavy top quark mass for model builders are briefly examined, with the minimal supersymmetric Standard Model and topcolor discussed as specific examples.

  6. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
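    The linear combination of a radial basis kernel and a polynomial kernel described in this abstract can be sketched directly as a Gram-matrix function. The mixing weight, γ, degree, and the synthetic "monthly flowrate" series below are assumptions; kernel ridge regression is used here as a stand-in solver, since any kernel machine (including the paper's SVM) would consume the same hybrid Gram matrix:

    ```python
    import numpy as np

    def hybrid_kernel(X, Y, a=0.7, gamma=0.5, degree=2):
        """Convex combination of an RBF kernel and a polynomial kernel."""
        sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
        rbf = np.exp(-gamma * (sq - 2 * X @ Y.T))
        poly = (X @ Y.T + 1.0) ** degree
        return a * rbf + (1 - a) * poly

    # Toy seasonal flow series; predict each month from the 3 previous months
    rng = np.random.default_rng(0)
    t = np.arange(120)
    flow = 10 + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=120)
    X = np.column_stack([flow[i:i + 117] for i in range(3)])
    y = flow[3:]
    Xtr, ytr, Xte, yte = X[:100], y[:100], X[100:], y[100:]

    # Kernel ridge regression with the hybrid Gram matrix
    lam = 1e-3
    alpha = np.linalg.solve(hybrid_kernel(Xtr, Xtr) + lam * np.eye(100), ytr)
    pred = hybrid_kernel(Xte, Xtr) @ alpha
    rmse = np.sqrt(np.mean((pred - yte) ** 2))
    ```

    Because both component kernels are positive definite, any convex combination is too, so the hybrid can be dropped into an SVM solver unchanged; the weight a is then one more hyperparameter to tune.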

  7. Graph wavelet alignment kernels for drug virtual screening.

    PubMed

    Smalter, Aaron; Huan, Jun; Lushington, Gerald

    2009-06-01

    In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing local graph topology. We design a novel graph kernel function that uses the topology features to build predictive models for chemicals via a Support Vector Machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the use of the kernel function yields performance profiles comparable to, and sometimes exceeding, those of existing state-of-the-art chemical classification approaches. In addition, our results show that the use of wavelet functions significantly decreases the computational cost of graph kernel computation, with a more than tenfold speedup.

  8. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
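    The idea of a mean-square-optimal kernel with constrained spatial support can be demonstrated with a simplified 1-D analogue: simulate scenes through a blur-plus-noise imaging chain, then regress the true scenes on shifted copies of the degraded signals, one regression column per kernel tap. The scene statistics, blur, noise level, and 5-tap support below are assumptions, not the paper's end-to-end system model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, taps = 256, 5                       # signal length and kernel support (assumed)
    psf = np.array([0.25, 0.5, 0.25])      # toy point-spread function

    def degrade(scene):
        """Circular blur plus additive noise (1-D stand-in for an imaging chain)."""
        blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(psf, n)))
        return blurred + 0.05 * rng.normal(size=n)

    # Correlated random signals standing in for the scene statistics
    scenes = rng.normal(size=(50, n)).cumsum(axis=1)
    scenes -= scenes.mean(axis=1, keepdims=True)
    degraded = np.array([degrade(s) for s in scenes])

    # Support-constrained Wiener estimate: least-squares fit of the scenes
    # against shifted copies of the degraded signals (one column per tap)
    shifts = range(-(taps // 2), taps // 2 + 1)
    A = np.concatenate([np.column_stack([np.roll(d, -s) for s in shifts])
                        for d in degraded])
    b = scenes.ravel()
    kernel, *_ = np.linalg.lstsq(A, b, rcond=None)

    err_restored = np.mean((A @ kernel - b) ** 2)
    err_degraded = np.mean((degraded - scenes) ** 2)
    ```

    The identity kernel (a delta at shift 0) lies inside the feasible set, so the fitted 5-tap kernel can never do worse in-sample than leaving the degraded signal untouched; the unconstrained Wiener filter is the limit as the support grows to the full signal length.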

  9. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts Gauss elimination to extract a set of feature vectors; it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space equals that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM requires simpler computation and less storage space, especially during testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification.
    The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  10. Recent QCD Studies at the Tevatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Group, Robert Craig

    2008-04-01

    Since the beginning of Run II at the Fermilab Tevatron, the QCD physics groups of the CDF and D0 experiments have worked to reach unprecedented levels of precision for many QCD observables. Thanks to the large dataset (over 3 fb⁻¹ of integrated luminosity recorded by each experiment), important new measurements have recently been made public and will be summarized in this paper.

  11. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  12. Enhanced gluten properties in soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  13. Accelerating the Original Profile Kernel.

    PubMed

    Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard

    2013-01-01

    One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster and render the kernel as possibly the top contender in a low ratio of speed/performance. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.

  14. Kaon-Nucleon potential from lattice QCD

    NASA Astrophysics Data System (ADS)

    Ikeda, Y.; Aoki, S.; Doi, T.; Hatsuda, T.; Inoue, T.; Ishii, N.; Murano, K.; Nemura, H.; Sasaki, K.

    2010-04-01

    We study the KN interactions in the I(Jπ) = 0(1/2⁻) and 1(1/2⁻) channels and the associated exotic state Θ⁺ from a 2+1 flavor full lattice QCD simulation at a relatively heavy quark mass corresponding to mπ = 871 MeV. The s-wave KN potentials are obtained from the Bethe-Salpeter wave function using the method recently developed by the HAL QCD (Hadrons to Atomic nuclei from Lattice QCD) Collaboration. Potentials in both channels reveal short-range repulsion: the repulsion is stronger in the I = 1 potential, which is consistent with the prediction of the Tomozawa-Weinberg term. The I = 0 potential is found to have an attractive well at mid-range. From these potentials, the KN scattering phase shifts are calculated and compared with the experimental data.

  15. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing, manufacturing, packing, processing, preparing, treating...

  16. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  17. End-use quality of soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  18. QCD PHASE TRANSITIONS-VOLUME 15.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SCHAFER,T.

    1998-11-04

    The title of the workshop, ''The QCD Phase Transitions'', in fact turned out to be too narrow for its real contents. It would be more accurate to say that it was devoted to different phases of QCD and QCD-related gauge theories, with strong emphasis on the underlying non-perturbative mechanisms that manifest themselves in all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its study does not get attention proportional to the intellectual challenge it deserves, and that the theorists working on it remain very fragmented. Efforts to create a Theory of Everything, including Quantum Gravity, have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there has also been tremendous progress, and even some shift of attention toward the unity of non-perturbative phenomena. For example, we have seen efforts to connect the lessons from recent progress in supersymmetric theories with those in QCD, as derived from phenomenology and the lattice. Another example is the Maldacena conjecture and related developments, which connect three things: string theory, supergravity, and the N=4 supersymmetric gauge theory. Although the progress mentioned is remarkable by itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of this spectacular non-perturbative physics.

  19. QCD Phase Transitions, Volume 15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, T.; Shuryak, E.

    1999-03-20

    The title of the workshop, ''The QCD Phase Transitions'', in fact turned out to be too narrow for its real contents. It would be more accurate to say that it was devoted to different phases of QCD and QCD-related gauge theories, with strong emphasis on the underlying non-perturbative mechanisms that manifest themselves in all those phases. Before we go to specifics, let us emphasize one important aspect of the present status of non-perturbative Quantum Field Theory in general. It remains true that its study does not get attention proportional to the intellectual challenge it deserves, and that the theorists working on it remain very fragmented. Efforts to create a Theory of Everything, including Quantum Gravity, have attracted the lion's share of attention and young talent. Nevertheless, in the last few years there has also been tremendous progress, and even some shift of attention toward the unity of non-perturbative phenomena. For example, we have seen efforts to connect the lessons from recent progress in supersymmetric theories with those in QCD, as derived from phenomenology and the lattice. Another example is the Maldacena conjecture and related developments, which connect three things: string theory, supergravity, and the N=4 supersymmetric gauge theory. Although the progress mentioned is remarkable by itself, if we listened to each other more we might have a chance to strengthen the field and reach a better understanding of this spectacular non-perturbative physics.

  20. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    PubMed

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  1. Improved modeling of clinical data with kernel methods.

    PubMed

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of the relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of each variable. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with the use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
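
    A minimal sketch of such a clinical kernel function, assuming the per-variable form described in the underlying paper: continuous or ordinal variables contribute (r - |x_i - z_i|)/r, nominal variables contribute 1 on an exact match and 0 otherwise, and the patient similarity is the average over variables. The patient variables and ranges below are invented for illustration.

```python
import numpy as np

def clinical_kernel(x, z, ranges, nominal):
    """Similarity between two patients, averaged over variables.

    For a continuous/ordinal variable i with observed range r_i, the
    per-variable similarity is (r_i - |x_i - z_i|) / r_i, so every
    variable contributes on the same [0, 1] scale regardless of its
    units.  Nominal variables contribute 1 on an exact match, else 0.
    """
    sims = []
    for i, r in enumerate(ranges):
        if nominal[i]:
            sims.append(1.0 if x[i] == z[i] else 0.0)
        else:
            sims.append((r - abs(x[i] - z[i])) / r)
    return float(np.mean(sims))

# Two hypothetical patients described by (age, parity, menopausal status)
ranges = [80.0, 10.0, None]      # observed ranges; None for the nominal variable
nominal = [False, False, True]
k = clinical_kernel([35, 2, 1], [55, 2, 0], ranges, nominal)
```

    Because each variable is rescaled by its own range, age (in years) and parity (a small count) carry equal weight, which is the equalizing effect described above.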

  2. Triso coating development progress for uranium nitride kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.

    2015-08-01

    In support of fully ceramic matrix (FCM) fuel development [1-2], coating development work is ongoing at the Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with UN kernels [3]. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets, with details described elsewhere [4]. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two-phase mixture of UO2 and UCx) kernels [5]. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels (Table 1).

  3. QCD inequalities for hadron interactions.

    PubMed

    Detmold, William

    2015-06-05

    We derive generalizations of the Weingarten-Witten QCD mass inequalities for particular multihadron systems. For systems of any number of identical pseudoscalar mesons of maximal isospin, these inequalities prove that near-threshold interactions between the constituent mesons must be repulsive and that no bound states can form in these channels. Similar constraints in less symmetric systems are also extracted. These results are compatible with experimental results (where known) and recent lattice QCD calculations, and also lead to a more stringent bound on the nucleon mass than previously derived, m_N ≥ (3/2) m_π.

  4. Hadron scattering, resonances, and QCD

    NASA Astrophysics Data System (ADS)

    Briceño, R. A.

    2016-11-01

    The non-perturbative nature of quantum chromodynamics (QCD) has historically left a gap in our understanding of the connection between the fundamental theory of the strong interactions and the rich structure of experimentally observed phenomena. For the simplest properties of stable hadrons, this is now circumvented with the use of lattice QCD (LQCD). In this talk I discuss a path towards a rigorous determination of few-hadron observables from LQCD. I illustrate the power of the methodology by presenting recently determined scattering amplitudes in the light-meson sector and their resonance content.

  5. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  6. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  7. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  8. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  9. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
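
    The analytic representation described above can be sketched numerically with a small graph Laplacian standing in for the Laplace-Beltrami operator of a surface mesh; the path graph, noisy signal, and bandwidth t below are illustrative only.

```python
import numpy as np

def heat_kernel_smooth(L, y, t, k=None):
    """Smooth a signal y defined on a mesh/graph with Laplacian L.

    Expand y in the Laplacian eigenfunctions psi_j and damp each
    coefficient by exp(-lambda_j * t): the analytic solution of
    isotropic heat diffusion, with no iterative PDE solve.
    """
    lam, psi = np.linalg.eigh(L)       # eigenpairs of the symmetric Laplacian
    if k is not None:                  # optionally truncate the expansion
        lam, psi = lam[:k], psi[:, :k]
    beta = psi.T @ y                   # expansion coefficients of y
    return psi @ (np.exp(-lam * t) * beta)

# Path-graph Laplacian as a 1-D stand-in for a triangulated surface
n = 50
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0.0, np.pi, n)) + 0.3 * rng.standard_normal(n)
smoothed = heat_kernel_smooth(L, y, t=5.0)
```

    Setting t = 0 returns the input unchanged, and increasing t damps the high-frequency eigenmodes more strongly, which is exactly the isotropic-diffusion behavior the kernel regression is equivalent to.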

  10. Gravitation waves from QCD and electroweak phase transitions

    NASA Astrophysics Data System (ADS)

    Chen, Yidian; Huang, Mei; Yan, Qi-Shu

    2018-05-01

    We investigate the gravitational waves produced from the QCD and electroweak phase transitions in the early universe by using a 5-dimensional holographic QCD model and a holographic technicolor model. The dynamical holographic QCD model describes the pure gluon system, in which a first-order confinement-deconfinement phase transition can happen at a critical temperature around 250 MeV. The minimal holographic technicolor model is introduced to model the strong dynamics of the electroweak sector; it can give a first-order electroweak phase transition at a critical temperature around 100-360 GeV. We find that for the GW signals produced from both the QCD and EW phase transitions, in the peak frequency region the dominant contribution comes from the sound waves, while away from the peak frequency region the contribution from bubble collisions is dominant. The peak frequency of the gravitational wave determined by the QCD phase transition is located around 10^-7 Hz, which is within the detectability of FAST and SKA, and the peak frequency of the gravitational wave predicted by the EW phase transition is located at 0.002-0.007 Hz, which might be detectable by BBO, DECIGO, LISA and eLISA.

  11. Disconnected Diagrams in Lattice QCD

    NASA Astrophysics Data System (ADS)

    Gambhir, Arjun Singh

    In this work, we present state-of-the-art numerical methods and their applications for computing a particular class of observables using lattice quantum chromodynamics (Lattice QCD), a discretized version of the fundamental theory of quarks and gluons. These observables require calculating so-called "disconnected diagrams" and are important for understanding many aspects of hadron structure, such as the strange content of the proton. We begin by introducing the reader to the key concepts of Lattice QCD and rigorously define the meaning of disconnected diagrams through an example of the Wick contractions of the nucleon. Subsequently, the calculation of observables requiring disconnected diagrams is posed as the computationally challenging problem of finding the trace of the inverse of an incredibly large, sparse matrix. This is followed by a brief primer of numerical sparse matrix techniques that overviews broadly used methods in Lattice QCD and builds the background for the novel algorithm presented in this work. We then introduce singular value deflation as a method to improve convergence of trace estimation and analyze its effects on matrices from a variety of fields, including chemical transport modeling, magnetohydrodynamics, and QCD. Finally, we apply this method to compute observables such as the strange axial charge of the proton and strange sigma terms in light nuclei. The work in this thesis is innovative for four reasons. First, we analyze the effects of deflation with a model that makes qualitative predictions about its effectiveness, taking only the singular value spectrum as input, and compare deflated variance with different types of trace estimator noise. Second, the synergy between probing methods and deflation is investigated both experimentally and theoretically. Third, we use the synergistic combination of deflation and a graph coloring algorithm known as hierarchical probing to conduct a lattice calculation of light disconnected matrix elements
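
    The core numerical task above, estimating the trace of the inverse of a large matrix, can be illustrated with a Hutchinson-style stochastic estimator combined with exact deflation of the lowest eigenmodes. This is a small dense-matrix cartoon under stated assumptions (a symmetric positive definite M, explicit inversion in place of a lattice solver), not the hierarchical-probing machinery of the thesis.

```python
import numpy as np

def hutchinson_trace_inv(M, n_samples=200, deflate_k=0, rng=None):
    """Estimate tr(M^-1) with Rademacher probes, optionally deflated.

    The trace is split into an exact sum over the deflate_k smallest
    eigenmodes plus a stochastic estimate on the deflated remainder;
    removing the largest contributions to M^-1 from the noisy part
    shrinks the estimator variance.
    """
    rng = rng or np.random.default_rng(0)
    n = M.shape[0]
    Minv = np.linalg.inv(M)            # stand-in for a sparse linear solver
    exact = 0.0
    P = np.eye(n)
    if deflate_k:
        lam, V = np.linalg.eigh(M)     # assumes M symmetric positive definite
        exact = np.sum(1.0 / lam[:deflate_k])   # lowest modes, done exactly
        Vk = V[:, :deflate_k]
        P -= Vk @ Vk.T                 # project the probes off those modes
    est = 0.0
    for _ in range(n_samples):
        z = P @ rng.choice([-1.0, 1.0], size=n)  # deflated Rademacher probe
        est += z @ Minv @ z
    return exact + est / n_samples

rng = np.random.default_rng(4)
B = rng.standard_normal((20, 20))
M = B @ B.T + 20.0 * np.eye(20)        # well-conditioned SPD test matrix
exact = np.trace(np.linalg.inv(M))
est = hutchinson_trace_inv(M, n_samples=500, deflate_k=4,
                           rng=np.random.default_rng(5))
```

    Handling the lowest modes exactly is what makes deflation effective: they dominate M^-1, so removing them from the stochastic part leaves a remainder with far smaller variance per probe.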

  12. A dynamic kernel modifier for linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minnich, R. G.

    2002-09-03

    Dynamic Kernel Modifier, or DKM, is a kernel module for Linux that allows user-mode programs to modify the execution of functions in the kernel without recompiling or modifying the kernel source in any way. Functions may be traced, either function entry only or function entry and exit; nullified; or replaced with some other function. For the tracing case, function execution results in the activation of a watchpoint. When the watchpoint is activated, the address of the function is logged in a FIFO buffer that is readable by external applications. The watchpoints are time-stamped with the resolution of the processor's high-resolution timers, which on most modern processors are accurate to a single processor tick. DKM is very similar to earlier systems such as the SunOS trace device or Linux TT. Unlike these two systems, and other similar systems, DKM requires no kernel modifications. DKM allows users to do initial probing of the kernel to look for performance problems, or even to resolve potential problems by turning functions off or replacing them. DKM watchpoints are not without cost: it takes about 200 nanoseconds to make a log entry on an 800 MHz Pentium III. The overhead numbers are actually competitive with other hardware-based trace systems, although DKM has less accuracy than an In-Circuit Emulator such as the American Arium. Once the user has zeroed in on a problem, other mechanisms with a higher degree of accuracy can be used. (Los Alamos National Laboratory is operated by the University of California for the National Nuclear Security Administration of the United States Department of Energy under contract W-7405-ENG-36.)

  13. Independent genetic control of maize (Zea mays L.) kernel weight determination and its phenotypic plasticity.

    PubMed

    Alvarez Prado, Santiago; Sadras, Víctor O; Borrás, Lucas

    2014-08-01

    Maize kernel weight (KW) is associated with the duration of the grain-filling period (GFD) and the rate of kernel biomass accumulation (KGR). It is also related to the dynamics of water and hence is physiologically linked to the maximum kernel water content (MWC), kernel desiccation rate (KDR), and moisture concentration at physiological maturity (MCPM). This work proposed that principles of phenotypic plasticity can help to consolidate the understanding of the environmental modulation and genetic control of these traits. For that purpose, a maize population of 245 recombinant inbred lines (RILs) was grown under different environmental conditions. Trait plasticity was calculated as the ratio of the variance of each RIL to the overall phenotypic variance of the population of RILs. This work found a hierarchy of plasticities: KDR ≈ GFD > MCPM > KGR > KW > MWC. There was no phenotypic or genetic correlation between traits per se and trait plasticities. MWC, the trait with the lowest plasticity, was the exception, because common quantitative trait loci were found for the trait and its plasticity. Independent genetic control of a trait per se and of its plasticity is a condition for the independent evolution of traits and their plasticities. This potentially allows breeders to select for high or low plasticity in combination with high or low values of economically relevant traits. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
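
    The plasticity measure used above, the variance of each RIL across environments relative to the overall phenotypic variance of the population, can be sketched as follows; the trait data are synthetic.

```python
import numpy as np

def trait_plasticity(trait):
    """Per-RIL plasticity: variance across environments divided by the
    overall phenotypic variance of the whole RIL population.

    trait has shape (n_rils, n_envs); row i holds one RIL's trait value
    in each environment.  Ratios near 0 indicate a stable RIL; larger
    ratios indicate stronger environmental modulation of the trait.
    """
    overall_var = trait.var()          # over all RILs and environments
    per_ril_var = trait.var(axis=1)    # each RIL across environments
    return per_ril_var / overall_var

rng = np.random.default_rng(1)
# 245 RILs (as in the study) grown in 4 hypothetical environments
kernel_weight = rng.normal(300.0, 30.0, size=(245, 4))
plasticity = trait_plasticity(kernel_weight)
```

    Comparing such ratios across traits yields the kind of plasticity hierarchy reported above (KDR ≈ GFD > MCPM > KGR > KW > MWC).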

  14. First Renormalized Parton Distribution Functions from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Lin, Huey-Wen; LP3 Collaboration

    2017-09-01

    We present the first lattice-QCD results on nonperturbatively renormalized parton distribution functions (PDFs). Using X.D. Ji's large-momentum effective theory (LaMET) framework, lattice-QCD hadron structure calculations are able to overcome the longstanding problem of determining the Bjorken-x dependence of PDFs. This has led to numerous additional theoretical works and exciting progress. In this talk, we address a recent development that implements a step missing from prior lattice-QCD calculations: renormalization, its effects on the nucleon matrix elements, and the resultant changes to the calculated distributions.

  15. The CP-PACS Project and Lattice QCD Results

    NASA Astrophysics Data System (ADS)

    Iwasaki, Y.

    The aim of the CP-PACS project was to develop a massively parallel computer for performing numerical research in computational physics with primary emphasis on lattice QCD. The CP-PACS computer with a peak speed of 614 GFLOPS with 2048 processors was completed in September 1996, and has been in full operation since October 1996. We present an overview of the CP-PACS project and describe characteristics of the CP-PACS computer. The CP-PACS has been mainly used for hadron spectroscopy studies in lattice QCD. Main results in lattice QCD simulations are given.

  16. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with support vector machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) on a number of real-world data sets adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer predictions, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  17. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate the usefulness of the method.
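
    A minimal sketch of NIPALS-style kernel PLS component extraction in an RKHS; the RBF kernel, synthetic data, and the final least-squares fit of the response on the scores are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def kernel_pls_scores(K, Y, n_components, n_iter=500, tol=1e-10):
    """Extract PLS score vectors from a centered Gram matrix K (n x n)
    and centered responses Y (n x m), NIPALS style.

    Each round finds a score t with large covariance with Y, then
    deflates K and Y so the next score comes out orthogonal to it.
    """
    K, Y = K.copy(), Y.copy()
    n = K.shape[0]
    T = np.zeros((n, n_components))
    for a in range(n_components):
        u = Y[:, 0].copy()
        for _ in range(n_iter):
            t = K @ u
            t /= np.linalg.norm(t)
            u_new = Y @ (Y.T @ t)
            u_new /= np.linalg.norm(u_new)
            if u_new @ u < 0:
                u_new = -u_new          # fix the sign indeterminacy
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        t = K @ u
        t /= np.linalg.norm(t)
        T[:, a] = t
        P = np.eye(n) - np.outer(t, t)  # projector off the new score
        K = P @ K @ P                   # deflate the kernel matrix
        Y -= np.outer(t, t @ Y)         # deflate the responses
    return T

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 3))
y = np.sin(X[:, :1]) + 0.1 * rng.standard_normal((60, 1))

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-0.5 * sq)                    # RBF Gram matrix
J = np.eye(60) - np.ones((60, 60)) / 60  # centering in feature space
Kc, yc = J @ K @ J, y - y.mean()

T = kernel_pls_scores(Kc, yc, n_components=3)
coef, *_ = np.linalg.lstsq(T, yc, rcond=None)
resid = yc - T @ coef                    # training residual of the PLS fit
```

    The deflation step guarantees mutually orthogonal score vectors, which is what makes the subsequent regression (or a discriminant rule, in the classification extension) on T well behaved.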

  18. Moriond QCD 2013 Experimental Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denisov, Dmitri

    2013-06-28

    The article presents experimental highlights of the Moriond 2013 QCD conference. This was a fantastic conference and the first Moriond QCD since the discovery of the Higgs boson. Many new results about its properties were presented at the conference, with the Higgs-like particle becoming a Higgs, as its properties match those expected for the Higgs boson quite well. There were many new results presented in all experimental areas, including QCD, electroweak physics, studies of the top, bottom and charm quarks, searches for physics beyond the Standard Model, as well as studies of heavy ion collisions. 56 experimental talks were presented at the conference, and it is impossible to cover each result in this summary, so the highlights are limited to what I was able to present in my summary talk on March 16, 2013. The proceedings of the conference cover in depth all the talks presented, and I urge you to get familiar with all of them. The theoretical summary of the conference was given by Michelangelo Mangano, so theory talks are not covered in this article.

  19. Aflatoxin contamination of developing corn kernels.

    PubMed

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus and A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus and A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus and A. parasiticus and aflatoxin production.

  20. Parton distributions and lattice QCD calculations: A community white paper

    NASA Astrophysics Data System (ADS)

    Lin, Huey-Wen; Nocera, Emanuele R.; Olness, Fred; Orginos, Kostas; Rojo, Juan; Accardi, Alberto; Alexandrou, Constantia; Bacchetta, Alessandro; Bozzi, Giuseppe; Chen, Jiunn-Wei; Collins, Sara; Cooper-Sarkar, Amanda; Constantinou, Martha; Del Debbio, Luigi; Engelhardt, Michael; Green, Jeremy; Gupta, Rajan; Harland-Lang, Lucian A.; Ishikawa, Tomomi; Kusina, Aleksander; Liu, Keh-Fei; Liuti, Simonetta; Monahan, Christopher; Nadolsky, Pavel; Qiu, Jian-Wei; Schienbein, Ingo; Schierholz, Gerrit; Thorne, Robert S.; Vogelsang, Werner; Wittig, Hartmut; Yuan, C.-P.; Zanotti, James

    2018-05-01

    In the framework of quantum chromodynamics (QCD), parton distribution functions (PDFs) quantify how the momentum and spin of a hadron are divided among its quark and gluon constituents. Two main approaches exist to determine PDFs. The first approach, based on QCD factorization theorems, realizes a QCD analysis of a suitable set of hard-scattering measurements, often using a variety of hadronic observables. The second approach, based on first-principle operator definitions of PDFs, uses lattice QCD to compute directly some PDF-related quantities, such as their moments. Motivated by recent progress in both approaches, in this document we present an overview of lattice-QCD and global-analysis techniques used to determine unpolarized and polarized proton PDFs and their moments. We provide benchmark numbers to validate present and future lattice-QCD calculations and we illustrate how they could be used to reduce the PDF uncertainties in current unpolarized and polarized global analyses. This document represents a first step towards establishing a common language between the two communities, to foster dialogue and to further improve our knowledge of PDFs.

  1. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS often occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could produce AQS too, AQS were extracted from three groups of chestnut kernels (fresh kernels, non-enzymatically browned kernels, and browned kernels), and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS could be produced during both the enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  2. Performance Characteristics of a Kernel-Space Packet Capture Module

    DTIC Science & Technology

    2010-03-01

    Defense, or the United States Government. AFIT/GCO/ENG/10-03 PERFORMANCE CHARACTERISTICS OF A KERNEL-SPACE PACKET CAPTURE MODULE THESIS Presented to the...3.1.2.3 Prototype. The proof of concept for this research is the design, development, and comparative performance analysis of a kernel-level N2d capture...changes to kernel code 5. Can be used for both user-space and kernel-space capture applications in order to control comparative performance analysis to

  3. Exploring Partonic Structure of Hadrons Using ab initio Lattice QCD Calculations.

    PubMed

    Ma, Yan-Qing; Qiu, Jian-Wei

    2018-01-12

    Following our previous proposal, we construct a class of good "lattice cross sections" (LCSs), from which we can study the partonic structure of hadrons from ab initio lattice QCD calculations. These good LCSs, on the one hand, can be calculated directly in lattice QCD, and on the other hand, can be factorized into parton distribution functions (PDFs) with calculable coefficients, in the same way as QCD factorization for factorizable hadronic cross sections. PDFs could be extracted from QCD global analysis of the lattice QCD generated data of LCSs. We also show that the proposed functions for lattice QCD calculation of PDFs in the literature are special cases of these good LCSs.

  4. Thermal behavior of Charmonium in the vector channel from QCD sum rules

    NASA Astrophysics Data System (ADS)

    Dominguez, C. A.; Loewe, M.; Rojas, J. C.; Zhang, Y.

    2010-11-01

    The thermal evolution of the hadronic parameters of charmonium in the vector channel, i.e. the J/ψ resonance mass, coupling (leptonic decay constant), total width, and continuum threshold, is analyzed in the framework of thermal Hilbert moment QCD sum rules. The continuum threshold s_0 has the same behavior as in all other hadronic channels, i.e. it decreases with increasing temperature until the PQCD threshold s_0 = 4m_Q^2 is reached at T ≃ 1.22 T_c (m_Q is the charm quark mass). The other hadronic parameters behave in a very different way from those of light-light and heavy-light quark systems. The J/ψ mass is essentially constant in a wide range of temperatures, while the total width grows with temperature up to T ≃ 1.04 T_c, beyond which it decreases sharply with increasing T. The resonance coupling is also initially constant, beginning to increase monotonically around T ≃ T_c. This behavior of the total width and of the leptonic decay constant is a strong indication that the J/ψ resonance might survive beyond the critical temperature for deconfinement, in agreement with some recent lattice QCD results.

  5. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
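    The kernelized EM update described above admits a compact numerical sketch: the image is parameterized as x = Kα with a kernel matrix K built from anatomical features, and the standard ML-EM multiplicative update is applied to the coefficients α. The toy sizes, the random system matrix, and the one-dimensional "anatomical feature" below are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy problem sizes: n image pixels, m detector bins (assumptions).
    n, m = 50, 40
    A = rng.uniform(0.0, 1.0, size=(m, n))      # system (projection) matrix
    x_true = rng.uniform(0.5, 2.0, size=n)      # ground-truth image
    y = rng.poisson(A @ x_true).astype(float)   # noisy sinogram counts

    # Kernel matrix from anatomical side information: a 1-D feature per pixel
    # stands in for intensities of a co-registered anatomical image.
    f = rng.normal(size=n)
    d2 = (f[:, None] - f[None, :]) ** 2
    K = np.exp(-d2 / (2 * 0.5 ** 2))            # Gaussian kernel on features
    K /= K.sum(axis=1, keepdims=True)           # row-normalize (a common choice)

    # Kernelized ML-EM: represent the image as x = K @ alpha, update alpha.
    alpha = np.ones(n)
    sens = K.T @ (A.T @ np.ones(m))             # sensitivity term of A @ K
    for _ in range(50):
        x = K @ alpha
        ratio = y / np.maximum(A @ x, 1e-12)
        alpha *= (K.T @ (A.T @ ratio)) / np.maximum(sens, 1e-12)

    x_hat = K @ alpha                            # reconstructed image
    ```

    Because this is ordinary ML-EM applied to the composite matrix AK, the Poisson log-likelihood increases monotonically, which is what makes the formulation amenable to ordered subsets.
    
    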

  6. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  7. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  8. Embedded real-time operating system micro kernel design

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: a critical section process, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management, and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here specifies the position, definition, function, and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.

  9. Most Strange Dibaryon from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Gongyo, Shinya; Sasaki, Kenji; Aoki, Sinya; Doi, Takumi; Hatsuda, Tetsuo; Ikeda, Yoichi; Inoue, Takashi; Iritani, Takumi; Ishii, Noriyoshi; Miyamoto, Takaya; Nemura, Hidekatsu; HAL QCD Collaboration

    2018-05-01

    The ΩΩ system in the ¹S₀ channel (the most strange dibaryon) is studied on the basis of the (2+1)-flavor lattice QCD simulations with a large volume (8.1 fm)³ and nearly physical pion mass mπ ≃ 146 MeV at a lattice spacing of a ≃ 0.0846 fm. We show that lattice QCD data analysis by the HAL QCD method leads to the scattering length a₀ = 4.6(6)(+1.2/−0.5) fm, the effective range reff = 1.27(3)(+0.06/−0.03) fm, and the binding energy BΩΩ = 1.6(6)(+0.7/−0.6) MeV. These results indicate that the ΩΩ system has an overall attraction and is located near the unitary regime. Such a system can be best searched experimentally by the pair-momentum correlation in relativistic heavy-ion collisions.

  10. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference algorithm KTD(λ) for neural decoding. KTD(λ) is an online, kernel-based learning algorithm introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational cost, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
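    A minimal sketch of the kernel TD idea on a toy task (the 1-D chain, step size, and learning rate below are assumptions for illustration, not the paper's brain-machine-interface setting): the value function is a growing kernel expansion, and each TD error adds a new Gaussian unit centered at the visited state:

    ```python
    import numpy as np

    def value(s, centers, theta, sigma=0.1):
        """Kernel expansion V(s) = sum_i theta_i k(s, c_i) with a Gaussian
        (strictly positive definite) kernel."""
        if not centers:
            return 0.0
        c = np.asarray(centers)
        w = np.asarray(theta)
        return float(w @ np.exp(-((s - c) ** 2) / (2 * sigma ** 2)))

    # Toy 1-D chain task: the agent walks right in steps of 0.1 and
    # receives reward 1 on reaching s = 1 (all values are assumptions).
    gamma, eta = 0.9, 0.2
    centers, theta = [], []

    for episode in range(200):
        s = 0.0
        while s < 1.0:
            s_next = min(s + 0.1, 1.0)
            r = 1.0 if s_next >= 1.0 else 0.0
            delta = r + gamma * value(s_next, centers, theta) \
                    - value(s, centers, theta)
            # The expansion grows online: the visited state becomes a new
            # kernel center whose coefficient is the scaled TD error.
            centers.append(s)
            theta.append(eta * delta)
            s = s_next
    ```

    After training, states closer to the rewarded end of the chain carry higher estimated value, as TD learning propagates the terminal reward backwards through the kernel expansion.
    
    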

  11. Online selective kernel-based temporal difference learning.

    PubMed

    Chen, Xingguo; Gao, Yang; Wang, Ruili

    2013-12-01

    In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel-distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex than other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking whether a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time than other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD reaches a final optimum competitive with the up-to-date algorithms.
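    The kernel-distance-based online sparsification can be sketched as follows; the threshold value, kernel width, and data stream are illustrative assumptions. A sample enters the dictionary only when its feature-space distance to every stored sample exceeds the threshold, so each sample costs one pass over the dictionary:

    ```python
    import numpy as np

    def kernel(x, y, sigma=1.0):
        """Gaussian kernel between two sample vectors."""
        return float(np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)))

    def sparsify(stream, mu=0.1, sigma=1.0):
        """Kernel-distance-based online sparsification: a sample joins the
        dictionary only if its squared feature-space distance to every
        stored sample exceeds the threshold mu (O(n) per sample)."""
        dictionary = []
        for x in stream:
            if not dictionary:
                dictionary.append(x)
                continue
            # Squared distance in feature space: k(x,x) - 2k(x,d) + k(d,d).
            d2 = [kernel(x, x, sigma) - 2 * kernel(x, d, sigma) + kernel(d, d, sigma)
                  for d in dictionary]
            if min(d2) > mu:
                dictionary.append(x)
        return dictionary

    rng = np.random.default_rng(0)
    stream = rng.uniform(-1, 1, size=(500, 2))   # assumed 2-D state stream
    D = sparsify(stream, mu=0.1)
    ```

    The dictionary size depends only on how densely the stream covers the state space, not on the stream length, which is what keeps the value-function approximator compact.
    
    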

  12. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJkg(-1) to 159 kJkg(-1). Many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  13. Kernel-based Linux emulation for Plan 9.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  14. Gradient-based adaptation of general gaussian kernels.

    PubMed

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
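    A small sketch of the two ingredients named above, under assumed dimensions: the exponential map turns any symmetric parameter matrix into a valid (symmetric positive definite) kernel metric, and restricting the parameter to the trace-zero subspace fixes det M = 1, pinning down the overall kernel size during adaptation:

    ```python
    import numpy as np

    def sym_expm(S):
        """Matrix exponential of a symmetric matrix via eigendecomposition."""
        w, V = np.linalg.eigh(S)
        return (V * np.exp(w)) @ V.T

    def general_gauss_kernel(x, y, M):
        """General Gaussian kernel k(x, y) = exp(-(x - y)^T M (x - y))."""
        d = x - y
        return float(np.exp(-d @ M @ d))

    rng = np.random.default_rng(0)
    S = rng.normal(size=(3, 3))
    S = (S + S.T) / 2                   # arbitrary symmetric parameter matrix

    # Exponential map: M = exp(S) is always SPD, so unconstrained gradient
    # steps on S never leave the set of valid kernel metrics.
    M = sym_expm(S)

    # Constant-trace subspace: removing the trace of S fixes
    # det(M0) = exp(tr S0) = 1, controlling the kernel size.
    S0 = S - (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])
    M0 = sym_expm(S0)
    ```

    Gradient steps can then be taken on the unconstrained entries of S (or S0), with the map guaranteeing that every iterate corresponds to a legitimate Gaussian kernel.
    
    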

  15. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
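    The positive-eigenvalue filtering described above can be sketched numerically; the fractional power, data sizes, and sign convention for the non-PSD "kernel" below are illustrative assumptions, not the FERET/PIE pipeline:

    ```python
    import numpy as np

    def frac_poly_kernel(X, Y, d=0.8):
        """Fractional power polynomial model: not guaranteed to be a kernel,
        since its Gram matrix need not be positive semidefinite."""
        g = X @ Y.T
        return np.sign(g) * np.abs(g) ** d

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))            # toy feature vectors (assumption)

    K = frac_poly_kernel(X, X)
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    Kc = J @ K @ J                          # Gram matrix centered in feature space

    # Keep only eigenvectors with (sufficiently) positive eigenvalues so the
    # derived kernel PCA features are real.
    w, V = np.linalg.eigh((Kc + Kc.T) / 2)
    keep = w > 1e-6 * np.abs(w).max()
    A = V[:, keep] / np.sqrt(w[keep])       # normalized dual coefficients

    Z = Kc @ A                              # kernel PCA projections of the data
    ```

    Dropping the non-positive part of the spectrum is exactly what keeps the projections real even though the fractional power polynomial model is not a proper kernel.
    
    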

  16. Symmetric and anti-symmetric LS hyperon potentials from lattice QCD

    NASA Astrophysics Data System (ADS)

    Ishii, Noriyoshi; Murano, Keiko; Nemura, Hidekatsu; Sasaki, Kenji; Inoue, Takashi; HAL QCD Collaboration

    2014-09-01

    We present recent results of odd-parity hyperon-hyperon potentials from lattice QCD. By using HAL QCD method, we generate hyperon-hyperon potentials from Nambu-Bethe-Salpeter (NBS) wave functions generated by lattice QCD simulation in the flavor SU(3) limit. Potentials in the irreducible flavor SU(3) representations are combined to make a Lambda-N potential which has a strong symmetric LS potential and a weak anti-symmetric LS potential. We discuss a possible cancellation between symmetric and anti-symmetric LS (Lambda-N) potentials after the coupled Sigma-N sector is integrated out. This work is supported by JSPS KAKENHI Grant Number 25400244.

  17. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    PubMed

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22, and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  18. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
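    Trace ratio maximization itself can be sketched with the standard iterative eigen-decomposition scheme (the matrices below are random stand-ins, not kernel-derived scatter matrices from MKL-TR):

    ```python
    import numpy as np

    def trace_ratio(A, B, d, iters=30):
        """Solve max_{W^T W = I} tr(W^T A W) / tr(W^T B W) by the standard
        iteration: eigen-decompose A - lam*B, take the top-d eigenvectors,
        and update lam with the resulting ratio until it stabilizes."""
        lam = 0.0
        for _ in range(iters):
            _, V = np.linalg.eigh(A - lam * B)
            W = V[:, -d:]               # top-d eigenvectors (eigh is ascending)
            lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        return W, lam

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 6)); A = X @ X.T              # stand-in 'target' scatter
    Y = rng.normal(size=(6, 6)); B = Y @ Y.T + np.eye(6)  # regularized 'penalty' scatter

    W, lam = trace_ratio(A, B, d=2)
    ```

    The iteration is known to increase the ratio monotonically; the regularization term added to B plays the role of the regularizer that MKL-TR introduces to avoid the ill-posedness noted for MKL-DR.
    
    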

  19. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
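    The contrast between Dirac delta and Gaussian-kernel particles can be sketched in one dimension; the particle count, kernel width (kept below the ~12% guideline), and uniform particle positions are illustrative assumptions:

    ```python
    import numpy as np

    L = 1.0                       # domain size
    n_part = 100
    h = 0.05 * L                  # kernel width, well below 12% of the domain
    x_grid = np.linspace(0, L, 200)

    rng = np.random.default_rng(0)
    particles = rng.uniform(0, L, size=n_part)
    mass = 1.0 / n_part           # each particle carries equal mass

    # Dirac particles -> binned (histogram) concentration
    hist, edges = np.histogram(particles, bins=20, range=(0, L))
    c_dirac = hist * mass / (L / 20)

    # Gaussian-kernel particles -> smooth concentration field
    def gauss(x, mu, h):
        return np.exp(-((x - mu) ** 2) / (2 * h * h)) / (np.sqrt(2 * np.pi) * h)

    c_kernel = mass * gauss(x_grid[:, None], particles[None, :], h).sum(axis=1)
    ```

    The kernel representation yields a smooth field from far fewer particles than a comparable histogram resolution would demand, at the cost of a small mass leak past the domain boundaries when kernels overlap them.
    
    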

  20. Most Strange Dibaryon from Lattice QCD.

    PubMed

    Gongyo, Shinya; Sasaki, Kenji; Aoki, Sinya; Doi, Takumi; Hatsuda, Tetsuo; Ikeda, Yoichi; Inoue, Takashi; Iritani, Takumi; Ishii, Noriyoshi; Miyamoto, Takaya; Nemura, Hidekatsu

    2018-05-25

    The ΩΩ system in the ¹S₀ channel (the most strange dibaryon) is studied on the basis of the (2+1)-flavor lattice QCD simulations with a large volume (8.1 fm)³ and nearly physical pion mass mπ ≃ 146 MeV at a lattice spacing of a ≃ 0.0846 fm. We show that lattice QCD data analysis by the HAL QCD method leads to the scattering length a₀ = 4.6(6)(+1.2/−0.5) fm, the effective range reff = 1.27(3)(+0.06/−0.03) fm, and the binding energy BΩΩ = 1.6(6)(+0.7/−0.6) MeV. These results indicate that the ΩΩ system has an overall attraction and is located near the unitary regime. Such a system can be best searched experimentally by the pair-momentum correlation in relativistic heavy-ion collisions.

  1. Disconnected Diagrams in Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun

    In this work, we present state-of-the-art numerical methods and their applications for computing a particular class of observables using lattice quantum chromodynamics (Lattice QCD), a discretized version of the fundamental theory of quarks and gluons. These observables require calculating so-called "disconnected diagrams" and are important for understanding many aspects of hadron structure, such as the strange content of the proton. We begin by introducing the reader to the key concepts of Lattice QCD and rigorously define the meaning of disconnected diagrams through an example of the Wick contractions of the nucleon. Subsequently, the calculation of observables requiring disconnected diagrams is posed as the computationally challenging problem of finding the trace of the inverse of an incredibly large, sparse matrix. This is followed by a brief primer of numerical sparse matrix techniques that overviews broadly used methods in Lattice QCD and builds the background for the novel algorithm presented in this work. We then introduce singular value deflation as a method to improve convergence of trace estimation and analyze its effects on matrices from a variety of fields, including chemical transport modeling, magnetohydrodynamics, and QCD. Finally, we apply this method to compute observables such as the strange axial charge of the proton and strange sigma terms in light nuclei. The work in this thesis is innovative for four reasons. First, we analyze the effects of deflation with a model that makes qualitative predictions about its effectiveness, taking only the singular value spectrum as input, and compare deflated variance with different types of trace estimator noise. Second, the synergy between probing methods and deflation is investigated both experimentally and theoretically. Third, we use the synergistic combination of deflation and a graph coloring algorithm known as hierarchical probing to conduct a lattice calculation of light disconnected matrix
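    The combination of stochastic trace estimation and deflation can be sketched on a dense stand-in matrix (the toy SPD matrix, deflated subspace size, and sample counts below are assumptions; production Lattice QCD code uses sparse solves, never an explicit inverse):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy SPD matrix standing in for the Dirac operator (assumption).
    n = 200
    B = rng.normal(size=(n, n)) / np.sqrt(n)
    M = B @ B.T + 0.1 * np.eye(n)
    Minv = np.linalg.inv(M)                      # explicit here; solves in practice

    def hutchinson(Ainv, n_samples, rng, deflate=None):
        """Estimate tr(Ainv) with Z2 (Rademacher) noise; optionally handle a
        low-rank subspace U exactly and project the noise off it, which
        removes that subspace's contribution to the stochastic variance."""
        n = Ainv.shape[0]
        exact = 0.0
        P = np.eye(n)
        if deflate is not None:
            U = deflate                          # orthonormal columns
            exact = np.trace(U.T @ Ainv @ U)     # exact trace on the subspace
            P = np.eye(n) - U @ U.T              # project noise off the subspace
        est = []
        for _ in range(n_samples):
            z = P @ rng.choice([-1.0, 1.0], size=n)
            est.append(z @ (Ainv @ z))
        return exact + float(np.mean(est))

    # Deflate the largest eigenvectors of Minv (the low modes of M): they
    # dominate both the trace and the estimator variance.
    w, V = np.linalg.eigh(Minv)
    U = V[:, -20:]

    t_exact = np.trace(Minv)
    t_plain = hutchinson(Minv, 100, np.random.default_rng(1))
    t_defl = hutchinson(Minv, 100, np.random.default_rng(1), deflate=U)
    ```

    The estimator remains unbiased because the exactly computed subspace trace and the projected stochastic part sum to the full trace; deflation only shrinks the variance.
    
    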

  2. On microscopic structure of the QCD vacuum

    NASA Astrophysics Data System (ADS)

    Pak, D. G.; Lee, Bum-Hoon; Kim, Youngman; Tsukioka, Takuya; Zhang, P. M.

    2018-05-01

    We propose a new class of regular stationary axially symmetric solutions in pure QCD which correspond to monopole-antimonopole pairs at macroscopic scale. The solutions represent vacuum field configurations which are locally stable against quantum gluon fluctuations in any small space-time vicinity. This implies that the monopole-antimonopole pair can serve as a structural element in a microscopic description of QCD vacuum formation.

  3. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to optimize the recognition accuracy and improve the efficiency of maize kernel breakage detection, this paper applies computer vision technology and detects maize kernel breakage with a K-means clustering algorithm. First, the collected RGB images are converted into Lab images, and the clarity of the original images is evaluated with the energy function of the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using different pixel acquisition equipment and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The clarity evaluation of the original images and the different shooting angles are used to verify that the clarity and shooting angles of the images have a direct influence on the feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
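    A minimal sketch of the color-based K-means separation, with synthetic "Lab" pixel clusters standing in for intact and broken kernel pixels (the cluster means and spreads are invented for illustration):

    ```python
    import numpy as np

    def kmeans(X, k=2, iters=50, seed=0):
        """Plain Lloyd's algorithm: assign points to the nearest center,
        then move each center to the mean of its points."""
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    C[j] = X[labels == j].mean(axis=0)
        return labels, C

    # Synthetic Lab pixels: the exposed interior of a broken kernel is
    # brighter (higher L) than the intact surface (values are assumptions).
    rng = np.random.default_rng(1)
    intact = rng.normal([60.0, 5.0, 40.0], 3.0, size=(300, 3))   # L, a, b
    broken = rng.normal([85.0, 2.0, 15.0], 3.0, size=(60, 3))
    X = np.vstack([intact, broken])

    labels, C = kmeans(X, k=2)
    ```

    On real images the same clustering runs on the Lab pixels of each segmented kernel; the cluster with the brighter center flags the exposed broken surface.
    
    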

  4. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  5. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Reliable semiclassical computations in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dine, Michael; Department of Physics, Stanford University Stanford, California 94305-4060; Festuccia, Guido

    We revisit the question of whether or not one can perform reliable semiclassical QCD computations at zero temperature. We study correlation functions with no perturbative contributions, and organize the problem by means of the operator product expansion, establishing a precise criterion for the validity of a semiclassical calculation. For N_f > N, a systematic computation is possible; for N_f […] QCD lattice gauge theory computations in the chiral limit.

  7. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs for computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  8. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a μC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits: a free license for educational use and built-in time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluating the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under the RTOS.

  9. A Robustness Testing Campaign for IMA-SP Partitioning Kernels

    NASA Astrophysics Data System (ADS)

    Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David

    2015-09-01

    With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.

  10. Searching for efficient Markov chain Monte Carlo proposal kernels

    PubMed Central

    Yang, Ziheng; Rodríguez, Carlos E.

    2013-01-01

    Markov chain Monte Carlo (MCMC) or the Metropolis–Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
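
    A Bactrian proposal is a symmetric two-humped mixture of normals centred at ±m·σ away from the current value, so it rarely proposes points close to the current state. A minimal sketch of a Metropolis sampler with such a kernel (the parameter values are illustrative defaults, not the paper's tuned settings):

```python
import math
import random

def bactrian_step(x, sigma=1.0, m=0.95):
    """Draw from a symmetric Bactrian proposal: an equal mixture of two
    normals centred at x - m*sigma and x + m*sigma, with within-component
    standard deviation chosen so the overall proposal std dev is sigma."""
    s = sigma * math.sqrt(1.0 - m * m)
    sign = 1.0 if random.random() < 0.5 else -1.0
    return x + sign * m * sigma + random.gauss(0.0, s)

def metropolis(logpi, x0, steps, sigma=1.0, m=0.95):
    """Metropolis sampler with the Bactrian proposal.  The proposal is
    symmetric, so the acceptance ratio needs only the target density."""
    x, chain = x0, []
    for _ in range(steps):
        y = bactrian_step(x, sigma, m)
        if math.log(random.random()) < logpi(y) - logpi(x):
            x = y  # accept the move
        chain.append(x)
    return chain
```

    Sampling a standard normal target with `logpi = lambda z: -0.5 * z * z` recovers mean 0 and variance 1, which is one easy sanity check on the kernel's validity.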

  11. Defect Analysis Of Quality Palm Kernel Meal Using Statistical Quality Control In Kernels Factory

    NASA Astrophysics Data System (ADS)

    Sembiring, M. T.; Marbun, N. J.

    2018-04-01

    Production quality has an important impact on retaining the totality of characteristics of a product or service and on its capability to meet established needs. The quality criteria for Palm Kernel Meal (PKM) set by the kernel factory are as follows: oil content max 8.50%, water content max 12.00%, and impurity content max 4.00%, whereas the measured averages were 8.94% oil content, 5.51% water content, and 8.45% impurity content. To identify defects in the quality of the PKM produced, an analysis using Statistical Quality Control (SQC) was applied. The factory's PKM showed an oil content 0.44% in excess of the predetermined maximum value, and an excessive impurity content of 4.50%. This excess oil and impurity content caused defective production amounting to 854.6078 kg of PKM for oil and 8643.193 kg of PKM for impurity content. From the cause-and-effect diagram and the SQC analysis, the factors leading to poor PKM quality are the amperage and the operating hours of the second-press oil expeller.

  12. Scheme Variations of the QCD Coupling and Hadronic τ Decays

    NASA Astrophysics Data System (ADS)

    Boito, Diogo; Jamin, Matthias; Miravitllas, Ramon

    2016-10-01

    The quantum chromodynamics (QCD) coupling αs is not a physical observable of the theory, since it depends on conventions related to the renormalization procedure. We introduce a definition of the QCD coupling, denoted by α̂s, whose running is explicitly renormalization scheme invariant. The scheme dependence of the new coupling α̂s is parametrized by a single parameter C, related to transformations of the QCD scale Λ. It is demonstrated that appropriate choices of C can lead to substantial improvements in the perturbative prediction of physical observables. As phenomenological applications, we study e+e- scattering and decays of the τ lepton into hadrons, both being governed by the QCD Adler function.

  13. Scuba: scalable kernel-based gene prioritization.

    PubMed

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.

  14. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
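
    For intuition, a Gaussian kernel matrix built from a genotype matrix, plus a simple kernel-averaging variant in the spirit of RKHS KA. The median-distance normalisation and the bandwidth grid are illustrative assumptions, not the exact choices used in the study:

```python
import numpy as np

def gaussian_kernel(X, h=1.0):
    """Gaussian kernel K_ij = exp(-d_ij / (h * median(d))) from a genotype
    matrix X (individuals x markers), where d_ij is the squared Euclidean
    distance.  Scaling by the median off-diagonal distance is one common
    normalisation, used here as an illustrative choice."""
    sq = np.sum(X ** 2, axis=1)
    d = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    d = np.maximum(d, 0.0)                         # guard tiny negatives
    med = np.median(d[np.triu_indices_from(d, k=1)])
    return np.exp(-d / (h * med))

def kernel_average(X, bandwidths=(0.25, 0.5, 1.0, 2.0, 5.0)):
    """Kernel averaging: average Gaussian kernels over a grid of
    bandwidths instead of tuning a single h."""
    return sum(gaussian_kernel(X, h) for h in bandwidths) / len(bandwidths)
```

    Either matrix can then be plugged into a standard kernel-regression or GBLUP-style mixed model as the covariance among individuals.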

  15. Accurate determinations of alpha(s) from realistic lattice QCD.

    PubMed

    Mason, Q; Trottier, H D; Davies, C T H; Foley, K; Gray, A; Lepage, G P; Nobes, M; Shigemitsu, J

    2005-07-29

    We obtain a new value for the QCD coupling constant by combining lattice QCD simulations with experimental data for hadron masses. Our lattice analysis is the first to (1) include vacuum polarization effects from all three light-quark flavors (using MILC configurations), (2) include third-order terms in perturbation theory, (3) systematically estimate fourth and higher-order terms, (4) use an unambiguous lattice spacing, and (5) use an O(a²)-accurate QCD action. We use 28 different (but related) short-distance quantities to obtain α_MS̄^(5)(M_Z) = 0.1170(12).

  16. Control of Early Flame Kernel Growth by Multi-Wavelength Laser Pulses for Enhanced Ignition

    DOE PAGES

    Dumitrache, Ciprian; VanOsdol, Rachel; Limbach, Christopher M.; ...

    2017-08-31

    The present contribution examines the impact of plasma dynamics and plasma-driven fluid dynamics on the flame growth of laser ignited mixtures and shows that a new dual-pulse scheme can be used to control the kernel formation process in ways that extend the lean ignition limit. We do this by performing a comparative study between (conventional) single-pulse laser ignition (λ = 1064 nm) and a novel dual-pulse method based on combining an ultraviolet (UV) pre-ionization pulse (λ = 266 nm) with an overlapped near-infrared (NIR) energy addition pulse (λ = 1064 nm). We employ OH* chemiluminescence to visualize the evolution of the early flame kernel. For single-pulse laser ignition at lean conditions, the flame kernel separates through third lobe detachment, corresponding to high strain rates that extinguish the flame. In this work, we investigate the capabilities of the dual-pulse to control the plasma-driven fluid dynamics by adjusting the axial offset of the two focal points. In particular, we find there exists a beam waist offset whereby the resulting vorticity suppresses formation of the third lobe, consequently reducing flame stretch. With this approach, we demonstrate that the dual-pulse method enables reduced flame speeds (at early times), an extended lean limit, increased combustion efficiency, and decreased laser energy requirements.

  17. Control of Early Flame Kernel Growth by Multi-Wavelength Laser Pulses for Enhanced Ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitrache, Ciprian; VanOsdol, Rachel; Limbach, Christopher M.

    The present contribution examines the impact of plasma dynamics and plasma-driven fluid dynamics on the flame growth of laser ignited mixtures and shows that a new dual-pulse scheme can be used to control the kernel formation process in ways that extend the lean ignition limit. We do this by performing a comparative study between (conventional) single-pulse laser ignition (λ = 1064 nm) and a novel dual-pulse method based on combining an ultraviolet (UV) pre-ionization pulse (λ = 266 nm) with an overlapped near-infrared (NIR) energy addition pulse (λ = 1064 nm). We employ OH* chemiluminescence to visualize the evolution of the early flame kernel. For single-pulse laser ignition at lean conditions, the flame kernel separates through third lobe detachment, corresponding to high strain rates that extinguish the flame. In this work, we investigate the capabilities of the dual-pulse to control the plasma-driven fluid dynamics by adjusting the axial offset of the two focal points. In particular, we find there exists a beam waist offset whereby the resulting vorticity suppresses formation of the third lobe, consequently reducing flame stretch. With this approach, we demonstrate that the dual-pulse method enables reduced flame speeds (at early times), an extended lean limit, increased combustion efficiency, and decreased laser energy requirements.

  18. Control of Early Flame Kernel Growth by Multi-Wavelength Laser Pulses for Enhanced Ignition.

    PubMed

    Dumitrache, Ciprian; VanOsdol, Rachel; Limbach, Christopher M; Yalin, Azer P

    2017-08-31

    The present contribution examines the impact of plasma dynamics and plasma-driven fluid dynamics on the flame growth of laser ignited mixtures and shows that a new dual-pulse scheme can be used to control the kernel formation process in ways that extend the lean ignition limit. We perform a comparative study between (conventional) single-pulse laser ignition (λ = 1064 nm) and a novel dual-pulse method based on combining an ultraviolet (UV) pre-ionization pulse (λ = 266 nm) with an overlapped near-infrared (NIR) energy addition pulse (λ = 1064 nm). We employ OH* chemiluminescence to visualize the evolution of the early flame kernel. For single-pulse laser ignition at lean conditions, the flame kernel separates through third lobe detachment, corresponding to high strain rates that extinguish the flame. In this work, we investigate the capabilities of the dual-pulse to control the plasma-driven fluid dynamics by adjusting the axial offset of the two focal points. In particular, we find there exists a beam waist offset whereby the resulting vorticity suppresses formation of the third lobe, consequently reducing flame stretch. With this approach, we demonstrate that the dual-pulse method enables reduced flame speeds (at early times), an extended lean limit, increased combustion efficiency, and decreased laser energy requirements.

  19. Sepsis mortality prediction with the Quotient Basis Kernel.

    PubMed

    Ribas Ripoll, Vicent J; Vellido, Alfredo; Romero, Enrique; Ruiz-Rodríguez, Juan Carlos

    2014-05-01

    This paper presents an algorithm to assess the risk of death in patients with sepsis. Sepsis is a common clinical syndrome in the intensive care unit (ICU) that can lead to severe sepsis, septic shock or multi-organ failure. The proposed algorithm may be implemented as part of a clinical decision support system that can be used in combination with the scores deployed in the ICU to improve the accuracy, sensitivity and specificity of mortality prediction for patients with sepsis. In this paper, we used the Simplified Acute Physiology Score (SAPS) for ICU patients and the Sequential Organ Failure Assessment (SOFA) to build our kernels and algorithms. In the proposed method, we embed the available data in a suitable feature space and use algorithms based on linear algebra, geometry and statistics for inference. We present a simplified version of the Fisher kernel (practical Fisher kernel for multinomial distributions), as well as a novel kernel that we named the Quotient Basis Kernel (QBK). These kernels are used as the basis for mortality prediction using soft-margin support vector machines. The two new kernels presented are compared against other generative kernels based on the Jensen-Shannon metric (centred, exponential and inverse) and other widely used kernels (linear, polynomial and Gaussian). Clinical relevance is also evaluated by comparing these results with logistic regression and the standard clinical prediction method based on the initial SAPS score. As described in this paper, we tested the new methods via cross-validation with a cohort of 400 test patients. The results obtained using our methods compare favourably with those obtained using alternative kernels (80.18% accuracy for the QBK) and the standard clinical prediction methods based on the basal SAPS score or logistic regression (71.32% and 71.55%, respectively). The QBK presented a sensitivity of 79.34% and a specificity of 83.24%, which outperformed the other kernels.

  20. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data yield promising results and show the usefulness of our approach.

  1. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
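
    The standard distance-to-collision biasing that the record starts from can be sketched as follows; here `p` is the transform parameter and `mu` the direction cosine toward the detector. This is an illustrative 1-D sketch of the exponential transform with its weight correction, not the anisotropic-scattering extension the record develops:

```python
import math
import random

def biased_flight(sigma_t, p, mu, rng=random):
    """Sample a distance-to-collision with the exponential transform.
    True kernel: sigma_t * exp(-sigma_t * s).  The biased kernel uses a
    stretched cross section sigma_b = sigma_t * (1 - p * mu), lengthening
    flights in the preferred direction (mu > 0); the returned weight keeps
    the estimator unbiased."""
    sigma_b = sigma_t * (1.0 - p * mu)
    s = -math.log(rng.random()) / sigma_b  # distance drawn from biased pdf
    # Weight = true pdf / biased pdf evaluated at the sampled distance.
    w = (sigma_t / sigma_b) * math.exp(-(sigma_t - sigma_b) * s)
    return s, w
```

    Averaging `w * f(s)` over biased samples reproduces the expectation of `f(s)` under the true kernel, which is the defining property of the transform.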

  2. Critical opalescence in baryonic QCD matter.

    PubMed

    Antoniou, N G; Diakonos, F K; Kapoyannis, A S; Kousouris, K S

    2006-07-21

    We show that critical opalescence, a clear signature of second-order phase transition in conventional matter, manifests itself as critical intermittency in QCD matter produced in experiments with nuclei. This behavior is revealed in transverse momentum spectra as a pattern of power laws in factorial moments, to all orders, associated with baryon production. This phenomenon together with a similar effect in the isoscalar sector of pions (sigma mode) provide us with a set of observables associated with the search for the QCD critical point in experiments with nuclei at high energies.

  3. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to those of a theoretical damping kernel but not of a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  4. Bose-Fermi degeneracies in large N adjoint QCD

    DOE PAGES

    Basar, Gokce; Cherman, Aleksey; McGady, David

    2015-07-06

    Here, we analyze the large N limit of adjoint QCD, an SU(N) gauge theory with N_f flavors of massless adjoint Majorana fermions, compactified on S³ × S¹. We focus on the weakly-coupled confining small-S³ regime. If the fermions are given periodic boundary conditions on S¹, we show that there are large cancellations between bosonic and fermionic contributions to the twisted partition function. These cancellations follow a pattern previously seen in the context of misaligned supersymmetry, and lead to the absence of Hagedorn instabilities for any S¹ size L, even though the bosonic and fermionic densities of states both have Hagedorn growth. Adjoint QCD stays in the confining phase for any L ~ N⁰, explaining how it is able to enjoy large N volume independence for any L. The large N boson-fermion cancellations take place in a setting where adjoint QCD is manifestly non-supersymmetric at any finite N, and are consistent with the recent conjecture that adjoint QCD has emergent fermionic symmetries in the large N limit.

  5. Dropping macadamia nuts-in-shell reduces kernel roasting quality.

    PubMed

    Walton, David A; Wallace, Helen M

    2010-10-01

    Macadamia nuts ('nuts-in-shell') are subjected to many impacts from dropping during postharvest handling, resulting in damage to the raw kernel. The effect of dropping on roasted kernel quality is unknown. Macadamia nuts-in-shell were dropped in various combinations of moisture content, number of drops and receiving surface in three experiments. After dropping, samples from each treatment and undropped controls were dry oven-roasted for 20 min at 130 °C, and kernels were assessed for colour, mottled colour and surface damage. Dropping nuts-in-shell onto a bed of nuts-in-shell at 3% moisture content or 20% moisture content increased the percentage of dark roasted kernels. Kernels from nuts dropped first at 20%, then 10% moisture content, onto a metal plate had increased mottled colour. Dropping nuts-in-shell at 3% moisture content onto nuts-in-shell significantly increased surface damage. Similarly, surface damage increased for kernels dropped onto a metal plate at 20%, then at 10% moisture content. Postharvest dropping of macadamia nuts-in-shell causes concealed cellular damage to kernels, the effects of which are not evident until roasting. This damage provides the reagents needed for non-enzymatic browning reactions. Improvements in handling, such as reducing the number of drops and improving handling equipment, will reduce cellular damage and after-roast darkening. Copyright © 2010 Society of Chemical Industry.

  6. Biochemical and molecular characterization of Avena indolines and their role in kernel texture.

    PubMed

    Gazza, Laura; Taddei, Federica; Conti, Salvatore; Gazzelloni, Gloria; Muccilli, Vera; Janni, Michela; D'Ovidio, Renato; Alfieri, Michela; Redaelli, Rita; Pogna, Norberto E

    2015-02-01

    Among cereals, Avena sativa is characterized by an extremely soft endosperm texture, which leads to some negative agronomic and technological traits. On the basis of the well-known softening effect of puroindolines in wheat kernel texture, in this study, indolines and their encoding genes are investigated in Avena species at different ploidy levels. Three novel 14 kDa proteins, showing a central hydrophobic domain with four tryptophan residues and here named vromindolines (VIN)-1, -2 and -3, were identified. Each VIN protein in diploid oat species was found to be synthesized by a single Vin gene, whereas in hexaploid A. sativa, three Vin-1, three Vin-2 and two Vin-3 genes coding for VIN-1, VIN-2 and VIN-3, respectively, were described and assigned to the A, C or D genomes based on similarity to their counterparts in diploid species. Expression of oat vromindoline transgenes in extra-hard durum wheat led to accumulation of vromindolines in the endosperm and caused a reduction in grain hardness of approximately 50%, suggesting a central role for vromindolines in causing the extra-soft texture of oat grain. Further, hexaploid oats showed three orthologous genes coding for avenoindolines A and B, with five or three tryptophan residues, respectively, but very low amounts of avenoindolines were found in mature kernels. The present results identify a novel protein family affecting cereal kernel texture and further elucidate the phylogenetic evolution of the Avena genus.

  7. Compound analysis via graph kernels incorporating chirality.

    PubMed

    Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya

    2010-12-01

    High accuracy is paramount when predicting biochemical characteristics using quantitative structure-property relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds, which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that Support Vector Regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.
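
    For intuition about tree-pattern graph kernels, a plain Weisfeiler-Lehman-style subtree kernel on labelled graphs is sketched below. The chirality-aware kernel in the record additionally distinguishes stereoisomers by taking the spatial ordering of neighbours into account, which this topology-only sketch deliberately does not do:

```python
from collections import Counter

def wl_histogram(adj, labels, iters=2):
    """Weisfeiler-Lehman style relabelling: repeatedly replace each node
    label by the pair (own label, sorted multiset of neighbour labels),
    accumulating a histogram of every label generated along the way."""
    labels = [str(lab) for lab in labels]
    hist = Counter(labels)
    for _ in range(iters):
        labels = [str((labels[i], tuple(sorted(labels[j] for j in adj[i]))))
                  for i in range(len(labels))]
        hist.update(labels)
    return hist

def subtree_kernel(g1, g2, iters=2):
    """Kernel value = number of matching subtree patterns, computed as the
    dot product of the two label histograms."""
    h1 = wl_histogram(*g1, iters=iters)
    h2 = wl_histogram(*g2, iters=iters)
    return sum(c * h2[k] for k, c in h1.items())
```

    Two topologically identical stereoisomers get identical histograms here, which is exactly the limitation a chiral graph kernel is designed to remove.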

  8. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets have demonstrated the effectiveness of our proposed method.

  9. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.

  10. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
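
    Silverman's rule of thumb referred to in the record can be sketched as follows. This is the generic density-estimation rule h = 0.9 · min(sd, IQR/1.34) · n^(-1/5); the exact variant adapted to kernel equating of score distributions may differ:

```python
import statistics

def silverman_bandwidth(scores):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel:
    h = 0.9 * min(standard deviation, IQR / 1.34) * n**(-1/5).
    Using the IQR guards against heavy tails inflating the bandwidth."""
    n = len(scores)
    sd = statistics.stdev(scores)
    q1, _, q3 = statistics.quantiles(scores, n=4)
    iqr = q3 - q1
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)
```

    Unlike the penalty-minimization approach criticized in the record, this closed-form rule needs no iterative search, which is the practical appeal the authors build on.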

  11. Nature and composition of fat bloom from palm kernel stearin and hydrogenated palm kernel stearin compound chocolates.

    PubMed

    Smith, Kevin W; Cain, Fred W; Talbot, Geoff

    2004-08-25

    Palm kernel stearin and hydrogenated palm kernel stearin can be used to prepare compound chocolate bars or coatings. The objective of this study was to characterize the chemical composition, polymorphism, and melting behavior of the bloom that develops on bars of compound chocolate prepared using these fats. Bars were stored for 1 year at 15, 20, or 25 degrees C. At 15 and 20 degrees C the bloom was enriched in cocoa butter triacylglycerols, with respect to the main fat phase, whereas at 25 degrees C the enrichment was with palm kernel triacylglycerols. The bloom consisted principally of solid fat and was sharper melting than was the fat in the chocolate. Polymorphic transitions from the initial beta' phase to the beta phase accompanied the formation of bloom at all temperatures.

  12. QCD Dirac operator at nonzero chemical potential: lattice data and matrix model.

    PubMed

    Akemann, Gernot; Wettig, Tilo

    2004-03-12

    Recently, a non-Hermitian chiral random matrix model was proposed to describe the eigenvalues of the QCD Dirac operator at nonzero chemical potential. This matrix model can be constructed from QCD by mapping it to an equivalent matrix model which has the same symmetries as QCD with chemical potential. Its microscopic spectral correlations are conjectured to be identical to those of the QCD Dirac operator. We investigate this conjecture by comparing large ensembles of Dirac eigenvalues in quenched SU(3) lattice QCD at a nonzero chemical potential to the analytical predictions of the matrix model. Excellent agreement is found in the two regimes of weak and strong non-Hermiticity, for several different lattice volumes.

  13. Matrix theory for baryons: an overview of holographic QCD for nuclear physics.

    PubMed

    Aoki, Sinya; Hashimoto, Koji; Iizuka, Norihiro

    2013-10-01

    We provide, for non-experts, a brief overview of holographic QCD (quantum chromodynamics) and a review of the recent proposal (Hashimoto et al 2010 (arXiv:1003.4988 [hep-th])) of a matrix-like description of multi-baryon systems in holographic QCD. Based on the matrix model, we derive the baryon interaction at short distances in multi-flavor holographic QCD. We show that there is a very universal repulsive core of inter-baryon forces for a generic number of flavors. This is consistent with a recent lattice QCD analysis for N_f = 2, 3 where the repulsive core looks universal. We also provide a comparison of our results with the lattice QCD and the operator product expansion analysis.

  14. Exploring Partonic Structure of Hadrons Using ab initio Lattice QCD Calculations

    DOE PAGES

    Ma, Yan-Qing; Qiu, Jian-Wei

    2018-01-10

    Following our previous proposal, we construct a class of good "lattice cross sections" (LCSs), from which we can study the partonic structure of hadrons from ab initio lattice QCD calculations. These good LCSs, on the one hand, can be calculated directly in lattice QCD, and on the other hand, can be factorized into parton distribution functions (PDFs) with calculable coefficients, in the same way as QCD factorization for factorizable hadronic cross sections. PDFs could be extracted from QCD global analysis of the lattice QCD generated data of LCSs. In conclusion, we also show that the proposed functions for lattice QCD calculation of PDFs in the literature are special cases of these good LCSs.

  15. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
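
    The approximate-linear-dependence (ALD) sparsification mentioned above admits a compact sketch. The snippet below is an illustrative reconstruction, not the authors' code: a sample joins the kernel dictionary only when its feature-space projection error onto the span of the current dictionary exceeds a threshold `nu` (the function names and the RBF kernel choice are assumptions):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """RBF kernel between two points (scalars or 1-D vectors)."""
    d = np.atleast_1d(x) - np.atleast_1d(y)
    return float(np.exp(-gamma * np.dot(d, d)))

def ald_sparsify(samples, nu=0.1, gamma=1.0):
    """Build a sparse kernel dictionary via approximate linear dependence.

    A sample is added only if its feature-space projection error onto the
    span of the current dictionary exceeds the threshold nu.
    """
    dictionary = []
    for x in samples:
        if not dictionary:
            dictionary.append(x)
            continue
        K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
        k = np.array([rbf(a, x, gamma) for a in dictionary])
        # ALD test: delta = k(x, x) - k^T K^{-1} k  (small ridge for stability)
        coef = np.linalg.solve(K + 1e-10 * np.eye(len(K)), k)
        delta = rbf(x, x, gamma) - k @ coef
        if delta > nu:
            dictionary.append(x)
    return dictionary
```

    On densely sampled or redundant data the dictionary stays much smaller than the sample set, which is what keeps the kernel critic computationally tractable.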

  16. Bottomonium suppression using a lattice QCD vetted potential

    NASA Astrophysics Data System (ADS)

    Krouppa, Brandon; Rothkopf, Alexander; Strickland, Michael

    2018-01-01

    We estimate bottomonium yields in relativistic heavy-ion collisions using a lattice QCD vetted, complex-valued, heavy-quark potential embedded in a realistic, hydrodynamically evolving medium background. We find that the lattice-vetted functional form and temperature dependence of the proper heavy-quark potential dramatically reduces the dependence of the yields on parameters other than the temperature evolution, strengthening the picture of bottomonium as a QGP thermometer. Our results also show improved agreement between computed yields and experimental data produced in RHIC 200 GeV/nucleon collisions. For LHC 2.76 TeV/nucleon collisions, the excited states, whose suppression has been used as a vital sign for quark-gluon-plasma production in a heavy-ion collision, are reproduced better than in previous perturbatively-motivated potential models; however, at the highest LHC energies our estimates for bottomonium suppression begin to underestimate the data. Possible paths to remedy this situation are discussed.

  17. Kernel analysis of partial least squares (PLS) regression models.

    PubMed

    Shinzawa, Hideyuki; Ritthiruangdej, Pitiporn; Ozaki, Yukihiro

    2011-05-01

    An analytical technique based on kernel matrix representation is demonstrated to provide further chemically meaningful insight into partial least squares (PLS) regression models. The kernel matrix condenses essential information about scores derived from PLS or principal component analysis (PCA). Thus, it becomes possible to establish the proper interpretation of the scores. A PLS model for the total nitrogen (TN) content in multiple Thai fish sauces is built with a set of near-infrared (NIR) transmittance spectra of the fish sauce samples. The kernel analysis of the scores effectively reveals that the variation of the spectral feature induced by the change in protein content is substantially associated with the total water content and the protein hydration. Kernel analysis is also carried out on a set of time-dependent infrared (IR) spectra representing transient evaporation of ethanol from a binary mixture solution of ethanol and oleic acid. A PLS model to predict the elapsed time is built with the IR spectra and the kernel matrix is derived from the scores. The detailed analysis of the kernel matrix provides penetrating insight into the interaction between the ethanol and the oleic acid.
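
    The kernel-matrix representation of scores described above can be illustrated with plain PCA as a stand-in for the PLS scores used in the paper (all names here are illustrative): the Gram matrix of the score vectors condenses the pairwise relations between samples, and with all components retained it reproduces the centred linear Gram matrix of the data exactly.

```python
import numpy as np

def pca_scores(X, n_components):
    """Scores (projections onto principal axes) of the centred data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (U * s)[:, :n_components]

def score_kernel(T):
    """Kernel (Gram) matrix of the score vectors, K[i, j] = <t_i, t_j>."""
    return T @ T.T
```

    Inspecting `score_kernel` row by row is the kind of sample-pair analysis the paper performs on PLS scores to interpret, e.g., which spectra drive the model.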

  18. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore differences in the number of support vectors and in the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  20. Corn kernel oil and corn fiber oil

    USDA-ARS?s Scientific Manuscript database

    Unlike most edible plant oils that are obtained directly from oil-rich seeds by either pressing or solvent extraction, corn seeds (kernels) have low levels of oil (4%) and commercial corn oil is obtained from the corn germ (embryo) which is an oil-rich portion of the kernel. Commercial corn oil cou...

  1. Parton distributions and lattice QCD calculations: A community white paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Huey-Wen; Nocera, Emanuele R.; Olness, Fred

    In the framework of quantum chromodynamics (QCD), parton distribution functions (PDFs) quantify how the momentum and spin of a hadron are divided among its quark and gluon constituents. Two main approaches exist to determine PDFs. The first approach, based on QCD factorization theorems, realizes a QCD analysis of a suitable set of hard-scattering measurements, often using a variety of hadronic observables. The second approach, based on first-principle operator definitions of PDFs, uses lattice QCD to compute directly some PDF-related quantities, such as their moments. Motivated by recent progress in both approaches, in this paper we present an overview of lattice-QCD and global-analysis techniques used to determine unpolarized and polarized proton PDFs and their moments. We provide benchmark numbers to validate present and future lattice-QCD calculations and we illustrate how they could be used to reduce the PDF uncertainties in current unpolarized and polarized global analyses. Finally, this document represents a first step towards establishing a common language between the two communities, to foster dialogue and to further improve our knowledge of PDFs.

  3. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images from different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as on the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains of up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
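
    The Wiener-filtered kernel generation can be sketched in a few lines. This is an illustrative reconstruction, not the pypher implementation; `mu` stands in for the tunable regularisation parameter described above:

```python
import numpy as np

def psf_matching_kernel(psf_source, psf_target, mu=1e-4):
    """Wiener-regularised convolution kernel K with psf_source * K ≈ psf_target.

    In Fourier space:  K_hat = conj(S_hat) T_hat / (|S_hat|^2 + mu),
    where mu tunes the trade-off between fidelity and noise amplification.
    PSFs are assumed centred on the grid; no Gaussian/circular symmetry is
    imposed, so anisotropic PSF features are carried through.
    """
    S = np.fft.fft2(np.fft.ifftshift(psf_source))
    T = np.fft.fft2(np.fft.ifftshift(psf_target))
    K = np.conj(S) * T / (np.abs(S) ** 2 + mu)
    return np.fft.fftshift(np.real(np.fft.ifft2(K)))
```

    Convolving the source PSF with the returned kernel should reproduce the target PSF up to the regularised (noise-suppressed) frequencies.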

  4. Multiscale Monte Carlo equilibration: Two-color QCD with two fermion flavors

    DOE PAGES

    Detmold, William; Endres, Michael G.

    2016-12-02

    In this study, we demonstrate the applicability of a recently proposed multiscale thermalization algorithm to two-color quantum chromodynamics (QCD) with two mass-degenerate fermion flavors. The algorithm involves refining an ensemble of gauge configurations that had been generated using a renormalization group (RG) matched coarse action, thereby producing a fine ensemble that is close to the thermalized distribution of a target fine action; the refined ensemble is subsequently rethermalized using conventional algorithms. Although the generalization of this algorithm from pure Yang-Mills theory to QCD with dynamical fermions is straightforward, we find that in the latter case, the method is susceptible to numerical instabilities during the initial stages of rethermalization when using the hybrid Monte Carlo algorithm. We find that these instabilities arise from large fermion forces in the evolution, which are attributed to an accumulation of spurious near-zero modes of the Dirac operator. We propose a simple strategy for curing this problem, and demonstrate that rapid thermalization--as probed by a variety of gluonic and fermionic operators--is possible with the use of this solution. Also, we study the sensitivity of rethermalization rates to the RG matching of the coarse and fine actions, and identify effective matching conditions based on a variety of measured scales.

  5. QCD and Asymptotic Freedom:. Perspectives and Prospects

    NASA Astrophysics Data System (ADS)

    Wilczek, Frank

    QCD is now a mature theory, and it is possible to begin to view its place in the conceptual universe of physics with an appropriate perspective. There is a certain irony in the achievements of QCD. For the problems which initially drove its development — specifically, the desire to understand in detail the force that holds atomic nuclei together, and later the desire to calculate the spectrum of hadrons and their interactions — only limited insight has been achieved. However, I shall argue that QCD is actually more special and important a theory than one had any right to anticipate. In many ways, the importance of the solution transcends that of the original motivating problems. After elaborating on these quasiphilosophical remarks, I discuss two current frontiers of physics that illustrate the continuing vitality of the ideas. The recent wealth of beautiful precision experiments measuring the parameters of the standard model have made it possible to consider the unification of couplings in unprecedented quantitative detail. One central result emerging from these developments is a tantalizing hint of virtual supersymmetry. The possibility of phase transitions in matter at temperatures of order ~10² MeV, governed by QCD dynamics, is of interest from several points of view. Besides having a certain intrinsic grandeur, the question “Does the nature of matter change qualitatively, as it is radically heated?” is important for cosmology, relevant to planned high-energy heavy-ion collision experiments, and provides a promising arena for numerical simulations of QCD. Recent numerical work seems to be consistent with expectations suggested by renormalization group analysis of the potential universality classes of the QCD chiral phase transition; specifically, that the transition is second-order for two species of massless quarks but first-order otherwise. There is an interesting possibility of long-range correlations in heavy ion collisions due to the creation of

  6. Unsupervised multiple kernel learning for heterogeneous data integration.

    PubMed

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has allowed researchers to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the datasets produced are often of heterogeneous types, requiring generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with regard to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method in improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
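
    A minimal sketch of the consensus meta-kernel idea, assuming an unweighted average of cosine-normalised kernels followed by kernel PCA (mixKernel learns the combination weights; everything here, including the function names, is an illustrative assumption):

```python
import numpy as np

def cosine_normalize(K):
    """Normalise a kernel matrix so diagonal entries equal 1."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def consensus_meta_kernel(kernels):
    """Unweighted consensus meta-kernel: mean of the normalised kernels."""
    return np.mean([cosine_normalize(K) for K in kernels], axis=0)

def kernel_pca(K, n_components=2):
    """Embed samples from a kernel matrix via top eigenvectors of its centred form."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    w, V = np.linalg.eigh(H @ K @ H)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

    Because each input kernel is PSD and normalisation preserves that, the averaged meta-kernel remains a valid (PSD) kernel, so the downstream kernel PCA is well defined.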

  7. Protein fold recognition using geometric kernel data fusion.

    PubMed

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
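
    A geometry-inspired mean of two kernel matrices, as opposed to a convex linear combination, can be sketched with the standard geometric mean of symmetric positive-definite matrices, A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. This is one of several means the paper considers; the sketch below is illustrative, not the authors' MATLAB code:

```python
import numpy as np

def spd_sqrt(A):
    """Matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.maximum(w, 0))) @ V.T

def geometric_mean(A, B):
    """SPD geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    A_half = spd_sqrt(A)
    A_half_inv = np.linalg.inv(A_half)
    return A_half @ spd_sqrt(A_half_inv @ B @ A_half_inv) @ A_half
```

    Unlike an arithmetic average, this mean is invariant under congruence transformations and reduces to the elementwise geometric mean for commuting (e.g. diagonal) kernels, which is the sense in which it captures the "geometry" of the kernel matrices.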

  8. Proteome analysis of the almond kernel (Prunus dulcis).

    PubMed

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the nutritional and functional importance of almond kernel proteins in human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are mainly involved in metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); that the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%), and structural molecule activity (11.9%); and that the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  9. Control Transfer in Operating System Kernels

    DTIC Science & Technology

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach’s message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply- nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  10. QCD: Quantum Chromodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    The strongest force in the universe is the strong nuclear force and it governs the behavior of quarks and gluons inside protons and neutrons. The name of the theory that governs this force is quantum chromodynamics, or QCD. In this video, Fermilab’s Dr. Don Lincoln explains the intricacies of this dominant component of the Standard Model.

  11. Experimental study of turbulent flame kernel propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark-ignited combustion systems dominate the flame propagation, combustion stability, and performance. They are likely controlled by the spark energy, flow field, and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the shot-to-shot variation of spark energy. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 µs and 2 ms. The data show that the flame kernel structure starts with a spherical shape, changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower jet velocity cases. The growth rate of the average flame kernel radius is divided into two linear regimes; the first one, during the first 100 µs, is almost three times faster than that at the later stage between 100 and 2000 µs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area, and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions.

  12. Bivariate discrete beta Kernel graduation of mortality data.

    PubMed

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric and nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset, based on probabilities of dying recorded for US males, is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
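
    A univariate discrete beta kernel smoother can be sketched as follows. The parameterisation (beta-shaped weights on the age grid, with the mode placed at the evaluation age) is an assumption for illustration, not the authors' exact kernel:

```python
import numpy as np

def discrete_beta_weights(ages, x, h):
    """Beta-shaped weights on a discrete age grid, centred near age x.

    Illustrative parameterisation: alpha = q/h + 1, beta = (1-q)/h + 1,
    with q the normalised position of x on the grid, so the mode sits at q.
    Unlike a symmetric kernel, the shape adapts near the grid boundaries.
    """
    m = ages.max()
    p = (ages + 0.5) / (m + 1)          # grid points mapped into (0, 1)
    q = (x + 0.5) / (m + 1)
    a, b = q / h + 1.0, (1.0 - q) / h + 1.0
    w = p ** (a - 1.0) * (1.0 - p) ** (b - 1.0)
    return w / w.sum()

def graduate(ages, rates, h=0.05):
    """Nonparametric graduation: smooth each raw rate with beta kernel weights."""
    return np.array([discrete_beta_weights(ages, x, h) @ rates for x in ages])
```

    The bivariate generalisation discussed in the paper replaces the weight vector by a product of two such kernels over age and calendar year.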

  13. A Linear Kernel for Co-Path/Cycle Packing

    NASA Astrophysics Data System (ADS)

    Chen, Zhi-Zhong; Fellows, Michael; Fu, Bin; Jiang, Haitao; Liu, Yang; Wang, Lusheng; Zhu, Binhai

    Bounded-Degree Vertex Deletion is a fundamental problem in graph theory that has new applications in computational biology. In this paper, we address a special case of Bounded-Degree Vertex Deletion, the Co-Path/Cycle Packing problem, which asks to delete as few vertices as possible such that the graph of the remaining (residual) vertices is composed of disjoint paths and simple cycles. The problem falls into the well-known class of 'node-deletion problems with hereditary properties', is hence NP-complete, and is unlikely to admit a polynomial-time approximation algorithm with approximation factor smaller than 2. In the framework of parameterized complexity, we present a kernelization algorithm that produces a kernel with at most 37k vertices, improving on the super-linear kernel of Fellows et al.'s general theorem for Bounded-Degree Vertex Deletion. Using this kernel, and the method of bounded search trees, we devise an FPT algorithm that runs in time O*(3.24^k). On the negative side, we show that the problem is APX-hard and unlikely to have a kernel smaller than 2k by a reduction from Vertex Cover.
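
    The problem statement can be made concrete with a tiny feasibility checker and brute-force solver (illustrative only; the paper's contribution is precisely the 37k-vertex kernel and FPT algorithm that avoid this exponential search). The key fact used below: a simple graph is a disjoint union of paths and cycles if and only if no vertex has degree three or more.

```python
from itertools import combinations

def is_paths_and_cycles(n, edges, removed):
    """True if the graph induced on the kept vertices has max degree <= 2."""
    kept = set(range(n)) - set(removed)
    deg = {v: 0 for v in kept}
    for u, v in edges:
        if u in kept and v in kept:
            deg[u] += 1
            deg[v] += 1
    return all(d <= 2 for d in deg.values())

def min_copath_deletion(n, edges):
    """Brute-force smallest vertex set whose deletion leaves paths/cycles.

    Exponential in n; only viable for tiny instances, which is why one
    kernelizes to at most 37k vertices before searching.
    """
    for k in range(n + 1):
        for removed in combinations(range(n), k):
            if is_paths_and_cycles(n, edges, removed):
                return list(removed)
```

    For example, a star K_{1,3} needs one deletion (its centre), while a 4-cycle is already a disjoint union of cycles and needs none.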

  14. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... classifications provided in this section. When the color of kernels in a lot generally conforms to the “light” or “light amber” classification, that color classification may be used to describe the lot in connection with the grade. (1) “Light” means that the outer surface of the kernel is mostly golden color or...

  15. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... classifications provided in this section. When the color of kernels in a lot generally conforms to the “light” or “light amber” classification, that color classification may be used to describe the lot in connection with the grade. (1) “Light” means that the outer surface of the kernel is mostly golden color or...

  16. Lepton-rich cold QCD matter in protoneutron stars

    NASA Astrophysics Data System (ADS)

    Jiménez, J. C.; Fraga, E. S.

    2018-05-01

    We investigate protoneutron star matter using the state-of-the-art perturbative equation of state for cold and dense QCD in the presence of a fixed lepton fraction in which both electrons and neutrinos are included. Besides computing the modifications in the equation of state due to the presence of trapped neutrinos, we show that stable strange quark matter has a more restricted parameter space. We also study the possibility of nucleation of unpaired quark matter in the core of protoneutron stars by matching the lepton-rich QCD pressure onto a hadronic equation of state, namely TM1 with trapped neutrinos. Using the inherent dependence of perturbative QCD on the renormalization scale parameter, we provide a measure of the uncertainty in the observables we compute.

  17. Transverse momentum-dependent parton distribution functions from lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelhardt, Michael; Haegler, Philipp; Musch, Bernhard; Negele, John; Schaefer, Andreas

    Transverse momentum-dependent parton distributions (TMDs) relevant for semi-inclusive deep inelastic scattering (SIDIS) and the Drell-Yan process can be defined in terms of matrix elements of a quark bilocal operator containing a staple-shaped Wilson connection. Starting from such a definition, a scheme to determine TMDs in lattice QCD is developed and explored. Parametrizing the aforementioned matrix elements in terms of invariant amplitudes permits a simple transformation of the problem to a Lorentz frame suited for the lattice calculation. Results for the Sivers and Boer-Mulders transverse momentum shifts are obtained using ensembles at the pion masses 369 MeV and 518 MeV, focusing in particular on the dependence of these shifts on the staple extent and a Collins-Soper-type evolution parameter quantifying proximity of the staples to the light cone.

  18. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
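
    As an illustration of the kernel-based embedding framework surveyed here, the following is a minimal sketch (not the authors' implementation) of kernel PCA with a Gaussian kernel, using NumPy only; the function names and the bandwidth parameter `sigma` are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def kernel_pca(K, n_components=2):
    # Center the kernel matrix in feature space, then eigendecompose it.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Embedding coordinates: eigenvectors scaled by the square roots of the eigenvalues.
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Y = kernel_pca(gaussian_kernel(X, sigma=2.0), n_components=2)
```

    Swapping `gaussian_kernel` for another Mercer kernel from a candidate pool is exactly the degree of freedom whose selection the paper studies.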

  19. Massive QCD Amplitudes at Higher Orders

    NASA Astrophysics Data System (ADS)

    Moch, S.; Mitov, A.

    2007-11-01

    We consider the factorisation properties of on-shell QCD amplitudes with massive partons in the limit when all kinematical invariants are large compared to the parton mass and discuss the structure of their infrared singularities. The dimensionally regulated soft poles and the large collinear logarithms of the parton masses exponentiate to all orders. Based on this factorisation a simple relation between massless and massive scattering amplitudes in gauge theories can be established. We present recent applications of this relation for the calculation of the two-loop virtual QCD corrections to the hadro-production of heavy quarks.

  20. QCD Physics with the CMS Experiment

    NASA Astrophysics Data System (ADS)

    Cerci, S.

    2017-12-01

    Jets, which are the signatures of quarks and gluons in the detector, can be described by Quantum Chromodynamics (QCD) in terms of parton-parton scattering. Jets are abundantly produced at the LHC's high energy scales. Measurements of inclusive jets, dijets and multijets can be used to test perturbative QCD predictions and to constrain parton distribution functions (PDFs), as well as to measure the strong coupling constant αS. The measurements use samples of proton-proton collisions collected with the CMS detector at the LHC at various center-of-mass energies of 7, 8 and 13 TeV.

  1. Hadron mass spectrum from lattice QCD.

    PubMed

    Majumder, Abhijit; Müller, Berndt

    2010-12-17

    Finite temperature lattice simulations of quantum chromodynamics (QCD) are sensitive to the hadronic mass spectrum for temperatures below the "critical" temperature T(c) ≈ 160 MeV. We show that a recent precision determination of the QCD trace anomaly shows evidence for the existence of a large number of hadron states beyond those known from experiment. The lattice results are well represented by an exponentially growing mass spectrum up to a temperature T=155 MeV. Using simple parametrizations of the hadron mass spectrum we show how one may estimate the total spectral weight in these yet unobserved states.

  2. Relationship of source and sink in determining kernel composition of maize

    PubMed Central

    Seebauer, Juliann R.; Singletary, George W.; Krumpelman, Paulette M.; Ruffo, Matías L.; Below, Frederick E.

    2010-01-01

    The relative role of the maternal source and the filial sink in controlling the composition of maize (Zea mays L.) kernels is unclear and may be influenced by the genotype and the N supply. The objective of this study was to determine the influence of assimilate supply from the vegetative source and utilization of assimilates by the grain sink on the final composition of maize kernels. Intermated B73×Mo17 recombinant inbred lines (IBM RILs) which displayed contrasting concentrations of endosperm starch were grown in the field with deficient or sufficient N, and the source supply altered by ear truncation (45% reduction) at 15 d after pollination (DAP). The assimilate supply into the kernels was determined at 19 DAP using the agar trap technique, and the final kernel composition was measured. The influence of N supply and kernel ear position on final kernel composition was also determined for a commercial hybrid. Concentrations of kernel protein and starch could be altered by genotype or the N supply, but remained fairly constant along the length of the ear. Ear truncation also produced a range of variation in endosperm starch and protein concentrations. The C/N ratio of the assimilate supply at 19 DAP was directly related to the final kernel composition, with an inverse relationship between the concentrations of starch and protein in the mature endosperm. The accumulation of kernel starch and protein in maize is uniform along the ear, yet adaptable within genotypic limits, suggesting that kernel composition is source limited in maize. PMID:19917600

  3. Resummed memory kernels in generalized system-bath master equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
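
    To make the resummation idea concrete, here is a hedged toy sketch (not the paper's spin-boson kernels): a second-order series resummed with a [1/1] Padé approximant, which can develop a spurious pole, versus an exponential resummation, which stays finite for all real expansion parameters. The coefficients c1 and c2 are arbitrary illustrative values.

```python
import numpy as np

def pade_11(c1, c2, lam):
    # [1/1] Padé approximant of 1 + c1*lam + c2*lam**2; it is singular where
    # the denominator vanishes, mirroring the divergence discussed above.
    b1 = -c2 / c1
    a1 = c1 + b1
    return (1.0 + a1 * lam) / (1.0 + b1 * lam)

def exp_resum(c1, c2, lam):
    # Exponential (cumulant-style) resummation matching the same series
    # through second order; finite for every real lam.
    return np.exp(c1 * lam + (c2 - 0.5 * c1**2) * lam**2)

c1, c2, lam = -1.0, 2.0, 0.05
series = 1.0 + c1 * lam + c2 * lam**2
```

    Both resummations reproduce the truncated series up to O(lam**3) corrections, but they extrapolate very differently outside the perturbative regime, which is the point the paper quantifies for memory kernels.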

  4. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) network. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions, Min kernel and two pairwise kernels, which are Metric Learning Pairwise Kernel (MLPK) and Tensor Product Pairwise Kernel (TPPK). We also consider the normalization forms of Min kernel. Then, we combine Min kernel or its normalization form and one of the pairwise kernels by plugging. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. Then, we evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of normalized-Min-kernel and MLPK leads to the best F-measure and improved the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
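
    The kernel constructions named above can be sketched directly; the following is an illustrative NumPy version (not the authors' code) of the Min kernel, its normalized form, and the Tensor Product Pairwise Kernel (TPPK), with toy feature vectors standing in for real protein descriptors.

```python
import numpy as np

def min_kernel(x, y):
    # Min (histogram-intersection) kernel on nonnegative feature vectors.
    return float(np.minimum(x, y).sum())

def normalized_min_kernel(x, y):
    # Cosine-style normalization of the Min kernel, so k(x, x) == 1.
    return min_kernel(x, y) / np.sqrt(min_kernel(x, x) * min_kernel(y, y))

def tppk(a, b, c, d, k):
    # Tensor Product Pairwise Kernel: compares protein pair (a, b) with pair
    # (c, d); symmetric under swapping the members within either pair.
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

# Toy feature vectors standing in for, e.g., domain-content profiles.
p1, p2, p3, p4 = (np.array(v, dtype=float) for v in
                  ([1, 0, 2], [0, 1, 1], [1, 1, 0], [2, 0, 1]))
score = tppk(p1, p2, p3, p4, normalized_min_kernel)
```

    The resulting pairwise kernel matrix over candidate pairs is what an SVM such as C-SVC would consume; the MLPK combination mentioned in the abstract follows the same plugging pattern with a different pairwise form.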

  5. Conformal Symmetry as a Template for QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, S

    2004-08-04

    Conformal symmetry is broken in physical QCD; nevertheless, one can use conformal symmetry as a template, systematically correcting for its nonzero {beta} function as well as higher-twist effects. For example, commensurate scale relations which relate QCD observables to each other, such as the generalized Crewther relation, have no renormalization scale or scheme ambiguity and retain a convergent perturbative structure which reflects the underlying conformal symmetry of the classical theory. The "conformal correspondence principle" also dictates the form of the expansion basis for hadronic distribution amplitudes. The AdS/CFT correspondence connecting superstring theory to superconformal gauge theory has important implications for hadron phenomenology in the conformal limit, including an all-orders demonstration of counting rules for hard exclusive processes as well as determining essential aspects of hadronic light-front wavefunctions. Theoretical and phenomenological evidence is now accumulating that QCD couplings based on physical observables such as {tau} decay become constant at small virtuality; i.e., effective charges develop an infrared fixed point in contradiction to the usual assumption of singular growth in the infrared. The near-constant behavior of effective couplings also suggests that QCD can be approximated as a conformal theory even at relatively small momentum transfer. The importance of using an analytic effective charge such as the pinch scheme for unifying the electroweak and strong couplings and forces is also emphasized.

  6. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
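
    A minimal sketch of the mixture-of-Gaussians fitting step via EM, here in plain data space rather than the kernel-induced feature space used in the paper; the function name, the two-component restriction, and the toy data are all illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    # Two-component 1-D Gaussian mixture fit by EM. In the paper's setting the
    # mixture lives in a Mercer-kernel feature space; here it is data space.
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu)**2 / var) / np.sqrt(2*np.pi*var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
w, mu, var = em_gmm_1d(x)
```

    Replacing the data-space Gaussians with feature-space ones via a kernel matrix is the modification the paper describes.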

  7. Conjecture about the 2-Flavour QCD Phase Diagram

    NASA Astrophysics Data System (ADS)

    Nava Blanco, M. A.; Bietenholz, W.; Fernández Téllez, A.

    2017-10-01

    The QCD phase diagram, in particular its sector of high baryon density, is one of the most prominent outstanding mysteries within the Standard Model of particle physics. We sketch a project for arriving at a conjecture for the case of two massless quark flavours. The pattern of spontaneous chiral symmetry breaking is isomorphic to the spontaneous magnetisation in an O(4) non-linear σ-model, which can be employed as a low-energy effective theory to study the critical behaviour. We focus on the 3d O(4) model, where the configurations are divided into topological sectors, as in QCD. A topological winding with minimal Euclidean action is denoted as a skyrmion, and the topological charge corresponds to the QCD baryon number. This effective model can be simulated on a lattice with a powerful cluster algorithm, which should allow us to identify the features of the critical temperature, as we proceed from low to high baryon density. In this sense, this projected numerical study has the potential to provide us with a conjecture about the phase diagram of QCD with two massless quark flavours.

  8. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  9. A new discrete dipole kernel for quantitative susceptibility mapping.

    PubMed

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed less over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel has a straightforward implementation to existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
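
    The contrast between the two kernel formulations can be sketched as follows. This is an illustrative NumPy construction: the continuous k-space dipole kernel D = 1/3 - kz²/k², and a discrete variant built from the discrete-Laplacian spectrum, offered here as one possible discrete-operator choice rather than the paper's exact formulation.

```python
import numpy as np

def dipole_kernel(n=64, discrete=False):
    # Unit-cell k-space dipole kernel D = 1/3 - kz^2 / k^2.
    # discrete=True swaps each k_i^2 for the discrete-Laplacian spectrum
    # 2 - 2*cos(2*pi*f_i), one possible discrete-operator formulation.
    f = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(f, f, f, indexing="ij")
    if discrete:
        kx2, ky2, kz2 = (2 - 2*np.cos(2*np.pi*k) for k in (kx, ky, kz))
    else:
        kx2, ky2, kz2 = kx**2, ky**2, kz**2
    k2 = kx2 + ky2 + kz2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0/3.0 - kz2 / k2
    D[0, 0, 0] = 0.0  # convention for the undefined k = 0 term
    return D

Dc = dipole_kernel(discrete=False)
Dd = dipole_kernel(discrete=True)
```

    The two kernels agree closely at low spatial frequencies and differ near the Nyquist edge, which is where the aliasing behavior discussed in the abstract arises.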

  10. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations.

    PubMed

    Liu, Zhengbin; Garcia, Arturo; McMullen, Michael D; Flint-Garcia, Sherry A

    2016-08-09

    Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits. Copyright © 2016 Liu et al.

  11. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    PubMed Central

    Liu, Zhengbin; Garcia, Arturo; McMullen, Michael D.; Flint-Garcia, Sherry A.

    2016-01-01

    Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits. PMID:27317774

  12. Higher order corrections to mixed QCD-EW contributions to Higgs boson production in gluon fusion

    NASA Astrophysics Data System (ADS)

    Bonetti, Marco; Melnikov, Kirill; Tancredi, Lorenzo

    2018-03-01

    We present an estimate of the next-to-leading-order (NLO) QCD corrections to mixed QCD-electroweak contributions to the Higgs boson production cross section in gluon fusion, combining the recently computed three-loop virtual corrections and the approximate treatment of real emission in the soft approximation. We find that the NLO QCD corrections to the mixed QCD-electroweak contributions are nearly identical to NLO QCD corrections to QCD Higgs production. Our result confirms an earlier estimate of these O(α αs²) effects by Anastasiou et al. [J. High Energy Phys. 04 (2009) 003, 10.1088/1126-6708/2009/04/003] and provides further support for the factorization approximation of QCD and electroweak corrections.

  13. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we

  14. Observables of QCD diffraction

    NASA Astrophysics Data System (ADS)

    Mieskolainen, Mikael; Orava, Risto

    2017-03-01

    A new combinatorial vector space measurement model is introduced for soft QCD diffraction. The model-independent mathematical construction resolves experimental complications; the theoretical framework of the approach includes the Good-Walker view of diffraction, Regge phenomenology together with AGK cutting rules, and random fluctuations.

  15. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
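
    The LSCV smoothing selection recommended here can be sketched for a 1-D Gaussian kernel density estimate; this NumPy version is illustrative (not the software used in the study) and minimizes the standard least-squares cross-validation score over a bandwidth grid.

```python
import numpy as np

def lscv_score(x, h):
    # Least-squares cross-validation score for a 1-D Gaussian KDE:
    # the integral of fhat^2 minus twice the leave-one-out mean density.
    n = len(x)
    d = x[:, None] - x[None, :]
    # Closed form for the integral of fhat^2 with a Gaussian kernel.
    int_f2 = np.exp(-d**2 / (4*h**2)).sum() / (n**2 * 2*h*np.sqrt(np.pi))
    # Leave-one-out density at each sample (zero out the diagonal i = j).
    K = np.exp(-d**2 / (2*h**2)) / (h*np.sqrt(2*np.pi))
    np.fill_diagonal(K, 0.0)
    loo = K.sum(axis=1) / (n - 1)
    return int_f2 - 2.0 * loo.mean()

rng = np.random.default_rng(2)
x = rng.normal(size=300)
hs = np.linspace(0.05, 1.5, 60)
h_best = hs[np.argmin([lscv_score(x, h) for h in hs])]
```

    Real home-range estimation is bivariate, but the same score with a 2-D kernel is what "smoothing selected by LSCV" amounts to.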

  16. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  17. Renormalization scheme dependence of high-order perturbative QCD predictions

    NASA Astrophysics Data System (ADS)

    Ma, Yang; Wu, Xing-Gang

    2018-02-01

    Conventionally, one adopts the typical momentum flow of a physical observable as the renormalization scale for its perturbative QCD (pQCD) approximant. This simple treatment leads to renormalization scheme-and-scale ambiguities, because the scheme and scale dependence of the strong coupling and of the perturbative coefficients does not exactly cancel at any fixed order. It is believed that those ambiguities will be softened by including more higher-order terms. In the paper, to show how the renormalization scheme dependence changes when more loop terms have been included, we discuss the sensitivity of the pQCD prediction to the scheme parameters by using the scheme-dependent {βm ≥2}-terms. We adopt two four-loop examples, e+e-→hadrons and τ decays into hadrons, for detailed analysis. Our results show that under conventional scale setting, by including more and more loop terms, the scheme dependence of the pQCD prediction cannot be reduced as efficiently as the scale dependence. Thus a proper scale-setting approach is important for reducing the scheme dependence. We observe that the principle of minimum sensitivity could be such a scale-setting approach, providing a practical way to achieve an optimal scheme and scale by requiring the pQCD approximant to be independent of the "unphysical" theoretical conventions.
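
    The residual scheme-and-scale dependence at fixed order can be made explicit; the following is a standard illustration consistent with the abstract's notation (the symbols $r_k$ and $a_s$ are conventional choices, not taken from the paper):

```latex
% Fixed-order pQCD approximant and its residual scale dependence (illustration).
% With a_s \equiv \alpha_s(\mu)/\pi and a \mu-independent observable R:
\begin{align}
  R_n(\mu) &= \sum_{k=0}^{n} r_k(\mu)\, a_s^{k+1}(\mu),
  &
  \mu^2 \frac{\mathrm{d} a_s}{\mathrm{d}\mu^2} &= -\sum_{m\ge 0}\beta_m\, a_s^{m+2},
\end{align}
so the truncation error
\begin{equation}
  \mu^2 \frac{\mathrm{d} R_n}{\mathrm{d}\mu^2} = \mathcal{O}\!\left(a_s^{\,n+2}\right)
\end{equation}
vanishes only in the all-orders limit. The coefficients $\beta_{m\ge 2}$ are
scheme dependent, and varying them is precisely the sensitivity probed in this
analysis.
```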

  18. QCD: Quantum Chromodynamics

    ScienceCinema

    Lincoln, Don

    2018-01-16

    The strongest force in the universe is the strong nuclear force and it governs the behavior of quarks and gluons inside protons and neutrons. The name of the theory that governs this force is quantum chromodynamics, or QCD. In this video, Fermilab’s Dr. Don Lincoln explains the intricacies of this dominant component of the Standard Model.

  19. Hadron interactions and exotic hadrons from lattice QCD

    NASA Astrophysics Data System (ADS)

    Ikeda, Yoichi

    2014-09-01

    One of the interesting subjects in hadron physics is the search for multiquark configurations. One candidate is the H-dibaryon (udsuds), and the possibility of a bound H-dibaryon has recently been studied in lattice QCD. We also extend the HAL QCD method, which defines potentials between baryons on the lattice, to meson-meson systems including charm quarks, in order to search for the bound tetraquarks Tcc (ud c c) and Tcs (ud c s). In the presentation, after reviewing the HAL QCD method, we report the results on the H-dibaryon and the tetraquarks Tcc (ud c c) and Tcs (ud c s), where we have employed the relativistic heavy quark action to treat the charm quark dynamics, with pion masses mπ = 410, 570, 700 MeV.

  20. Hyperspectral Image Classification via Kernel Sparse Representation

    DTIC Science & Technology

    2013-01-01

    This paper presents a kernel sparse representation approach for hyperspectral image classification. The spatial coherency across neighboring pixels is incorporated through a kernelized joint sparsity model, in which all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training samples. Keywords: hyperspectral imagery, joint sparsity model, kernel methods, sparse representation.

  1. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  2. Progress in vacuum susceptibilities and their applications to the chiral phase transition of QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Zhu-Fang, E-mail: phycui@nju.edu.cn; State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, CAS, Beijing, 100190; Hou, Feng-Yao

    2015-07-15

    The QCD vacuum condensates and various vacuum susceptibilities are all important parameters which characterize the nonperturbative properties of the QCD vacuum. In the external field formalism of QCD sum rules, various vacuum susceptibilities play important roles in determining the properties of hadrons. In this paper, we review recent progress in studies of vacuum susceptibilities together with their applications to the chiral phase transition of QCD. The results for the tensor, vector, axial-vector, scalar, and pseudoscalar vacuum susceptibilities are shown in detail in the framework of Dyson-Schwinger equations.

  3. Using the Intel Math Kernel Library on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. MKL provides highly optimized core math functions for Intel architectures, including BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  4. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    PubMed

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
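
    The neighbourhood computation described above relies on distances in the feature space induced by the kernel, which follow from expanding ||φ(x) − φ(y)||² with the kernel trick: k(x,x) + k(y,y) − 2k(x,y). A minimal sketch, assuming an RBF kernel; the helper names are illustrative, not the paper's code:

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def feature_space_dist(x, y, k=rbf):
    """Distance induced by kernel k: ||phi(x)-phi(y)||^2 = k(x,x)+k(y,y)-2k(x,y)."""
    d2 = k(x, x) + k(y, y) - 2.0 * k(x, y)
    return math.sqrt(max(d2, 0.0))  # clamp tiny negatives from round-off

def kernel_neighbours(u, pool, n_neighbours=2, k=rbf):
    """Indices of the n nearest points to u, measured in the kernel feature space."""
    order = sorted(range(len(pool)), key=lambda i: feature_space_dist(u, pool[i], k))
    return order[:n_neighbours]
```

    Computing neighbourhoods this way, rather than in the raw input space, is precisely the advantage the experiments above attribute to the proposed method.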

  5. Sivers and Boer-Mulders observables from lattice QCD

    NASA Astrophysics Data System (ADS)

    Musch, B. U.; Hägler, Ph.; Engelhardt, M.; Negele, J. W.; Schäfer, A.

    2012-05-01

    We present a first calculation of transverse momentum-dependent nucleon observables in dynamical lattice QCD employing nonlocal operators with staple-shaped, “process-dependent” Wilson lines. The use of staple-shaped Wilson lines allows us to link lattice simulations to TMD effects determined from experiment and, in particular, to access nonuniversal, naively time-reversal odd TMD observables. We present and discuss results for the generalized Sivers and Boer-Mulders transverse momentum shifts for the SIDIS and DY cases. The effect of staple-shaped Wilson lines on T-even observables is studied for the generalized tensor charge and a generalized transverse shift related to the worm-gear function g1T. We emphasize the dependence of these observables on the staple extent and the Collins-Soper evolution parameter. Our numerical calculations use an nf=2+1 mixed action scheme with domain wall valence fermions on an Asqtad sea, at pion masses of 369 MeV and 518 MeV.

  6. QCD unitarity constraints on Reggeon Field Theory

    NASA Astrophysics Data System (ADS)

    Kovner, Alex; Levin, Eugene; Lublinsky, Michael

    2016-08-01

    We point out that the s-channel unitarity of QCD imposes meaningful constraints on a possible form of the QCD Reggeon Field Theory. We show that neither the BFKL nor JIMWLK nor Braun's Hamiltonian satisfies the said constraints. In a toy model with zero transverse dimensions, we construct a model that satisfies the analogous constraint and show that at infinite energy it indeed tends to a "black disk limit", as opposed to the model with only a triple Pomeron vertex, routinely used as a toy model in the literature.

  7. Hybrid baryons in QCD

    DOE PAGES

    Dudek, Jozef J.; Edwards, Robert G.

    2012-03-21

    In this study, we present the first comprehensive study of hybrid baryons using lattice QCD methods. Using a large basis of composite QCD interpolating fields we extract an extensive spectrum of baryon states and isolate those of hybrid character using their relatively large overlap onto operators which sample gluonic excitations. We consider the spectrum of Nucleon and Delta states at several quark masses, finding a set of positive parity hybrid baryons with quantum numbers $N_{1/2^+}, N_{1/2^+}, N_{3/2^+}, N_{3/2^+}, N_{5/2^+}$, and $\Delta_{1/2^+}, \Delta_{3/2^+}$ at an energy scale above the first band of 'conventional' excited positive parity baryons. This pattern of states is compatible with a color octet gluonic excitation having $J^{P}=1^{+}$, as previously reported in the hybrid meson sector, and with a comparable energy scale for the excitation, suggesting a common bound-state construction for hybrid mesons and baryons.

  8. High speed sorting of Fusarium-damaged wheat kernels

    USDA-ARS?s Scientific Manuscript database

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, there is not yet available a cost effective method to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  9. CW-SSIM kernel based random forest for image classification

    NASA Astrophysics Data System (ADS)

    Fan, Guangzhe; Wang, Zhou; Wang, Jiheng

    2010-07-01

    Complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and inner product kernels. Empirical evidence shows that the proposed method is superior in its classification power. We also compared our approach with the direct random forest method without a kernel and with the popular kernel-learning method, the support vector machine. Our test results based on both simulated and real-world data suggest that the proposed approach outperforms traditional methods while requiring no feature selection procedure.
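
    One generic way to feed an image-similarity measure into an off-the-shelf learner is an empirical kernel map: represent each sample by its vector of similarities to a set of prototypes. The sketch below uses plain normalized correlation as a stand-in similarity, since CW-SSIM itself requires a complex wavelet transform; both function names and the stand-in are assumptions, not the authors' code.

```python
import math

def ncc(x, y):
    """Stand-in similarity (plain normalized correlation). The paper uses
    CW-SSIM, which additionally tolerates small translations, scalings,
    and rotations of the underlying images."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y)) or 1.0
    return num / den

def empirical_kernel_map(samples, prototypes, kernel=ncc):
    """Represent each sample by its kernel similarities to the prototypes;
    any learner (e.g. a random forest) can then train on these features,
    with no separate feature-extraction stage."""
    return [[kernel(s, p) for p in prototypes] for s in samples]
```

    The resulting similarity features replace hand-crafted descriptors, which is the "no feature extraction stage" property claimed above.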

  10. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    PubMed Central

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (accessed 2012 Jun 25). PMID:22936970
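
    The core MKL object is a nonnegative linear combination of precomputed Gram matrices, with the weights constrained in norm; p = 1 gives the sparsity-promoting classical variant, while p > 1 gives the non-sparse variants discussed above. A minimal sketch under those assumptions (names illustrative):

```python
def combine_kernels(kernels, weights):
    """Linear combination K = sum_m beta_m * K_m of precomputed Gram matrices."""
    n = len(kernels[0])
    return [[sum(b * K[i][j] for b, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

def lp_normalize(weights, p=2.0):
    """Scale raw nonnegative weights onto the unit l_p sphere, as in
    non-sparse (p > 1) MKL; p = 1 recovers the classical sparse mixtures."""
    norm = sum(w ** p for w in weights) ** (1.0 / p)
    return [w / norm for w in weights]
```

    With uniform weights this reduces to the unweighted sum kernel, the baseline that sparse MKL is often observed to lose against.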

  11. Flux tubes in the QCD vacuum

    NASA Astrophysics Data System (ADS)

    Cea, Paolo; Cosmai, Leonardo; Cuteri, Francesca; Papa, Alessandro

    2017-06-01

    The hypothesis that the QCD vacuum can be modeled as a dual superconductor is a powerful tool for describing the distribution of the color field generated by a static quark-antiquark pair and, as such, can provide useful clues for the understanding of confinement. In this work we investigate, by lattice Monte Carlo simulations of the SU(3) pure gauge theory and of (2+1)-flavor QCD with physical mass settings, some properties of the chromoelectric flux tube at zero temperature and their dependence on the physical distance between the static sources. We draw some conclusions about the validity domain of the dual superconductor picture.

  12. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
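
    A one-dimensional plug-in sketch of both estimators, assuming a Gaussian kernel with fixed bandwidth (the resubstitution form of the Shannon estimate and all names are illustrative choices, not the paper's exact estimators). The quadratic-entropy integral has a closed form because the product of two Gaussians integrates to a Gaussian.

```python
import math

def gauss(u, h):
    """Gaussian kernel N(u; 0, h^2)."""
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))

def kde(x, data, h):
    """Gaussian kernel density estimate at point x."""
    return sum(gauss(x - xi, h) for xi in data) / len(data)

def shannon_entropy(data, h):
    """Resubstitution (plug-in) Shannon entropy: -mean log f_hat(x_i)."""
    return -sum(math.log(kde(xi, data, h)) for xi in data) / len(data)

def quadratic_entropy(data, h):
    """Renyi quadratic entropy -log integral(f_hat^2). The integral has the
    closed form (1/n^2) sum_ij N(x_i - x_j; 0, 2h^2), which is directly
    related to the Friedman-Tukey index mentioned above."""
    n = len(data)
    s = sum(gauss(xi - xj, h * math.sqrt(2)) for xi in data for xj in data)
    return -math.log(s / (n * n))
```

    More dispersed (less Gaussian-peaked) data yields a larger entropy under both estimators, which is the property exploited for detecting irregular rhythms such as AF.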

  13. Event-by-event picture for the medium-induced jet evolution

    NASA Astrophysics Data System (ADS)

    Escobedo, Miguel A.; Iancu, Edmond

    2017-08-01

    We discuss the evolution of an energetic jet which propagates through a dense quark-gluon plasma and radiates gluons due to its interactions with the medium. Within perturbative QCD, this evolution can be described as a stochastic branching process, that we have managed to solve exactly. We present exact, analytic, results for the gluon spectrum (the average gluon distribution) and for the higher n-point functions, which describe correlations and fluctuations. Using these results, we construct the event-by-event picture of the gluon distribution produced via medium-induced gluon branching. In contrast to what happens in a usual QCD cascade in vacuum, the medium-induced branchings are quasi-democratic, with offspring gluons carrying sizable fractions of the energy of their parent parton. We find large fluctuations in the energy loss and in the multiplicity of soft gluons. The multiplicity distribution is predicted to exhibit KNO (Koba-Nielsen-Olesen) scaling. These predictions can be tested in Pb+Pb collisions at the LHC, via event-by-event measurements of the di-jet asymmetry. Based on [1, 2].

  15. The Light-Front Schrödinger Equation and Determination of the Perturbative QCD Scale from Color Confinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.; de Teramond, Guy F.; Deur, Alexandre P.

    2015-09-01

    The valence Fock-state wavefunctions of the light-front QCD Hamiltonian satisfy a relativistic equation of motion with an effective confining potential U which systematically incorporates the effects of higher quark and gluon Fock states. If one requires that the effective action which underlies the QCD Lagrangian remain conformally invariant, and extends the formalism of de Alfaro, Fubini and Furlan to light-front Hamiltonian theory, the potential U has the unique form of a harmonic oscillator potential, and a mass gap arises. The result is a nonperturbative relativistic light-front quantum mechanical wave equation which incorporates color confinement and other essential spectroscopic and dynamical features of hadron physics, including a massless pion for zero quark mass and linear Regge trajectories with the same slope in the radial quantum number n and orbital angular momentum L. Only one mass parameter κ appears. Light-front holography thus provides a precise relation between the bound-state amplitudes in the fifth dimension of AdS space and the boost-invariant light-front wavefunctions describing the internal structure of hadrons in physical space-time. We also show how the mass scale κ underlying confinement and hadron masses determines the scale $\Lambda_{\overline{\mathrm{MS}}}$ controlling the evolution of the perturbative QCD coupling. The relation between scales is obtained by matching the nonperturbative dynamics, as described by an effective conformal theory mapped to the light front and its embedding in AdS space, to the perturbative QCD regime computed to four-loop order. The result is an effective coupling defined at all momenta. The predicted value $\Lambda_{\overline{\mathrm{MS}}} = 0.328 \pm 0.034$ GeV is in agreement with the world average $0.339 \pm 0.010$ GeV. The analysis applies to any renormalization scheme.

  16. Visualization Tools for Lattice QCD - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massimo Di Pierro

    2012-03-15

    Our research project is about the development of visualization tools for Lattice QCD. We developed various tools by extending existing libraries, adding new algorithms, exposing new APIs, and creating web interfaces (including the new NERSC gauge connection web site). Our tools cover the full stack of operations from automating download of data, to generating VTK files (topological charge, plaquette, Polyakov lines, quark and meson propagators, currents), to turning the VTK files into images, movies, and web pages. Some of the tools have their own web interfaces. Some Lattice QCD visualizations have been created in the past but, to our knowledge, our tools are the only ones of their kind, since they are general purpose, customizable, and relatively easy to use. We believe they will be valuable to physicists working in the field. They can be used to better teach Lattice QCD concepts to new graduate students; to observe the changes in topological charge density and detect possible sources of bias in computations; to observe the convergence of the algorithms at a local level and determine possible problems; to probe heavy-light mesons with currents and determine their spatial distribution; and to detect corrupted gauge configurations. There are some indirect results of this grant that will benefit a broader audience than Lattice QCD physicists.

  17. Lattice QCD and nucleon resonances

    NASA Astrophysics Data System (ADS)

    Edwards, R. G.; Fiebig, H. R.; Fleming, G.; Richards, D. G.; LHP Collaboration

    2004-06-01

    Lattice calculations provide an ab initio means for the study of QCD. Recent progress at understanding the spectrum and structure of nucleons from lattice QCD studies is reviewed. Measurements of the masses of the lightest particles for the lowest spin values are described and related to predictions of the quark model. Measurements of the mass of the first radial excitation of the nucleon, the so-called Roper resonance, obtained using Bayesian statistical analyses, are detailed. The need to perform calculations at realistically light values of the pion mass is emphasised, and the exciting progress at attaining such masses is outlined. The talk concludes with future prospects, emphasising the importance of constructing a basis of interpolating operators that is sensitive to three-quark states, to multi-quark states, and to excited glue.

  18. Symmetry Transition Preserving Chirality in QCD: A Versatile Random Matrix Model

    NASA Astrophysics Data System (ADS)

    Kanazawa, Takuya; Kieburg, Mario

    2018-06-01

    We consider a random matrix model which interpolates between the chiral Gaussian unitary ensemble and the Gaussian unitary ensemble while preserving chiral symmetry. This ensemble describes flavor symmetry breaking for staggered fermions in 3D QCD as well as in 4D QCD at high temperature or in 3D QCD at a finite isospin chemical potential. Our model is an Osborn-type two-matrix model which is equivalent to the elliptic ensemble but we consider the singular value statistics rather than the complex eigenvalue statistics. We report on exact results for the partition function and the microscopic level density of the Dirac operator in the ɛ regime of QCD. We compare these analytical results with Monte Carlo simulations of the matrix model.

  19. Spin dynamics of qqq wave function on light front in high momentum limit of QCD: Role of qqq force

    NASA Astrophysics Data System (ADS)

    Mitra, A. N.

    2008-04-01

    The contribution of a spin-rich qqq force (in conjunction with pairwise qq forces) to the analytical structure of the qqq wave function is worked out in the high momentum regime of QCD, where the confining interaction may be ignored so that the dominant effect is Coulombic. A distinctive feature of this study is that the spin-rich qqq force is generated by a ggg vertex (a genuine part of the QCD Lagrangian) wherein the 3 radiating gluon lines end on as many quark lines, giving rise to a (Mercedes-Benz type) Y-shaped diagram. The dynamics is that of a Salpeter-like equation (3D support for the kernel) formulated covariantly on the light front, à la the Markov-Yukawa Transversality Principle (MYTP), which warrants a two-way interconnection between the 3D and 4D Bethe-Salpeter (BSE) forms for systems of 2 as well as 3 quarks. With these ingredients, the differential equation for the 3D wave function φ receives well-defined contributions from the qq and qqq forces. In particular, a negative eigenvalue of the spin operator iσ1 · σ2 × σ3, which is an integral part of the qqq force, causes a characteristic singularity in the differential equation, signalling the dynamical effect of a spin-rich qqq force not yet considered in the literature. The potentially crucial role of this interesting effect vis-à-vis the so-called 'spin anomaly' of the proton is a subject of considerable physical interest.

  20. A new kernel-based fuzzy level set method for automated segmentation of medical images in the presence of intensity inhomogeneity.

    PubMed

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied integrative approaches to automate medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area which has received less attention from this approach; it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm using an integrative approach to deal with this problem. It can directly evolve from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of the level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of the image. Such improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.
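
    The kernelized membership update at the heart of Gaussian-kernel fuzzy c-means can be sketched in one dimension using the identity that an RBF kernel induces the feature-space distance 2(1 − K(x, v)). This toy update and its parameter names are illustrative assumptions, not the authors' implementation:

```python
import math

def kfcm_memberships(data, centers, m=2.0, sigma=1.0):
    """One membership update of Gaussian-kernel fuzzy c-means (GKFCM-style).
    In feature space ||phi(x)-phi(v)||^2 = 2*(1 - K(x, v)) for an RBF kernel,
    so u_ik is proportional to (1 - K(x_k, v_i))^(-1/(m-1)), normalized
    over the cluster centers."""
    K = lambda x, v: math.exp(-((x - v) ** 2) / (2 * sigma ** 2))
    u = []
    for x in data:
        raw = [(1.0 - K(x, v) + 1e-12) ** (-1.0 / (m - 1.0)) for v in centers]
        s = sum(raw)
        u.append([r / s for r in raw])
    return u
```

    The resulting fuzzy memberships are what the proposed method thresholds to initialize the level set and to estimate its controlling parameters.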

  1. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    PubMed

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.
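
    At zero temperature, the quantity the τ-dual kernel is designed to integrate to can be evaluated with the standard parabolic interpolation recipe: the dual descriptor is the finite difference ρ(N+1) − 2ρ(N) + ρ(N−1) of electron densities, evaluated pointwise. The array-based sketch below is purely illustrative of that standard zero-temperature formula, not of the τ-kernels themselves:

```python
def dual_descriptor(rho_minus, rho_zero, rho_plus):
    """Dual descriptor from the parabolic interpolation model:
    f2(r) ~ rho_{N+1}(r) - 2*rho_N(r) + rho_{N-1}(r), pointwise on a grid.
    Positive values flag electrophilic (electron-accepting) sites,
    negative values nucleophilic (electron-donating) ones."""
    return [p - 2.0 * z + m for m, z, p in zip(rho_minus, rho_zero, rho_plus)]
```

    Condensing such grid values over the region between two atoms gives the kind of bond-resolved ambiphilicity index that the τ-dual kernel generalizes to finite temperature.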

  2. Quasi-Dual-Packed-Kerneled Au49 (2,4-DMBT)27 Nanoclusters and the Influence of Kernel Packing on the Electrochemical Gap.

    PubMed

    Liao, Lingwen; Zhuang, Shengli; Wang, Pu; Xu, Yanan; Yan, Nan; Dong, Hongwei; Wang, Chengming; Zhao, Yan; Xia, Nan; Li, Jin; Deng, Haiteng; Pei, Yong; Tian, Shi-Kai; Wu, Zhikun

    2017-10-02

    Although face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), and other structured gold nanoclusters have been reported, it was unclear whether gold nanoclusters with mix-packed (fcc and non-fcc) kernels exist, and the correlation between kernel packing and the properties of gold nanoclusters is unknown. A Au49(2,4-DMBT)27 nanocluster with a shell electron count of 22 has now been synthesized and structurally resolved by single-crystal X-ray crystallography, which revealed that Au49(2,4-DMBT)27 contains a unique Au34 kernel consisting of one quasi-fcc-structured Au21 unit and one non-fcc-structured Au13 unit (where 2,4-DMBTH = 2,4-dimethylbenzenethiol). Further experiments revealed that the kernel packing greatly influences the electrochemical gap (EG) and that the fcc structure has a larger EG than the investigated non-fcc structure. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. TMD splitting functions in $k_T$ factorization: the real contribution to the gluon-to-gluon splitting.

    PubMed

    Hentschinski, M; Kusina, A; Kutak, K; Serino, M

    2018-01-01

    We calculate the transverse momentum dependent gluon-to-gluon splitting function within $k_T$-factorization, generalizing the framework employed in the calculation of the quark splitting functions in Hautmann et al. (Nucl Phys B 865:54-66, arXiv:1205.1759, 2012), Gituliar et al. (JHEP 01:181, arXiv:1511.08439, 2016), and Hentschinski et al. (Phys Rev D 94(11):114013, arXiv:1607.01507, 2016), and demonstrate at the same time the consistency of the extended formalism with previous results. While existing versions of $k_T$-factorized evolution equations already contain a gluon-to-gluon splitting function, i.e. the leading order Balitsky-Fadin-Kuraev-Lipatov (BFKL) kernel or the Ciafaloni-Catani-Fiorani-Marchesini (CCFM) kernel, the obtained splitting function has the important property that it reduces to the leading order BFKL kernel in the high energy limit, to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) gluon-to-gluon splitting function in the collinear limit, and to the CCFM kernel in the soft limit. At the same time we demonstrate that this splitting kernel can be obtained from a direct calculation of the QCD Feynman diagrams, based on a combined implementation of the Curci-Furmanski-Petronzio formalism for the calculation of the collinear splitting functions and the framework of high energy factorization.

  4. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
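
    The model itself is easy to state even though the paper's contribution is a subquadratic sampler: each vertex gets a position, and each edge appears independently with a probability set by a kernel of the two positions, scaled by 1/n. Below is a naive quadratic reference sampler for that distribution (the unit-interval vertex positions and function names are illustrative assumptions, not the paper's fast algorithm):

```python
import random

def random_kernel_graph(n, kappa, seed=None):
    """Naive O(n^2) sampler: vertex i sits at x_i = (i+1)/n in (0, 1], and the
    edge {i, j} appears independently with probability
    min(1, kappa(x_i, x_j) / n). The paper's algorithm draws from the same
    distribution in roughly O(n (log n)^2) time by skipping over non-edges."""
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, kappa((i + 1) / n, (j + 1) / n) / n)
            if rng.random() < p:
                edges.append((i, j))
    return edges
```

    The 1/n scaling is what keeps the expected degree bounded, hence the sparsity that makes a sub-quadratic generation algorithm worthwhile.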

  5. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining Support Vector Machines (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in long run times and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and with an SVM classifier using a Radial Basis Function kernel, the proposed wavelet SVM classifier using a wavelet kernel function in a Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
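
    A commonly cited translation-invariant wavelet kernel (with mother wavelet h(u) = cos(1.75u) exp(−u²/2)) is built as a product over band dimensions; whether this matches the Coiflet-based kernel the authors found best is not specified here, so treat it as an illustrative stand-in:

```python
import math

def wavelet_kernel(x, y, a=1.0):
    """Translation-invariant wavelet kernel with mother wavelet
    h(u) = cos(1.75*u) * exp(-u^2 / 2), taken as a product over the
    spectral-band dimensions; a is the dilation parameter."""
    k = 1.0
    for xi, yi in zip(x, y):
        u = (xi - yi) / a
        k *= math.cos(1.75 * u) * math.exp(-u * u / 2.0)
    return k

def gram_matrix(samples, kernel=wavelet_kernel):
    """Precomputed Gram matrix, usable with any kernel machine (e.g. an SVM
    solver that accepts a precomputed kernel)."""
    return [[kernel(s, t) for t in samples] for s in samples]
```

    Unlike a pure RBF, the cosine factor lets the kernel oscillate with the spectral difference, which is the extra approximation power attributed to wavelet kernels above.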

  6. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). A DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems. PMID:29099838
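
    Tree kernels of this family score two parse trees by counting their shared fragments. A minimal Collins-Duffy-style sketch on tuple-encoded trees follows; the encoding, the damping parameter, and the relaxed production-matching rule are simplifying assumptions, not the DSTK itself (which additionally smooths matches with distributional semantic vectors):

```python
def tree_kernel(t1, t2, lam=0.5):
    """Convolution kernel on trees encoded as ('label', child, child, ...):
    sums lam-damped counts of common fragments over all node pairs."""
    def common(a, b):
        # Damped count of common fragments rooted at nodes a and b.
        if a[0] != b[0] or len(a) != len(b):
            return 0.0
        if len(a) == 1:            # both are leaves with the same label
            return lam
        prod = 1.0
        for ca, cb in zip(a[1:], b[1:]):
            prod *= 1.0 + common(ca, cb)
        return lam * prod

    def nodes(t):
        yield t
        for child in t[1:]:
            yield from nodes(child)

    return sum(common(a, b) for a in nodes(t1) for b in nodes(t2))
```

    Swapping one word ('protein' vs 'gene') removes only the fragments containing that leaf, so the kernel degrades gracefully rather than dropping to zero, and an SVM can consume such scores directly as a precomputed Gram matrix.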

  8. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  9. QCD for Postgraduates (4/5)

    ScienceCinema

    Zanderighi, Giulia

    2018-05-23

    Modern QCD - Lecture 4. We will consider some processes of interest at the LHC and will discuss the main elements of their cross-section calculations. We will also summarize the current status of higher order calculations.

  10. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  11. Quark–hadron phase structure, thermodynamics, and magnetization of QCD matter

    NASA Astrophysics Data System (ADS)

    Nasser Tawfik, Abdel; Magied Diab, Abdel; Hussein, M. T.

    2018-05-01

    The SU(3) Polyakov linear-sigma model (PLSM) is systematically implemented to characterize the quark-hadron phase structure and to determine various thermodynamic quantities and the magnetization of quantum chromodynamic (QCD) matter. Using the mean-field approximation, the dependence of the chiral order parameter on a finite magnetic field is also calculated. Over a wide range of temperatures and magnetic field strengths, various thermodynamic quantities including the trace anomaly, speed of sound squared, entropy density, and specific heat are presented, and some magnetic properties are described as well. Where available, these results are compared with recent lattice QCD calculations. The temperature dependence of these quantities confirms our previous finding that the transition temperature is reduced with increasing magnetic field strength, i.e. QCD matter is characterized by an inverse magnetic catalysis. Furthermore, the temperature dependence of the magnetization, showing that QCD matter has paramagnetic properties slightly below and far above the pseudo-critical temperature, is confirmed as well. The excellent agreement with recent lattice calculations suggests that our QCD-like approach (PLSM) possesses the correct degrees of freedom in both the hadronic and partonic phases and describes well the dynamics driving confined hadrons to the deconfined quark-gluon plasma.

  12. NLO evolution of color dipoles in N=4 SYM

    DOE PAGES

    Chirilli, Giovanni A.; Balitsky, Ian

    2009-07-04

    Here, high-energy behavior of amplitudes in a gauge theory can be reformulated in terms of the evolution of Wilson-line operators. In the leading logarithmic approximation it is given by the conformally invariant BK equation for the evolution of color dipoles. In QCD, the next-to-leading order BK equation has both conformal and non-conformal parts, the latter providing the running of the coupling constant. To separate the conformally invariant effects from the running-coupling effects, we calculate the NLO evolution of the color dipoles in the conformal N = 4 SYM theory. We define the "composite dipole operator" with the rapidity cutoff preserving conformal invariance.

  13. Noise kernels of stochastic gravity in conformally-flat spacetimes

    NASA Astrophysics Data System (ADS)

    Cho, H. T.; Hu, B. L.

    2015-03-01

    The central object in the theory of semiclassical stochastic gravity is the noise kernel, which is the symmetric two point correlation function of the stress-energy tensor. Using the corresponding Wightman functions in Minkowski, Einstein and open Einstein spaces, we construct the noise kernels of a conformally coupled scalar field in these spacetimes. From them we show that the noise kernels in conformally-flat spacetimes, including the Friedmann-Robertson-Walker universes, can be obtained in closed analytic forms by using a combination of conformal and coordinate transformations.

  14. Travel-time sensitivity kernels in long-range propagation.

    PubMed

    Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A

    2009-11-01

    Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.
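
    The horizontal-marginal construction of VTSKs mentioned above can be sketched numerically: integrate a 3-D kernel over both horizontal coordinates to leave a one-dimensional depth kernel. The Gaussian "TSK" below is a synthetic stand-in, not an acoustic computation.

    ```python
    import numpy as np

    x = np.linspace(-5, 5, 41)       # cross-range (km, illustrative)
    y = np.linspace(-5, 5, 41)       # range
    z = np.linspace(0, 10, 21)       # depth
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    tsk = np.exp(-(X**2 + Y**2) / 4 - (Z - 5) ** 2)   # toy 3-D sensitivity kernel

    dx = x[1] - x[0]
    dy = y[1] - y[0]
    vtsk = tsk.sum(axis=(0, 1)) * dx * dy             # integrate out horizontal coords
    print(vtsk.shape)                                 # one sensitivity value per depth
    ```

    For a range-independent sound-speed perturbation dc(z), the predicted travel-time change is then the depth integral of vtsk(z) * dc(z).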

  15. Validation of Born Traveltime Kernels

    NASA Astrophysics Data System (ADS)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.

  16. End-use quality of soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat is known for its very hard texture, which influences how it is milled and for what products it is well suited. We developed soft kernel durum wheat lines via Ph1b-mediated homoeologous recombination with Dr. Leonard Joppa...

  17. The cosmic QCD phase transition with dense matter and its gravitational waves from holography

    NASA Astrophysics Data System (ADS)

    Ahmadvand, M.; Bitaghsir Fadafan, K.

    2018-04-01

    Consistent with cosmological constraints, there are scenarios with a large lepton asymmetry that can lead to a finite baryochemical potential at the cosmic QCD phase transition scale. In this paper, we investigate this possibility in holographic models. Using the holographic renormalization method, we find a first-order Hawking-Page phase transition, between the Reissner-Nordström AdS black hole and thermal charged AdS space, corresponding to the de/confinement phase transition. We obtain the gravitational wave spectra generated during the evolution of bubbles for a range of bubble wall velocities and examine the testability of the scenarios and consequent calculations by gravitational wave experiments.

  18. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
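
    A minimal sketch of the kernelized-dictionary idea: treat the Gram columns k(., x_i) as features and put an elastic net penalty on their coefficients, so the kernel need not be Mercer. The data, RBF dictionary, and hyperparameters below are illustrative assumptions, not the KENReg formulation itself.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(60, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=60)

    gamma = 2.0
    # kernelized dictionary: column i is k(., x_i); need not be positive definite
    K = np.exp(-gamma * (X[:, None, 0] - X[None, :, 0]) ** 2)

    model = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(K, y)
    sparsity = float(np.mean(model.coef_ == 0))
    print(sparsity)    # the l1 part zeroes out many dictionary atoms
    ```

    The l1_ratio knob trades the sparseness discussed above against the stability coming from the ridge part of the penalty.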

  19. QCD-Electroweak First-Order Phase Transition in a Supercooled Universe.

    PubMed

    Iso, Satoshi; Serpico, Pasquale D; Shimada, Kengo

    2017-10-06

    If the electroweak sector of the standard model is described by classically conformal dynamics, the early Universe evolution can be substantially altered. It is already known that-contrarily to the standard model case-a first-order electroweak phase transition may occur. Here we show that, depending on the model parameters, a dramatically different scenario may happen: A first-order, six massless quark QCD phase transition occurs first, which then triggers the electroweak symmetry breaking. We derive the necessary conditions for this dynamics to occur, using the specific example of the classically conformal B-L model. In particular, relatively light weakly coupled particles are predicted, with implications for collider searches. This scenario is also potentially rich in cosmological consequences, such as renewed possibilities for electroweak baryogenesis, altered dark matter production, and gravitational wave production, as we briefly comment upon.

  20. QCD-Electroweak First-Order Phase Transition in a Supercooled Universe

    NASA Astrophysics Data System (ADS)

    Iso, Satoshi; Serpico, Pasquale D.; Shimada, Kengo

    2017-10-01

    If the electroweak sector of the standard model is described by classically conformal dynamics, the early Universe evolution can be substantially altered. It is already known that—contrarily to the standard model case—a first-order electroweak phase transition may occur. Here we show that, depending on the model parameters, a dramatically different scenario may happen: A first-order, six massless quark QCD phase transition occurs first, which then triggers the electroweak symmetry breaking. We derive the necessary conditions for this dynamics to occur, using the specific example of the classically conformal B -L model. In particular, relatively light weakly coupled particles are predicted, with implications for collider searches. This scenario is also potentially rich in cosmological consequences, such as renewed possibilities for electroweak baryogenesis, altered dark matter production, and gravitational wave production, as we briefly comment upon.

  1. Electroweak Higgs production with HiggsPO at NLO QCD

    NASA Astrophysics Data System (ADS)

    Greljo, Admir; Isidori, Gino; Lindert, Jonas M.; Marzocca, David; Zhang, Hantian

    2017-12-01

    We present the HiggsPO UFO model for Monte Carlo event generation of electroweak VH and VBF Higgs production processes at NLO in QCD in the formalism of Higgs pseudo-observables (PO). We illustrate the use of this tool by studying the QCD corrections, matched to a parton shower, for several benchmark points in the Higgs PO parameter space. We find that, while being sizable and thus important to consider in realistic experimental analyses, the QCD higher-order corrections largely factorize. As an additional finding, based on the NLO results, we advocate considering 2D distributions of the two-jet azimuthal-angle difference and the leading jet p_T for new physics searches in VBF Higgs production. The HiggsPO UFO model is publicly available.

  2. Prioritizing individual genetic variants after kernel machine testing using variable selection.

    PubMed

    He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C

    2016-12-01

    Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level, and does not directly indicate which SNP(s) in an identified set actually drive the association. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs, and fills the gap between SNP set analysis and biological functional studies. Both simulation studies and real data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.
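
    The two kernels named above can be computed directly from a genotype matrix coded 0/1/2 (minor-allele counts). The formulas follow the standard SKAT conventions; the genotypes here are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    G = rng.integers(0, 3, size=(10, 6)).astype(float)   # 10 subjects, 6 SNPs

    K_lin = G @ G.T                                      # linear kernel
    diff = np.abs(G[:, None, :] - G[None, :, :])         # pairwise allele mismatches
    K_ibs = (2 - diff).sum(axis=2) / (2 * G.shape[1])    # identity-by-state kernel

    print(np.diag(K_ibs))                                # self-similarity is exactly 1
    ```

    K_ibs lies in [0, 1] by construction, which makes it easy to interpret as a genetic similarity between subjects.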

  3. Multitasking kernel for the C and Fortran programming languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, E.D. III

    1984-09-01

    A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.

  4. Deep kernel learning method for SAR image target recognition

    NASA Astrophysics Data System (ADS)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  5. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H; Williams, Samuel; Datta, Kaushik

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
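
    The search-based tuning strategy can be caricatured in a few lines: benchmark every candidate variant of a kernel on the target machine and keep the fastest. The variants below are toy reductions, not the SpMV or stencil code generators of the paper.

    ```python
    import time
    import numpy as np

    A = np.random.default_rng(3).normal(size=(400, 400))

    def variant_rows(A):    return A.sum(axis=1).sum()     # row-wise reduction
    def variant_flat(A):    return A.ravel().sum()         # flat reduction
    def variant_python(A):  return sum(map(float, A.ravel()))  # interpreted loop

    def autotune(variants, A, reps=3):
        """Time each variant and return (name of fastest, all timings)."""
        timings = {}
        for f in variants:
            t0 = time.perf_counter()
            for _ in range(reps):
                f(A)
            timings[f.__name__] = time.perf_counter() - t0
        return min(timings, key=timings.get), timings

    best, timings = autotune([variant_rows, variant_flat, variant_python], A)
    print(best)
    ```

    A real auto-tuner searches a much larger space (blockings, prefetch distances, SIMD variants) and caches the winner per platform, but the select-by-measurement loop is the same.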

  6. Soft evolution of multi-jet final states

    DOE PAGES

    Gerwick, Erik; Schumann, Steffen; Höche, Stefan; ...

    2015-02-16

    We present a new framework for computing resummed and matched distributions in processes with many hard QCD jets. The intricate color structure of soft gluon emission at large angles renders resummed calculations highly non-trivial in this case. We automate all ingredients necessary for the color evolution of the soft function at next-to-leading-logarithmic accuracy, namely the selection of the color bases and the projections of color operators and Born amplitudes onto those bases. Explicit results for all QCD processes with up to 2 → 5 partons are given. We also devise a new tree-level matching scheme for resummed calculations which exploits a quasi-local subtraction based on the Catani-Seymour dipole formalism. We implement both resummation and matching in the Sherpa event generator. As a proof of concept, we compute the resummed and matched transverse-thrust distribution for hadronic collisions.

  7. Strangeness S =-1 hyperon-nucleon interactions: Chiral effective field theory versus lattice QCD

    NASA Astrophysics Data System (ADS)

    Song, Jing; Li, Kai-Wen; Geng, Li-Sheng

    2018-06-01

    Hyperon-nucleon interactions serve as basic inputs to studies of hypernuclear physics and dense (neutron) stars. Unfortunately, a precise understanding of these important quantities has lagged far behind that of the nucleon-nucleon interaction due to a lack of high-precision experimental data. Historically, hyperon-nucleon interactions have been formulated either in quark models or in meson exchange models. In recent years, lattice QCD simulations and chiral effective field theory approaches have started to offer new insights from first principles. In the present work, we contrast the state-of-the-art lattice QCD simulations with the latest chiral hyperon-nucleon forces and show that the leading order relativistic chiral results can already describe the lattice QCD data reasonably well. Given the fact that the lattice QCD simulations are performed with pion masses ranging from the (almost) physical point to 700 MeV, such studies provide a useful check on both the chiral effective field theory approaches and the lattice QCD simulations. Nevertheless, more precise lattice QCD simulations are eagerly needed to refine our understanding of hyperon-nucleon interactions.

  8. Resonant conversions of QCD axions into hidden axions and suppressed isocurvature perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitajima, Naoya; Takahashi, Fuminobu, E-mail: kitajima@tuhep.phys.tohoku.ac.jp, E-mail: fumi@tuhep.phys.tohoku.ac.jp

    2015-01-01

    We study in detail MSW-like resonant conversions of QCD axions into hidden axions, including cases where the adiabaticity condition is only marginally satisfied, and where anharmonic effects are non-negligible. When the resonant conversion is efficient, the QCD axion abundance is suppressed by the hidden-to-QCD axion mass ratio. We find that, when the resonant conversion is incomplete due to a weak violation of adiabaticity, the CDM isocurvature perturbations can be significantly suppressed, while the non-Gaussianity of the isocurvature perturbations generically remains unsuppressed. The isocurvature bounds on the inflation scale can therefore be relaxed by the partial resonant conversion of the QCD axions into hidden axions.

  9. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
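
    One way to realize a mixture-density kernel of this general kind, sketched here with an ordinary Gaussian mixture rather than the paper's Bayesian ensemble: take the posterior component memberships as an explicit feature map, so two points are similar when the fitted mixture assigns them to the same components, and the resulting Gram matrix is positive semidefinite by construction.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(-3, 1, (30, 2)), rng.normal(3, 1, (30, 2))])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    P = gmm.predict_proba(X)        # (n, k) posterior responsibilities
    K = P @ P.T                     # Gram matrix of an explicit feature map

    eigvals = np.linalg.eigvalsh(K)
    print(eigvals.min())            # no significantly negative eigenvalues: a valid Mercer kernel
    ```

    Averaging such Gram matrices over an ensemble of mixture fits (different seeds or priors) preserves positive semidefiniteness, which is the sense in which prior knowledge can be folded into the kernel.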

  10. A survey of kernel-type estimators for copula and their applications

    NASA Astrophysics Data System (ADS)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate copulas: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit with variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
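
    A sketch of the mirror-reflection estimator mentioned in the survey: each pseudo-observation in the unit square is reflected across the edges, so kernel mass that would leak past the boundary is folded back in, reducing boundary bias. The data, Gaussian kernel, and bandwidth are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    u = rng.uniform(size=200)
    v = np.clip(u + 0.1 * rng.normal(size=200), 0, 1)   # dependent pseudo-observations

    def mirror_points(u, v):
        """Reflect (u, v) across the four edges of [0,1]^2 -> 9 copies."""
        pts = []
        for ru in (-u, u, 2 - u):
            for rv in (-v, v, 2 - v):
                pts.append(np.column_stack([ru, rv]))
        return np.vstack(pts)

    P = mirror_points(u, v)
    h = 0.1   # illustrative bandwidth

    def copula_density(x, y):
        w = np.exp(-((P[:, 0] - x) ** 2 + (P[:, 1] - y) ** 2) / (2 * h * h))
        return w.sum() / (len(u) * 2 * np.pi * h * h)

    print(copula_density(0.5, 0.5))
    ```

    Without the reflections, estimates near the corners of the unit square would be biased downward, which is precisely the defect this estimator targets.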

  11. Evolution of phenotypic clusters through competition and local adaptation along an environmental gradient.

    PubMed

    Leimar, Olof; Doebeli, Michael; Dieckmann, Ulf

    2008-04-01

    We have analyzed the evolution of a quantitative trait in populations that are spatially extended along an environmental gradient, with gene flow between nearby locations. In the absence of competition, there is stabilizing selection toward a locally best-adapted trait that changes gradually along the gradient. According to traditional ideas, gradual spatial variation in environmental conditions is expected to lead to gradual variation in the evolved trait. A contrasting possibility is that the trait distribution instead breaks up into discrete clusters. Doebeli and Dieckmann (2003) argued that competition acting locally in trait space and geographical space can promote such clustering. We have investigated this possibility using deterministic population dynamics for asexual populations, analyzing our model numerically and through an analytical approximation. We examined how the evolution of clusters is affected by the shape of competition kernels, by the presence of Allee effects, and by the strength of gene flow along the gradient. For certain parameter ranges clustering was a robust outcome, and for other ranges there was no clustering. Our analysis shows that the shape of competition kernels is important for clustering: the sign structure of the Fourier transform of a competition kernel determines whether the kernel promotes clustering. Also, we found that Allee effects promote clustering, whereas gene flow can have a counteracting influence. In line with earlier findings, we could demonstrate that phenotypic clustering was favored by gradients of intermediate slope.
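
    The Fourier-sign criterion stated above can be checked numerically: a Gaussian competition kernel has a nonnegative transform, while a top-hat kernel's transform oscillates through zero and can therefore promote clustering. The discretization below is a rough quadrature for illustration, not the paper's analysis.

    ```python
    import numpy as np

    x = np.linspace(-20.0, 20.0, 2001)
    dx = x[1] - x[0]
    gauss = np.exp(-x**2 / 2)                  # Gaussian competition kernel
    box = (np.abs(x) <= 1.0).astype(float)     # top-hat competition kernel

    freqs = np.linspace(0.0, 20.0, 200)

    def spectrum(kernel):
        # the kernels are even, so the Fourier transform reduces to a cosine transform
        return np.array([(kernel * np.cos(w * x)).sum() * dx for w in freqs])

    print(spectrum(gauss).min())   # stays (numerically) nonnegative
    print(spectrum(box).min())     # dips below zero, e.g. near w ~ 4.5
    ```

    The top-hat transform is 2 sin(w)/w, whose negative lobes are the Fourier components at which a uniform trait distribution is unstable to clustering.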

  12. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    PubMed Central

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300

  13. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    PubMed

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.
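
    A greedy toy version of the boosting loop described above: at each iteration, fit each pathway kernel to the current residuals and select the one that improves prediction the most. Ridge fits stand in for the LKMT base-learners, and the pathway kernels and data are synthetic assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, n_paths = 80, 5
    Z = [rng.normal(size=(n, 10)) for _ in range(n_paths)]   # genotypes per pathway
    kernels = [z @ z.T for z in Z]                           # linear pathway kernels
    y = Z[2][:, 0] + 0.3 * rng.normal(size=n)                # pathway 2 is causal

    def ridge_fit(K, r, lam=1.0):
        """Kernel ridge prediction of residuals r using Gram matrix K."""
        alpha = np.linalg.solve(K + lam * np.eye(len(r)), r)
        return K @ alpha

    resid, selected = y - y.mean(), []
    for _ in range(3):                                       # boosting iterations
        scores = [np.sum((resid - ridge_fit(K, resid)) ** 2) for K in kernels]
        best = int(np.argmin(scores))
        selected.append(best)
        resid = resid - 0.5 * ridge_fit(kernels[best], resid)  # shrunken update

    print(selected)                                          # causal pathway recurs
    ```

    Because only the selected kernels enter the model, the final fit is sparse in pathways, mirroring the interpretable models reported in the article.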

  14. Oil point and mechanical behaviour of oil palm kernels in linear compression

    NASA Astrophysics Data System (ADS)

    Kabutey, Abraham; Herak, David; Choteborsky, Rostislav; Mizera, Čestmír; Sigalingging, Riswanti; Akangbe, Olaosebikan Layi

    2017-07-01

    The study described the oil point and mechanical properties of roasted and unroasted bulk oil palm kernels under compression loading. The information available in the literature is very limited. A universal compression testing machine and a vessel of diameter 60 mm with a plunger were used, applying a maximum force of 100 kN at speeds ranging from 5 to 25 mm min-1. The initial pressing height of the bulk kernels was 40 mm. The oil point was determined by a litmus test for each deformation level of 5, 10, 15, 20, and 25 mm at the minimum speed of 5 mm min-1. The measured parameters were the deformation, deformation energy, oil yield, oil point strain, and oil point pressure. Clearly, the roasted bulk kernels required less deformation energy than the unroasted kernels for recovering the kernel oil. However, neither kernel type was permanently deformed. The average oil point strain was determined at 0.57. The study is an essential contribution to pursuing innovative methods for processing palm kernel oil in rural areas of developing countries.
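
    The deformation energy reported in such compression tests is the area under the force-deformation curve; a trapezoid-rule sketch with a made-up hardening curve (not the paper's measurements):

    ```python
    import numpy as np

    deformation_mm = np.linspace(0.0, 25.0, 26)     # plunger displacement readings
    force_kN = 0.015 * deformation_mm ** 1.8        # synthetic force readings

    # trapezoid rule; conveniently, kN x mm = J, so no unit factor is needed
    steps = np.diff(deformation_mm)
    energy_J = float(((force_kN[1:] + force_kN[:-1]) / 2 * steps).sum())
    print(round(energy_J, 1))
    ```

    Comparing such areas between roasted and unroasted samples at the same deformation level is exactly the deformation-energy comparison made above.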

  15. Pressure Sensitivity Kernels Applied to Time-reversal Acoustics

    DTIC Science & Technology

    2009-06-29

…experimental data, along with an internal wave model, using various metrics. The linear limitations of the kernels are explored in the context of time… [table-of-contents fragments: Acknowledgments; 3.A Internal wave modeling; Bibliography] …multipaths corresponding to direct path, single surface/bottom bounce, double bounce off the surface and bottom; Bottom: time-domain sensitivity kernel for…

  16. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  17. Unconventional Signal Processing Using the Cone Kernel Time-Frequency Representation.

    DTIC Science & Technology

    1992-10-30

…Wigner-Ville distribution (WVD), the Choi-Williams distribution, and the cone kernel distribution were compared with the spectrograms. Results were… ambiguity function. Figures A-18(c) and (d) are the Wigner-Ville distribution (WVD) and CK-TFR Doppler maps. In this noiseless case all three exhibit… kernel is the basis for the well known Wigner-Ville distribution. In A-9(2), the cone kernel defined by Zhao, Atlas and Marks [21] is described…

  18. Kernel structures for Clouds

    NASA Technical Reports Server (NTRS)

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  19. The quark condensate in multi-flavour QCD – planar equivalence confronting lattice simulations

    DOE PAGES

    Armoni, Adi; Shifman, Mikhail; Shore, Graham; ...

    2015-02-01

    Planar equivalence between the large N limits of N=1 Super Yang–Mills (SYM) theory and a variant of QCD with fermions in the antisymmetric representation is a powerful tool to obtain analytic non-perturbative results in QCD itself. In particular, it allows the quark condensate for N=3 QCD with quarks in the fundamental representation to be inferred from exact calculations of the gluino condensate in N=1 SYM. In this paper, we review and refine our earlier predictions for the quark condensate in QCD with a general number nf of flavours and confront these with lattice results.

  20. Additional strange hadrons from QCD thermodynamics and strangeness freezeout in heavy ion collisions.

    PubMed

    Bazavov, A; Ding, H-T; Hegde, P; Kaczmarek, O; Karsch, F; Laermann, E; Maezawa, Y; Mukherjee, Swagato; Ohno, H; Petreczky, P; Schmidt, C; Sharma, S; Soeldner, W; Wagner, M

    2014-08-15

    We compare lattice QCD results for appropriate combinations of net strangeness fluctuations and their correlations with net baryon number fluctuations with predictions from two hadron resonance gas (HRG) models having different strange hadron content. The conventionally used HRG model based on experimentally established strange hadrons fails to describe the lattice QCD results in the hadronic phase close to the QCD crossover. Supplementing the conventional HRG with additional, experimentally uncharted strange hadrons predicted by quark model calculations and observed in lattice QCD spectrum calculations leads to good descriptions of strange hadron thermodynamics below the QCD crossover. We show that the thermodynamic presence of these additional states gets imprinted in the yields of the ground-state strange hadrons leading to a systematic 5-8 MeV decrease of the chemical freeze-out temperatures of ground-state strange baryons.

  1. Search for the pentaquark resonance signature in lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    B. G. Lasscock; J. Hedditch; Derek Leinweber

    2005-02-01

Claims concerning the possible discovery of the Θ⁺ pentaquark, with minimal quark content uudds̄, have motivated our comprehensive study into possible pentaquark states using lattice QCD. We review various pentaquark interpolating fields in the literature and create a new candidate ideal for lattice QCD simulations. Using these interpolating fields we attempt to isolate a signal for a five-quark resonance. Calculations are performed using improved actions on a large 20³ × 40 lattice in the quenched approximation. The standard lattice resonance signal of increasing attraction between baryon constituents for increasing quark mass is not observed for spin-1/2 pentaquark states. We conclude that evidence supporting the existence of a spin-1/2 pentaquark resonance does not exist in quenched QCD.

  2. Exposing the QCD Splitting Function with CMS Open Data.

    PubMed

    Larkoski, Andrew; Marzani, Simone; Thaler, Jesse; Tripathee, Aashish; Xue, Wei

    2017-09-29

    The splitting function is a universal property of quantum chromodynamics (QCD) which describes how energy is shared between partons. Despite its ubiquitous appearance in many QCD calculations, the splitting function cannot be measured directly, since it always appears multiplied by a collinear singularity factor. Recently, however, a new jet substructure observable was introduced which asymptotes to the splitting function for sufficiently high jet energies. This provides a way to expose the splitting function through jet substructure measurements at the Large Hadron Collider. In this Letter, we use public data released by the CMS experiment to study the two-prong substructure of jets and test the 1→2 splitting function of QCD. To our knowledge, this is the first ever physics analysis based on the CMS Open Data.

  3. TICK: Transparent Incremental Checkpointing at Kernel Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored, without any change to the user code or binary. With TICK, a process can be suspended by the Linux kernel upon receiving an interrupt and saved to a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a kernel module for Linux version 2.6.5.

  4. Selected inversion as key to a stable Langevin evolution across the QCD phase boundary

    NASA Astrophysics Data System (ADS)

    Bloch, Jacques; Schenk, Olaf

    2018-03-01

    We present new results of full QCD at nonzero chemical potential. In PRD 92, 094516 (2015) the complex Langevin method was shown to break down when the inverse coupling decreases and enters the transition region from the deconfined to the confined phase. We found that the stochastic technique used to estimate the drift term can be very unstable for indefinite matrices. This may be avoided by using the full inverse of the Dirac operator, which is, however, too costly for four-dimensional lattices. The major breakthrough in this work was achieved by realizing that the inverse elements necessary for the drift term can be computed efficiently using the selected inversion technique provided by the parallel sparse direct solver package PARDISO. In our new study we show that no breakdown of the complex Langevin method is encountered and that simulations can be performed across the phase boundary.

  5. Phenolic constituents of shea (Vitellaria paradoxa) kernels.

    PubMed

    Maranz, Steven; Wiesman, Zeev; Garti, Nissim

    2003-10-08

    Analysis of the phenolic constituents of shea (Vitellaria paradoxa) kernels by LC-MS revealed eight catechin compounds-gallic acid, catechin, epicatechin, epicatechin gallate, gallocatechin, epigallocatechin, gallocatechin gallate, and epigallocatechin gallate-as well as quercetin and trans-cinnamic acid. The mean kernel content of the eight catechin compounds was 4000 ppm (0.4% of kernel dry weight), with a 2100-9500 ppm range. Comparison of the profiles of the six major catechins from 40 Vitellaria provenances from 10 African countries showed that the relative proportions of these compounds varied from region to region. Gallic acid was the major phenolic compound, comprising an average of 27% of the measured total phenols and exceeding 70% in some populations. Colorimetric analysis (101 samples) of total polyphenols extracted from shea butter into hexane gave an average of 97 ppm, with the values for different provenances varying between 62 and 135 ppm of total polyphenols.

  6. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  7. Finite-frequency sensitivity kernels for head waves

    NASA Astrophysics Data System (ADS)

    Zhang, Zhigang; Shen, Yang; Zhao, Li

    2007-11-01

    Head waves are extremely important in determining the structure of the predominantly layered Earth. While several recent studies have shown the diffractive nature and the 3-D Fréchet kernels of finite-frequency turning waves, analogues of head waves in a continuous velocity structure, the finite-frequency effects and sensitivity kernels of head waves are yet to be carefully examined. We present the results of a numerical study focusing on the finite-frequency effects of head waves. Our model has a low-velocity layer over a high-velocity half-space and a cylindrical-shaped velocity perturbation placed beneath the interface at different locations. A 3-D finite-difference method is used to calculate synthetic waveforms. Traveltime and amplitude anomalies are measured by the cross-correlation of synthetic seismograms from models with and without the velocity perturbation and are compared to the 3-D sensitivity kernels constructed from full waveform simulations. The results show that the head wave arrival-time and amplitude are influenced by the velocity structure surrounding the ray path in a pattern that is consistent with the Fresnel zones. Unlike the `banana-doughnut' traveltime sensitivity kernels of turning waves, the traveltime sensitivity of the head wave along the ray path below the interface is weak, but non-zero. Below the ray path, the traveltime sensitivity reaches the maximum (absolute value) at a depth that depends on the wavelength and propagation distance. The sensitivity kernels vary with the vertical velocity gradient in the lower layer, but the variation is relatively small at short propagation distances when the vertical velocity gradient is within the range of the commonly accepted values. Finally, the depression or shoaling of the interface results in increased or decreased sensitivities, respectively, beneath the interface topography.

  8. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks, etc. In this framework, performing community detection and analyzing the cluster evolution is a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well while remaining consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is crucial for achieving good performance. We successfully test the model on four toy problems and on a real-world network. We also compare our model with Evolutionary Spectral Clustering, a state-of-the-art algorithm for community detection in evolving networks, illustrating that kernel spectral clustering with memory effect can achieve equal or better performance.

  9. DNA sequence+shape kernel enables alignment-free modeling of transcription factor binding.

    PubMed

    Ma, Wenxiu; Yang, Lin; Rohs, Remo; Noble, William Stafford

    2017-10-01

Transcription factors (TFs) bind to specific DNA sequence motifs. Several lines of evidence suggest that TF-DNA binding is mediated in part by properties of the local DNA shape: the width of the minor groove, the relative orientations of adjacent base pairs, etc. Several methods have been developed to jointly account for DNA sequence and shape properties in predicting TF binding affinity. However, a limitation of these methods is that they typically require a training set of aligned TF binding sites. We describe a sequence + shape kernel that leverages DNA sequence and shape information to better understand protein-DNA binding preference and affinity. This kernel extends an existing class of k-mer based sequence kernels, based on the recently described di-mismatch kernel. Using three in vitro benchmark datasets, derived from universal protein binding microarrays (uPBMs), genomic context PBMs (gcPBMs) and SELEX-seq data, we demonstrate that incorporating DNA shape information improves our ability to predict protein-DNA binding affinity. In particular, we observe that (i) the k-spectrum + shape model performs better than the classical k-spectrum kernel, particularly for small k values; (ii) the di-mismatch kernel performs better than the k-mer kernel, for larger k; and (iii) the di-mismatch + shape kernel performs better than the di-mismatch kernel for intermediate k values. The software is available at https://bitbucket.org/wenxiu/sequence-shape.git. Contact: rohs@usc.edu or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
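The classical k-spectrum kernel that the record compares against maps a sequence to the counts of its overlapping length-k substrings and takes an inner product of those count vectors. A minimal pure-Python sketch (the di-mismatch and shape extensions of the paper are not shown):

```python
from collections import Counter

def spectrum_features(seq, k):
    """Count all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    """k-spectrum kernel: inner product of the two k-mer count vectors."""
    c1, c2 = spectrum_features(s1, k), spectrum_features(s2, k)
    return sum(c1[m] * c2[m] for m in c1)

print(spectrum_kernel("GATTACA", "GATTACA", k=2))  # self-similarity: 6
print(spectrum_kernel("GATTACA", "CCCCCCC", k=2))  # no shared 2-mers: 0
```

Because the feature map is explicit (k-mer counts), the kernel is symmetric and positive semidefinite by construction, which is what lets it plug into SVM-style learners.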

  10. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that remedies this drawback, but KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with state-of-the-art methods.

  11. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
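One plausible reading of the mixed kernel function described above is a convex combination of a polynomial kernel (global behaviour) and a Gaussian RBF kernel (local behaviour); the weight, degree, and bandwidth below are illustrative choices, not the paper's values:

```python
import math

def poly_kernel(x, y, degree=3):
    """Polynomial kernel: captures global trend behaviour."""
    return (sum(a * b for a, b in zip(x, y)) + 1.0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel: captures local behaviour."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def mixed_kernel(x, y, w=0.5, degree=3, gamma=0.5):
    """Convex combination of the two kernels; a nonnegative sum of valid
    kernels is itself a valid (positive semidefinite) kernel."""
    return w * poly_kernel(x, y, degree) + (1.0 - w) * rbf_kernel(x, y, gamma)

x, y = [1.0, 2.0], [2.0, 0.5]
print(mixed_kernel(x, y))
```

An SVR meta-model built on such a kernel inherits both characteristics, which is the property the abstract credits for handling high-dimensional, nonlinear problems.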

  12. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  13. Global QCD Analysis of the Nucleon Tensor Charge with Lattice QCD Constraints

    NASA Astrophysics Data System (ADS)

    Shows, Harvey, III; Melnitchouk, Wally; Sato, Nobuo

    2017-09-01

By studying the parton distribution functions (PDFs) of a nucleon, we probe the partonic scale of nature, exploring what it means to be a nucleon. In this study, we are interested in the transversity PDF, the least studied of the three collinear PDFs. By conducting a global analysis of experimental data from semi-inclusive deep inelastic scattering (SIDIS), as well as single-inclusive e+e- annihilation (SIA), we extract the fit parameters needed to describe the transverse momentum dependent (TMD) transversity PDF, as well as the Collins fragmentation function. Once the collinear transversity PDF is obtained by integrating the extracted TMD PDF, we wish to resolve discrepancies between lattice QCD calculations and phenomenological extractions of the tensor charge from data. Here we show our results for the transversity distribution and tensor charge. Using our method of iterative Monte Carlo, we now have a more robust understanding of the transversity PDF. With these results we are able to progress in our understanding of TMD PDFs, as well as test the efficacy of current lattice QCD calculations. This work is made possible through support from NSF award 1659177 to Old Dominion University.

  14. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    PubMed

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation, or the dependency on a hard threshold in the similarity measure, in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity to enhance sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm that deemphasizes outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. Comparison with state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. A similar performance improvement is also found on an existing large dataset. Therefore, the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request: sarkar@labri.fr.

  15. QCDOC: A 10-teraflops scale computer for lattice QCD

    NASA Astrophysics Data System (ADS)

    Chen, D.; Christ, N. H.; Cristian, C.; Dong, Z.; Gara, A.; Garg, K.; Joo, B.; Kim, C.; Levkova, L.; Liao, X.; Mawhinney, R. D.; Ohta, S.; Wettig, T.

    2001-03-01

    The architecture of a new class of computers, optimized for lattice QCD calculations, is described. An individual node is based on a single integrated circuit containing a PowerPC 32-bit integer processor with a 1 Gflops 64-bit IEEE floating point unit, 4 Mbyte of memory, 8 Gbit/sec nearest-neighbor communications and additional control and diagnostic circuitry. The machine's name, QCDOC, derives from "QCD On a Chip".

  16. A dry-inoculation method for nut kernels.

    PubMed

    Blessington, Tyann; Theofel, Christopher G; Harris, Linda J

    2013-04-01

    A dry-inoculation method for almonds and walnuts was developed to eliminate the need for the postinoculation drying required for wet-inoculation methods. The survival of Salmonella enterica Enteritidis PT 30 on wet- and dry-inoculated almond and walnut kernels stored under ambient conditions (average: 23 °C; 41 or 47% RH) was then compared over 14 weeks. For wet inoculation, an aqueous Salmonella preparation was added directly to almond or walnut kernels, which were then dried under ambient conditions (3 or 7 days, respectively) to initial nut moisture levels. For the dry inoculation, liquid inoculum was mixed with sterilized sand and dried for 24 h at 40 °C. The dried inoculated sand was mixed with kernels, and the sand was removed by shaking the mixture in a sterile sieve. Mixing procedures to optimize the bacterial transfer from sand to kernel were evaluated; in general, similar levels were achieved on walnuts (4.8-5.2 log CFU/g) and almonds (4.2-5.1 log CFU/g). The decline of Salmonella Enteritidis populations was similar during ambient storage (98 days) for both wet-and dry-inoculation methods for both almonds and walnuts. The dry-inoculation method mimics some of the suspected routes of contamination for tree nuts and may be appropriate for some postharvest challenge studies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Nuclear physics from lattice QCD at strong coupling.

    PubMed

    de Forcrand, Ph; Fromm, M

    2010-03-19

    We study numerically the strong coupling limit of lattice QCD with one flavor of massless staggered quarks. We determine the complete phase diagram as a function of temperature and chemical potential, including a tricritical point. We clarify the nature of the low temperature dense phase, which is strongly bound "nuclear" matter. This strong binding is explained by the nuclear potential, which we measure. Finally, we determine, from this first-principles limiting case of QCD, the masses of "atomic nuclei" up to A=12 "carbon".

  18. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Chemical components of cold pressed kernel oils from different Torreya grandis cultivars.

    PubMed

    He, Zhiyong; Zhu, Haidong; Li, Wangling; Zeng, Maomao; Wu, Shengfang; Chen, Shangwei; Qin, Fang; Chen, Jie

    2016-10-15

The chemical compositions of cold pressed kernel oils of seven Torreya grandis cultivars from China were analyzed in this study. The contents of the chemical components of T. grandis kernels and kernel oils varied to different extents with the cultivar. The T. grandis kernels contained relatively high oil and protein content (45.80-53.16% and 10.34-14.29%, respectively). The kernel oils were rich in unsaturated fatty acids, including linoleic (39.39-47.77%), oleic (30.47-37.54%) and eicosatrienoic acid (6.78-8.37%). The kernel oils contained abundant bioactive substances such as tocopherols (0.64-1.77 mg/g) consisting of α-, β-, γ- and δ-isomers; sterols, including β-sitosterol (0.90-1.29 mg/g), campesterol (0.06-0.32 mg/g) and stigmasterol (0.04-0.18 mg/g); and polyphenols (9.22-22.16 μg GAE/g). The results revealed that the T. grandis kernel oils possess potentially important nutritional and health benefits and could be used as oils in the human diet or as functional ingredients in the food industry. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. QCD with Chiral Imbalance: models vs. lattice

    NASA Astrophysics Data System (ADS)

    Andrianov, Alexander; Andrianov, Vladimir; Espriu, Domenec

    2017-03-01

In heavy ion collisions (HIC) at high energies, new phases of matter may appear which must be described by QCD. These phases may have different color and flavour symmetries associated with the constituents involved in the collisions, as well as various space-time symmetries of hadron matter. Properties of the QCD medium in such matter can be approximately described, in particular, by the numbers of right-handed (RH) and left-handed (LH) light quarks. The chiral imbalance (ChI) is characterized by the difference between the numbers of RH and LH quarks and supposedly occurs in the fireball after a HIC. Accordingly, we introduce a quark chiral (axial) chemical potential which simulates a ChI emerging in such a phase. In this report we discuss the possibility of a phase with Local spatial Parity Breaking (LPB) in such an environment and outline conceivable signatures for the registration of LPB, as well as the appearance of new states in the spectra of scalar, pseudoscalar and vector particles as a consequence of local ChI. A comparison of the results obtained in the effective QCD-motivated models with lattice data is also performed.

1. Model for nucleon valence structure functions at all x, all p⊥ and all Q² from the correspondence between QCD and DTU

    NASA Astrophysics Data System (ADS)

    Cohen-Tannoudji, G.; El Hassouni, A.; Mantrach, A.; Oudrhiri-Safiani, E. G.

    1982-09-01

We propose a simple parametrization of the nucleon valence structure functions at all x, all p⊥ and all Q². We use the DTU parton model to fix the parametrization at a reference point (Q₀² = 3 GeV²) and we mimic the QCD evolution by replacing the dimensioned parameters of the DTU parton model by functions depending on Q². Excellent agreement is obtained with existing data.

  2. Susceptibility of the QCD vacuum to CP-odd electromagnetic background fields.

    PubMed

    D'Elia, Massimo; Mariti, Marco; Negro, Francesco

    2013-02-22

We investigate two flavor quantum chromodynamics (QCD) in the presence of CP-odd electromagnetic background fields and determine, by means of lattice QCD simulations, the induced effective θ term to first order in E⃗ · B⃗. We employ a rooted staggered discretization and study lattice spacings down to 0.1 fm and Goldstone pion masses around 480 MeV. In order to deal with a positive measure, we consider purely imaginary electric fields and real magnetic fields, and then exploit the analytic continuation. Our results are relevant to a description of the effective pseudoscalar quantum electrodynamics-QCD interactions.

  3. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.

4. QCD equation of state to O(μB⁶) from lattice QCD

    NASA Astrophysics Data System (ADS)

    Bazavov, A.; Ding, H.-T.; Hegde, P.; Kaczmarek, O.; Karsch, F.; Laermann, E.; Maezawa, Y.; Mukherjee, Swagato; Ohno, H.; Petreczky, P.; Sandmeyer, H.; Steinbrecher, P.; Schmidt, C.; Sharma, S.; Soeldner, W.; Wagner, M.

    2017-03-01

We calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV] using up to four different sets of lattice cutoffs corresponding to lattices of size Nσ³ × Nτ with aspect ratio Nσ/Nτ = 4 and Nτ = 6-16. The strange quark mass is tuned to its physical value, and we use two strange to light quark mass ratios, ms/ml = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 and 140 MeV, respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μB ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √sNN ≈ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth-order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μB plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. We argue that results on sixth-order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μB/T ≤ 2 and T/Tc(μB = 0) > 0.9.
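Restricted to the baryon chemical potential direction only (a simplified illustration; the full expansion in the record also carries strangeness and electric-charge potentials), the sixth-order Taylor expansion of the pressure has the standard schematic form, with only even powers surviving by CP symmetry:

```latex
\frac{P(T,\mu_B)}{T^4} \;=\; \frac{P(T,0)}{T^4}
  \;+\; \frac{\chi_2^B(T)}{2!}\left(\frac{\mu_B}{T}\right)^{2}
  \;+\; \frac{\chi_4^B(T)}{4!}\left(\frac{\mu_B}{T}\right)^{4}
  \;+\; \frac{\chi_6^B(T)}{6!}\left(\frac{\mu_B}{T}\right)^{6}
  \;+\; \mathcal{O}\!\left(\mu_B^{8}\right)
```

Here the χ_n^B(T) are the baryon number susceptibilities computed on the lattice at μB = 0; the quoted truncation-error estimate compares the sixth-order term against the fourth-order expansion.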

  5. QCD equation of state to O(μ_B⁶) from lattice QCD

    DOE PAGES

    Bazavov, A.; Ding, H. -T.; Hegde, P.; ...

    2017-03-07

    In this work, we calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV] using up to four different sets of lattice cut-offs corresponding to lattices of size N_σ³ × N_τ with aspect ratio N_σ/N_τ = 4 and N_τ = 6-16. The strange quark mass is tuned to its physical value and we use two strange to light quark mass ratios m_s/m_l = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 MeV and 140 MeV respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μ_B ≤ 2T). The fourth-order equation of state is thus suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √s_NN ~ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth-order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μ_B plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. Lastly, we argue that results on sixth-order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μ_B/T ≤ 2 and T/T_c(μ_B = 0) > 0.9.

  6. η and η' mesons from lattice QCD.

    PubMed

    Christ, N H; Dawson, C; Izubuchi, T; Jung, C; Liu, Q; Mawhinney, R D; Sachrajda, C T; Soni, A; Zhou, R

    2010-12-10

    The large mass of the ninth pseudoscalar meson, the η', is believed to arise from the combined effects of the axial anomaly and the gauge field topology present in QCD. We report a realistic, 2+1-flavor, lattice QCD calculation of the η and η' masses and mixing which confirms this picture. The physical eigenstates show small octet-singlet mixing with a mixing angle of θ=-14.1(2.8)°. Extrapolation to the physical light quark mass gives, with statistical errors only, mη=573(6) MeV and mη'=947(142) MeV, consistent with the experimental values of 548 and 958 MeV.

  7. Mapping the QCD Phase Transition with Accreting Compact Stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaschke, D.; Bogoliubov Laboratory for Theoretical Physics, JINR Dubna, Joliot-Curie str. 6, 141980 Dubna; Poghosyan, G.

    2008-10-29

    We discuss an idea for how accreting millisecond pulsars could contribute to the understanding of the QCD phase transition in the high-density nuclear matter equation of state (EoS). It is based on two ingredients, the first one being a "phase diagram" of rapidly rotating compact star configurations in the plane of spin frequency and mass, determined with state-of-the-art hybrid equations of state, allowing for a transition to color superconducting quark matter. The second is the study of spin-up and accretion evolution in this phase diagram. We show that the quark matter phase transition leads to a characteristic line in the ω-M plane, the phase border between neutron stars and hybrid stars with a quark matter core. Along this line a drop in the pulsar's moment of inertia entails a waiting point phenomenon in the accreting millisecond pulsar (AMXP) evolution: most of these objects should therefore be found along the phase border in the ω-M plane, which may be viewed as the AMXP analog of the main sequence in the Hertzsprung-Russell diagram for normal stars. In order to prove the existence of a high-density phase transition in the cores of compact stars we need population statistics for AMXPs with sufficiently accurate determination of their masses, spin frequencies and magnetic fields.

  8. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    PubMed

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions have not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two different locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variations, respectively. Increases of KW and KT and reductions of the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda 2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs will help realize the breeding goals.

  9. Sivers and Boer-Mulders observables from lattice QCD.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musch, B. U.; Hagler, Ph.; Engelhardt, M.; Negele, J. W.; Schafer, A.

    We present a first calculation of transverse momentum dependent nucleon observables in dynamical lattice QCD employing non-local operators with staple-shaped, 'process-dependent' Wilson lines. The use of staple-shaped Wilson lines allows us to link lattice simulations to TMD effects determined from experiment, and in particular to access non-universal, naively time-reversal odd TMD observables. We present and discuss results for the generalized Sivers and Boer-Mulders transverse momentum shifts for the SIDIS and DY cases. The effect of staple-shaped Wilson lines on T-even observables is studied for the generalized tensor charge and a generalized transverse shift related to the worm gear function g_1T. We emphasize the dependence of these observables on the staple extent and the Collins-Soper evolution parameter. Our numerical calculations use an n_f = 2+1 mixed action scheme with domain wall valence fermions on an Asqtad sea and pion masses of 369 MeV as well as 518 MeV.

  10. Merging weak and QCD showers with matrix elements

    DOE PAGES

    Christiansen, Jesper Roy; Prestel, Stefan

    2016-01-22

    In this study, we present a consistent way of combining associated weak boson radiation in hard dijet events with hard QCD radiation in Drell–Yan-like scatterings. This integrates multiple tree-level calculations with vastly different cross sections, QCD- and electroweak parton-shower resummation into a single framework. The new merging strategy is implemented in the Pythia event generator and predictions are confronted with LHC data. Improvements over the previous strategy are observed. Results of the new electroweak-improved merging at a future 100 TeV proton collider are also investigated.

  11. Merging weak and QCD showers with matrix elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, Jesper Roy; Prestel, Stefan

    In this study, we present a consistent way of combining associated weak boson radiation in hard dijet events with hard QCD radiation in Drell–Yan-like scatterings. This integrates multiple tree-level calculations with vastly different cross sections, QCD- and electroweak parton-shower resummation into a single framework. The new merging strategy is implemented in the Pythia event generator and predictions are confronted with LHC data. Improvements over the previous strategy are observed. Results of the new electroweak-improved merging at a future 100 TeV proton collider are also investigated.

  12. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted feature based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method, utilizing the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into some uniform subregions and calculate the corresponding recognition rate and weight. Then, we get a weighted feature Gaussian kernel function and construct a classifier based on Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good recognition rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method has achieved encouraging recognition results compared to the state-of-the-art methods. PMID:27807443
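    The weighting idea described in this abstract can be illustrated with a small sketch: a Gaussian kernel whose squared distance is weighted per feature, with weights derived from per-subregion recognition rates. All numbers (rates, σ, sample vectors) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of a feature-weighted Gaussian kernel: each feature (standing in
# for a subregion descriptor) is scaled by a weight derived from that
# subregion's recognition rate.  Rates, sigma and vectors are illustrative.
def weighted_gaussian_kernel(x, y, w, sigma=1.0):
    d2 = np.sum(w * (x - y) ** 2)            # weighted squared distance
    return np.exp(-d2 / (2.0 * sigma ** 2))

rates = np.array([0.9, 0.6, 0.3])            # per-subregion recognition rates
w = rates / rates.sum()                      # normalize rates into weights

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 0.0, 2.0])
k = weighted_gaussian_kernel(x, y, w)        # in (0, 1]; equals 1 iff x == y
```

    A kernel of this form is a valid SVM kernel (it is a Gaussian kernel on linearly rescaled features), so it can be plugged into any standard SVM solver.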

  13. Searching Remote Homology with Spectral Clustering with Symmetry in Neighborhood Cluster Kernels

    PubMed Central

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation or dependency on hard threshold in similarity measure in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of “recent” paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with the corrections based on a modified symmetry-based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. A similar performance improvement is also found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection even in multi-domain and promiscuous domain proteins from Genolevures database families with better biological relevance. Source code available upon request. Contact: sarkar@labri.fr. PMID:23457439
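    The core mechanism here, spectral clustering on top of a kernel (similarity) matrix, can be sketched generically. The toy block-structured matrix below stands in for the article's combined local alignment kernels; it is an illustration, not protein data.

```python
import numpy as np

# Sketch of spectral clustering on a precomputed kernel matrix:
# normalized-Laplacian embedding followed by a simple 1-D split.
# The toy similarity matrix is illustrative, not a sequence kernel.
def spectral_embedding(K, dim=1):
    d = K.sum(axis=1)
    L = np.diag(d) - K                       # unnormalized graph Laplacian
    Dinv = np.diag(1.0 / np.sqrt(d))
    Lsym = Dinv @ L @ Dinv                   # symmetric normalized Laplacian
    vals, vecs = np.linalg.eigh(Lsym)        # eigenvalues in ascending order
    return vecs[:, 1:1 + dim]                # skip the trivial eigenvector

# Two obvious blocks of mutually similar items (e.g. two families).
K = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])
emb = spectral_embedding(K).ravel()
labels = (emb > emb.mean()).astype(int)      # split the Fiedler embedding
```

    The sign pattern of the second-smallest eigenvector (the Fiedler vector) recovers the two similarity blocks, which is the effect the article exploits to reduce inter-cluster walks.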

  14. Celluclast 1.5L pretreatment enhanced aroma of palm kernels and oil after kernel roasting.

    PubMed

    Zhang, Wencan; Zhao, Fangju; Yang, Tiankui; Zhao, Feifei; Liu, Shaoquan

    2017-12-01

    The aroma of palm kernel oil (PKO) affects its applications. Little information is available on how enzymatic modification of palm kernels (PK) affects PK and PKO aroma after kernel roasting. Celluclast (cellulase) pretreatment of PK resulted in a 2.4-fold increment in the concentration of soluble sugars, with glucose being increased by 6.0-fold. Higher levels of 1.7-, 1.8- and 1.9-fold of O-heterocyclic volatile compounds were found in the treated PK after roasting at 180 °C for 8, 14 and 20 min respectively relative to the corresponding control, with furfural, 5-methyl-2-furancarboxaldehyde, 2-furanmethanol and maltol in particularly higher amounts. Volatile differences between PKOs from control and treated PK were also found, though less obvious owing to the aqueous extraction process. Principal component analysis based on aroma-active compounds revealed that upon the proceeding of roasting, the differentiation between control and treated PK was enlarged while that of corresponding PKOs was less clear-cut. Celluclast pretreatment enabled the medium roasted PK to impart more nutty, roasty and caramelic odor and the corresponding PKO to impart more caramelic but less roasty and burnt notes. Celluclast pretreatment of PK followed by roasting may be a promising new way of improving PKO aroma. © 2017 Society of Chemical Industry.

  15. QCD and Multiparticle Production - Proceedings of the XXIX International Symposium on Multiparticle Dynamics

    NASA Astrophysics Data System (ADS)

    Sarcevic, Ina; Tan, Chung-I.

    2000-07-01

    from Dilepton Production in Relativistic Heavy-Ion Collisions * Session Chairman: I. Sarcevic * Transport-Theoretical Analysis of Reaction Dynamics, Particle Production and Freeze-out at RHIC * Inclusive Particle Spectra and Exotic Particle Searches Using STAR * The First Fermi in a High Energy Nuclear Collision * Probing the Space-Time Evolution of Heavy Ion Collisions with Bremsstrahlung * Thursday afternoon session: Hadronic Final States - Conveners: E. de Wolf and J. Gary * Session Chairman: F. Verbeure * QCD with SLD * QCD at LEP II * Multidimensional Analysis of the Bose-Einstein Correlations at DELPHI * Study of Color Singlet with Gluonic Subsinglet by Color Effective Hamiltonian * Correlations and Fluctuations - Conveners: R. Hwa and M. Tannenbaum * Session Chairman: R. C. Hwa -- Fluctuations in Heavy-Ion Collisions * Scale-Local Statistical Measures and the Multiparticle Final State * Centrality and ET Fluctuations from p + Be to Au + Au at AGS Energies * Order Parameter of Single Event * Multiplicities, Transverse Momenta and Their Correlations from Percolating Colour Strings * Probing the QCD Critical Point in Nuclear Collisions * Event-by-Event Fluctuations in Pb + Pb Collisions at the CERN SPS * Friday morning session: High Energy Collisions and Cosmic-Ray/Astrophysics - Conveners: F. Halzen and T. Stanev * Session Chairman: U. Sukhatme * Rethinking the Eikonal Approximation * QCD and Total Cross-Sections * The Role of Multiple Parton Collisions in Hadron Collisions * Effective Cross Sections and Spatial Structure of the Hadrons * Looking for the Odderon * QCD in Embedded Coordinates * Session Chairman: F. Bopp * Extensive Air Showers and Hadronic Interaction Models * Penetration of the Earth by Ultrahigh Energy Neutrinos and the Parton Distributions Inside the Nucleon * Comparison of Prompt Muon Observations to Charm Expectations * Friday afternoon session: Recent Developments - Conveners: R. Brower and I. Sarcevic * Session Chairman: G. Guralnik * The

  16. Topological susceptibility in finite temperature (2 +1 )-flavor QCD using gradient flow

    NASA Astrophysics Data System (ADS)

    Taniguchi, Yusuke; Kanaya, Kazuyuki; Suzuki, Hiroshi; Umeda, Takashi; WHOT-QCD Collaboration

    2017-03-01

    We compute the topological charge and its susceptibility in finite temperature (2+1)-flavor QCD on the lattice applying a gradient flow method. With the Iwasaki gauge action and nonperturbatively O(a)-improved Wilson quarks, we perform simulations on a fine lattice with a ≃ 0.07 fm at a heavy u, d quark mass with m_π/m_ρ ≃ 0.63, but approximately physical s quark mass with m_ηss/m_φ ≃ 0.74. In a temperature range from T ≃ 174 MeV (N_t = 16) to 697 MeV (N_t = 4), we study two topics on the topological susceptibility. One is a comparison of gluonic and fermionic definitions of the topological susceptibility. Because the two definitions are related by chiral Ward-Takahashi identities, their equivalence is not trivial for lattice quarks which violate the chiral symmetry explicitly at finite lattice spacings. The gradient flow method enables us to compute them without being bothered by the chiral violation. We find a good agreement between the two definitions with Wilson quarks. The other is a comparison with a prediction of the dilute instanton gas approximation, which is relevant in a study of axions as a candidate of the dark matter in the evolution of the Universe. We find that the topological susceptibility shows a decrease with T which is consistent with the predicted χ_t(T) ∝ (T/T_pc)^(-8) for three-flavor QCD even at low temperature T_pc
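    The dilute-instanton-gas comparison uses the quoted power law; a minimal numerical sketch of that scaling (the normalization is illustrative, not a lattice value):

```python
# Sketch: dilute-instanton-gas-like scaling of the topological
# susceptibility, chi_t(T) = chi_t(Tpc) * (T/Tpc)**(-8), as quoted above
# for three-flavor QCD.  The normalization chi_at_Tpc is illustrative.
def chi_t(T, Tpc, chi_at_Tpc):
    return chi_at_Tpc * (T / Tpc) ** (-8)

# Doubling the temperature suppresses chi_t by a factor 2^8 = 256.
ratio = chi_t(2.0, 1.0, 1.0) / chi_t(1.0, 1.0, 1.0)
```

    This steep suppression is what makes the high-temperature susceptibility relevant for axion dark-matter estimates: the axion mass tracks χ_t(T) during the evolution of the Universe.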

  17. Multiple kernel learning in protein-protein interaction extraction from biomedical literature.

    PubMed

    Yang, Zhihao; Tang, Nan; Zhang, Xiao; Lin, Hongfei; Li, Yanpeng; Yang, Zhiwei

    2011-03-01

    Knowledge about protein-protein interactions (PPIs) unveils the molecular mechanisms of biological processes. The volume and content of published biomedical literature on protein interactions is expanding rapidly, making it increasingly difficult for interaction database administrators, responsible for content input and maintenance, to detect and manually update protein interaction information. The objective of this work is to develop an effective approach to automatic extraction of PPI information from biomedical literature. We present a weighted multiple kernel learning-based approach for automatic PPI extraction from biomedical literature. The approach combines the following kernels: feature-based, tree, graph and part-of-speech (POS) path. In particular, we extend the shortest path-enclosed tree (SPT) and dependency path tree to capture richer contextual information. Our experimental results show that the combination of SPT and dependency path tree extensions contributes to the improvement of performance by almost 0.7 percentage units in F-score and 2 percentage units in area under the receiver operating characteristics curve (AUC). Combining two or more appropriately weighted individual kernels further improves the performance. Both on the individual corpus and in cross-corpus evaluation our combined kernel can achieve state-of-the-art performance with respect to comparable evaluations, with 64.41% F-score and 88.46% AUC on the AImed corpus. As different kernels calculate the similarity between two sentences from different aspects, our combined kernel can reduce the risk of missing important features. More specifically, we use a weighted linear combination of individual kernels instead of assigning the same weight to each individual kernel, thus allowing the introduction of each kernel to incrementally contribute to the performance improvement. In addition, SPT and dependency path tree extensions can improve the performance by including richer context information
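    The weighted linear combination of kernels described above can be sketched directly: K = Σᵢ wᵢ Kᵢ with nonnegative weights, which stays a valid kernel. The toy kernel matrices and weights below are illustrative, not the article's learned values.

```python
import numpy as np

# Sketch of a weighted multiple-kernel combination: a convex combination
# of individual kernel matrices.  Matrices and weights are illustrative.
def combine_kernels(kernels, weights):
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    return sum(wi * Ki for wi, Ki in zip(w, kernels))

K_feat = np.array([[1.0, 0.2], [0.2, 1.0]])   # e.g. a feature-based kernel
K_tree = np.array([[1.0, 0.6], [0.6, 1.0]])   # e.g. a tree kernel
K = combine_kernels([K_feat, K_tree], [0.7, 0.3])
```

    Because each Kᵢ is positive semidefinite and the weights are nonnegative, the combination is positive semidefinite as well, so it can be passed to any kernel classifier; tuning the weights is what lets each kernel "incrementally contribute" as the abstract puts it.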

  18. Moving Forward to Constrain the Shear Viscosity of QCD Matter

    DOE PAGES

    Denicol, Gabriel; Monnai, Akihiko; Schenke, Björn

    2016-05-26

    In this work, we demonstrate that measurements of rapidity differential anisotropic flow in heavy-ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio η/s of QCD matter. Comparing results from hydrodynamic calculations with experimental data from RHIC, we find evidence for a small η/s ≈ 0.04 in the QCD crossover region and a strong temperature dependence in the hadronic phase. A temperature independent η/s is disfavored by the data. We further show that measurements of the event-by-event flow as a function of rapidity can be used to independently constrain the initial state fluctuations in three dimensions and the temperature dependent transport properties of QCD matter.

  19. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel-fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points with maximum DM disappearance of 6.9% at 24 h and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in 2 and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the
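    The geometric mean particle size (GMPS) determination by dry sieving described above can be sketched as a mass-weighted geometric mean, in the spirit of the ASABE S319 sieving method: each fraction's nominal size is the geometric mean of adjacent sieve apertures. The sieve apertures come from the list in the abstract; the retained masses and the exact formula are assumptions for illustration.

```python
import math

# Sketch of a GMPS computation from dry sieving (ASABE-S319-style):
# nominal fraction size = geometric mean of adjacent apertures,
# GMPS = mass-weighted geometric mean.  Masses are illustrative.
def gmps(apertures_mm, retained_mass_g):
    # size of material caught between consecutive sieves (descending list)
    sizes = [math.sqrt(a * b) for a, b in zip(apertures_mm, apertures_mm[1:])]
    total = sum(retained_mass_g)
    log_mean = sum(m * math.log(s)
                   for m, s in zip(retained_mass_g, sizes)) / total
    return math.exp(log_mean)

apertures = [9.50, 6.70, 4.75, 3.35]   # mm, a subset of the sieves listed above
masses = [10.0, 20.0, 15.0]            # g retained between consecutive sieves
size = gmps(apertures, masses)         # mm
```

    The result necessarily falls between the smallest and largest nominal fraction sizes, which is why GMPS summarizes the whole retained-mass distribution in a single length scale.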

  20. Boundary conditions for gas flow problems from anisotropic scattering kernels

    NASA Astrophysics Data System (ADS)

    To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline

    2015-10-01

    The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity, temperature, and discontinuities including velocity slip and temperature jump at the wall are obtained. Two scattering kernels, the Dadzie and Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction dependent slip behavior of anisotropic interfaces.

  1. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    PubMed

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated within classes and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known explicitly, while the inner product of the discriminative feature, with the kernel matrix embedded, is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.
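    The classification stage above relies on the standard kernel trick: only inner products through the kernel matrix are needed, never the explicit feature map. A generic sketch of that principle (an RBF-kernel nearest-class-centroid rule with toy data, not the paper's learned dictionary):

```python
import numpy as np

# Sketch: classifying with kernel evaluations only -- a nearest-class-
# centroid rule in the kernel-induced feature space, using
# <phi(x), mean_i phi(x_i)> = mean_i k(x, x_i).  Data are illustrative.
def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def centroid_score(x, class_samples, gamma=1.0):
    # similarity of phi(x) to the class centroid, via kernel values only
    return np.mean([rbf(x, xi, gamma) for xi in class_samples])

class0 = [np.array([0.0, 0.0]), np.array([0.1, 0.0])]
class1 = [np.array([2.0, 2.0]), np.array([2.1, 2.0])]
x = np.array([0.05, 0.1])
pred = 0 if centroid_score(x, class0) > centroid_score(x, class1) else 1
```

    The same pattern is what makes a linear SVM applicable in the paper's setting: the decision function only ever consumes kernel-embedded inner products.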

  2. Ambered kernels in stenospermocarpic fruit of eastern black walnut

    Treesearch

    Michele R. Warmund; J.W. Van Sambeek

    2014-01-01

    "Ambers" is a term used to describe poorly filled, shriveled eastern black walnut (Juglans nigra L.) kernels with a dark brown or black-colored pellicle that are unmarketable. Studies were conducted to determine the incidence of ambered black walnut kernels and to ascertain when symptoms were apparent in specific tissues. The occurrence of...

  3. Antioxidant and antimicrobial activities of bitter and sweet apricot (Prunus armeniaca L.) kernels.

    PubMed

    Yiğit, D; Yiğit, N; Mavi, A

    2009-04-01

    The present study describes the in vitro antimicrobial and antioxidant activity of methanol and water extracts of sweet and bitter apricot (Prunus armeniaca L.) kernels. The antioxidant properties of apricot kernels were evaluated by determining radical scavenging power, lipid peroxidation inhibition activity and total phenol content measured with a DPPH test, the thiocyanate method and the Folin method, respectively. In contrast to extracts of the bitter kernels, both the water and methanol extracts of sweet kernels have antioxidant potential. The highest percent inhibition of lipid peroxidation (69%) and total phenolic content (7.9 +/- 0.2 microg/mL) were detected in the methanol extract of sweet kernels (Hasanbey) and in the water extract of the same cultivar, respectively. The antimicrobial activities of the above extracts were also tested against human pathogenic microorganisms using a disc-diffusion method, and the minimal inhibitory concentration (MIC) values of each active extract were determined. The most effective antibacterial activity was observed in the methanol and water extracts of bitter kernels and in the methanol extract of sweet kernels against the Gram-positive bacteria Staphylococcus aureus. Additionally, the methanol extracts of the bitter kernels were very potent against the Gram-negative bacteria Escherichia coli (0.312 mg/mL MIC value). Significant anti-candida activity was also observed with the methanol extract of bitter apricot kernels against Candida albicans, consisting of an inhibition zone 14 mm in diameter and a 0.625 mg/mL MIC value.
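    The percent-inhibition figures quoted above follow the usual assay arithmetic, comparing absorbance with and without the extract. A minimal sketch, with illustrative absorbance values (not the study's raw data):

```python
# Sketch: percent inhibition as commonly computed in DPPH-type
# radical-scavenging assays.  Absorbance values are illustrative.
def percent_inhibition(abs_control, abs_sample):
    return 100.0 * (abs_control - abs_sample) / abs_control

inh = percent_inhibition(0.80, 0.25)   # -> 68.75 %
```

    A value near 69%, as reported for the Hasanbey methanol extract, would correspond to the sample absorbance dropping to roughly a third of the control.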

  4. Acute cyanide toxicity caused by apricot kernel ingestion.

    PubMed

    Suchard, J R; Wallace, K L; Gerkin, R D

    1998-12-01

    A 41-year-old woman ingested apricot kernels purchased at a health food store and became weak and dyspneic within 20 minutes. The patient was comatose and hypothermic on presentation but responded promptly to antidotal therapy for cyanide poisoning. She was later treated with a continuous thiosulfate infusion for persistent metabolic acidosis. This is the first reported case of cyanide toxicity from apricot kernel ingestion in the United States since 1979.

  5. Nutrition quality of extraction mannan residue from palm kernel cake on broiler chicken

    NASA Astrophysics Data System (ADS)

    Tafsin, M.; Hanafi, N. D.; Kejora, E.; Yusraini, E.

    2018-02-01

    This study aims to determine the nutrient quality of the residue of palm kernel cake after mannan extraction for broiler chickens by evaluating physical quality (specific gravity, bulk density and compacted bulk density), chemical quality (proximate analysis and Van Soest test) and a biological test (metabolizable energy). Treatments comprised T0: palm kernel cake extracted with aquadest (control), T1: palm kernel cake extracted with acetic acid (CH3COOH) 1%, T2: palm kernel cake extracted with aquadest + mannanase enzyme 100 u/l and T3: palm kernel cake extracted with acetic acid (CH3COOH) 1% + mannanase enzyme 100 u/l. The results showed that mannan extraction had a significant effect (P<0.05) in improving the physical quality and numerically increased the value of crude protein and decreased the value of NDF (Neutral Detergent Fiber). Treatments had a highly significant influence (P<0.01) on the metabolizable energy value of palm kernel cake residue in broiler chickens. It can be concluded that extraction with aquadest + mannanase enzyme 100 u/l yields the best nutrient quality of palm kernel cake residue for broiler chicken.

  6. Tetraquarks in holographic QCD

    NASA Astrophysics Data System (ADS)

    Gutsche, Thomas; Lyubovitskij, Valery E.; Schmidt, Ivan

    2017-08-01

    Using a soft-wall AdS/QCD approach we derive the Schrödinger-type equation of motion for the tetraquark wave function, which is dual to the dimension-4 AdS bulk profile. The latter coincides with the number of constituents in the leading Fock state of the tetraquark. The obtained equation of motion is solved analytically, providing predictions for both the tetraquark wave function and its mass. A low mass limit for possible tetraquark states is given by M ≥ 2κ = 1 GeV, where κ = 0.5 GeV is the typical value of the scale parameter in soft-wall AdS/QCD. We confirm the results recently reported by the COMPASS Collaboration on the discovery of the a1(1414) state, interpreted as a tetraquark state composed of light quarks and having J^PC = 1^++. Our prediction for the mass of this state, M_a1 = √2 GeV ≃ 1.414 GeV, is in good agreement with the COMPASS result M_a1 = 1.414^{+0.015}_{-0.013} GeV. We also include finite quark mass effects, which are essential for tetraquark states involving heavy quarks.
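    The two numbers quoted, the bound M ≥ 2κ = 1 GeV and the prediction M_a1 = √2 GeV, are both consistent with a Regge-like soft-wall spectrum of the form M_n² = 4κ²(n + 1). That closed form is an assumption made here only to check the arithmetic, not a derivation from the paper:

```python
import math

# Sketch: an assumed Regge-like soft-wall spectrum M_n^2 = 4*kappa^2*(n+1),
# chosen because it reproduces both masses quoted in the abstract for
# kappa = 0.5 GeV.  Illustrative consistency check, not the paper's result.
def mass(n, kappa=0.5):
    return 2.0 * kappa * math.sqrt(n + 1)

m_ground = mass(0)   # 1.0 GeV: the quoted lower bound M >= 2*kappa
m_a1 = mass(1)       # sqrt(2) GeV ~ 1.414 GeV: the a1(1414) candidate
```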

  7. Topology and evolution of technology innovation networks

    NASA Astrophysics Data System (ADS)

    Valverde, Sergi; Solé, Ricard V.; Bedau, Mark A.; Packard, Norman

    2007-11-01

    The web of relations linking technological innovation can be fairly described in terms of patent citations. The resulting patent citation network provides a picture of the large-scale organization of innovations and its time evolution. Here we study the patterns of change of patents registered by the U.S. Patent and Trademark Office. We show that the scaling behavior exhibited by this network is consistent with a preferential attachment mechanism together with a Weibull-shaped aging term. Such an attachment kernel is shared by scientific citation networks, thus indicating a universal type of mechanism linking ideas and designs and their evolution. The implications for evolutionary theory of innovation are discussed.
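    The growth mechanism described above, preferential attachment modulated by a Weibull-shaped aging term, can be sketched as a toy simulation. The attachment kernel form A(k, τ) ∝ k · (τ/b)^(a-1) · exp(−(τ/b)^a) and all parameter values are illustrative assumptions, not the fitted values of the study.

```python
import math
import random

# Sketch: citation-network growth with a preferential-attachment kernel
# damped by a Weibull-shaped aging factor.  Parameters are illustrative.
def weibull_aging(tau, a=2.0, b=5.0):
    return (tau / b) ** (a - 1) * math.exp(-((tau / b) ** a))

def grow(n_nodes, seed=1):
    random.seed(seed)
    indeg = [1]                               # one seed "patent"
    for t in range(1, n_nodes):
        # attachment weight: in-degree times age-dependent damping
        weights = [indeg[i] * weibull_aging(t - i) for i in range(t)]
        total = sum(weights)
        r, acc = random.random() * total, 0.0
        for i, w in enumerate(weights):       # one new citation per node
            acc += w
            if acc >= r:
                indeg[i] += 1
                break
        indeg.append(1)
    return indeg

degs = grow(200)
```

    The aging factor peaks at intermediate age and vanishes for old nodes, so unlike pure preferential attachment, very old patents stop accumulating citations, which is the qualitative behavior the article reports for patent citation networks.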

  8. Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting.

    PubMed

    Zivoli, Rosanna; Gambacorta, Lucia; Piemontese, Luca; Solfrizzo, Michele

    2016-01-19

    The efficacy of color sorting on reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxin B₁ and B₂ were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB₁ + AFB₂, whereas AFG₁ and AFG₂ were not detected. Excellent results were obtained by manual sorting of peeled kernels since the removal of discolored kernels (2.6%-19.9% of total peeled kernels) removed 97.3%-99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%-99% of aflatoxins accumulated in naturally contaminated samples. The electronic optical sorter gave highly variable results since the amount of AFB₁ + AFB₂ measured in rejected fractions (15%-18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01-0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples and ranged from 0.06 to 1.50 μg/kg for AFB₁ and from 0.06 to 1.79 μg/kg for total aflatoxins.

  9. Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting

    PubMed Central

    Zivoli, Rosanna; Gambacorta, Lucia; Piemontese, Luca; Solfrizzo, Michele

    2016-01-01

    The efficacy of color sorting on reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxin B1 and B2 were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB1 + AFB2, whereas AFG1 and AFG2 were not detected. Excellent results were obtained by manual sorting of peeled kernels since the removal of discolored kernels (2.6%–19.9% of total peeled kernels) removed 97.3%–99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%–99% of aflatoxins accumulated in naturally contaminated samples. The electronic optical sorter gave highly variable results since the amount of AFB1 + AFB2 measured in rejected fractions (15%–18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01–0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples and ranged from 0.06 to 1.50 μg/kg for AFB1 and from 0.06 to 1.79 μg/kg for total aflatoxins. PMID:26797635

  10. New QCD sum rules based on canonical commutation relations

    NASA Astrophysics Data System (ADS)

    Hayata, Tomoya

    2012-04-01

    A new derivation of QCD sum rules from canonical commutators is developed. It is a simple and straightforward generalization of the Thomas-Reiche-Kuhn sum rule, built on the Kugo-Ojima operator formalism of a non-abelian gauge theory and a suitable subtraction of UV divergences. By applying the method to the vector and axial-vector currents in QCD, the exact Weinberg sum rules are examined. Vector current sum rules and new fractional-power sum rules are also discussed.

  11. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    NASA Astrophysics Data System (ADS)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly-inherited Alzheimer's disease, and used it to predict the time to clinical onset for subjects carrying a genetic mutation.
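    The kernel-selection idea in this record — scoring candidate kernel compositions and keeping the best — can be sketched in a minimal 1-D form. This is not the paper's CKL algorithm or its energy function: the base kernels, the fixed noise level, and the BIC-style score below are illustrative assumptions.

    ```python
    import numpy as np

    def rbf(X1, X2, ls=1.0):
        """Squared-exponential kernel on 1-D inputs."""
        d2 = (X1[:, None] - X2[None, :]) ** 2
        return np.exp(-0.5 * d2 / ls ** 2)

    def linear(X1, X2):
        """Linear (dot-product) kernel on 1-D inputs."""
        return np.outer(X1, X2)

    def log_marginal_likelihood(K, y, noise=1e-2):
        """Exact GP log marginal likelihood via a Cholesky factorization."""
        n = len(y)
        L = np.linalg.cholesky(K + noise * np.eye(n))
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
                - 0.5 * n * np.log(2 * np.pi))

    def select_kernel(X, y, candidates):
        """Score each candidate with a BIC-style criterion,
        -2*LML + n_params*log(n), and return the best-scoring name."""
        n = len(y)
        best_name, best_bic = None, np.inf
        for name, (kfun, n_params) in candidates.items():
            bic = -2.0 * log_marginal_likelihood(kfun(X, X), y) + n_params * np.log(n)
            if bic < best_bic:
                best_name, best_bic = name, bic
        return best_name

    # toy data: a smooth nonlinear signal should not prefer the linear kernel
    X = np.linspace(0.0, 4.0, 40)
    y = np.sin(2.0 * X)
    candidates = {
        "linear": (lambda a, b: linear(a, b), 1),
        "rbf": (lambda a, b: rbf(a, b), 1),
        "rbf+linear": (lambda a, b: rbf(a, b) + linear(a, b), 2),
    }
    best = select_kernel(X, y, candidates)
    ```

    The extra-parameter penalty plays the role the record assigns to BIC: richer compositions must buy their complexity with a better data fit.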

  12. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    PubMed

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still deliver state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
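    The closed-form Kronecker kernel ridge regression this record builds on admits a compact sketch: the system (G ⊗ H + λI) vec(C) = vec(Y) can be solved from the eigendecompositions of the two factor kernels without ever forming the Kronecker product. The function and variable names below are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np

    def kronecker_krr_fit(G, H, Y, lam=1.0):
        """Solve (G (x) H + lam*I) vec(C) = vec(Y) for the dual coefficients C.

        With G = U diag(lg) U^T and H = V diag(lh) V^T, the eigenvalues of
        G (x) H are the products lg_i * lh_j, so the solve is elementwise."""
        lg, U = np.linalg.eigh(G)
        lh, V = np.linalg.eigh(H)
        S = U.T @ Y @ V
        S = S / (np.outer(lg, lh) + lam)
        return U @ S @ V.T

    def kronecker_krr_predict(G, H, C):
        """In-sample predictions: the matrix form of (G (x) H) vec(C)."""
        return G @ C @ H.T
    ```

    The eigendecomposition route costs O(n³ + m³) instead of O(n³m³) for the naive Kronecker solve, which is what makes the closed-form instantiations in the record "efficient".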

  13. Heavy-quark production in gluon fusion at two loops in QCD

    NASA Astrophysics Data System (ADS)

    Czakon, M.; Mitov, A.; Moch, S.

    2008-07-01

    We present the two-loop virtual QCD corrections to the production of heavy quarks in gluon fusion. The results are exact in the limit when all kinematical invariants are large compared to the mass of the heavy quark up to terms suppressed by powers of the heavy-quark mass. Our derivation uses a simple relation between massless and massive QCD scattering amplitudes as well as a direct calculation of the massive amplitude at two loops. The results presented here together with those obtained previously for quark-quark scattering form important parts of the next-to-next-to-leading order QCD corrections to heavy-quark production in hadron-hadron collisions.

  14. Higgs boson couplings to bottom quarks: two-loop supersymmetry-QCD corrections.

    PubMed

    Noth, David; Spira, Michael

    2008-10-31

    We present two-loop supersymmetry (SUSY) QCD corrections to the effective bottom Yukawa couplings within the minimal supersymmetric extension of the standard model (MSSM). The effective Yukawa couplings include the resummation of the nondecoupling corrections Δm_b for large values of tan β. We have derived the two-loop SUSY-QCD corrections to the leading SUSY-QCD and top-quark-induced SUSY-electroweak contributions to Δm_b. The scale dependence of the resummed Yukawa couplings is reduced from O(10%) to the percent level. These results reduce the theoretical uncertainties of the MSSM Higgs branching ratios to the accuracy which can be achieved at a future linear e⁺e⁻ collider.

  15. Flavor-singlet baryons in the graded symmetry approach to partially quenched QCD

    NASA Astrophysics Data System (ADS)

    Hall, Jonathan M. M.; Leinweber, Derek B.

    2016-11-01

    Progress in the calculation of the electromagnetic properties of baryon excitations in lattice QCD presents new challenges in the determination of sea-quark loop contributions to matrix elements. A reliable estimation of the sea-quark loop contributions represents a pressing issue in the accurate comparison of lattice QCD results with experiment. In this article, an extension of the graded symmetry approach to partially quenched QCD is presented, which builds on previous theory by explicitly including flavor-singlet baryons in its construction. The formalism takes into account the interactions among both octet and singlet baryons, octet mesons, and their ghost counterparts; the latter enables the isolation of the quark-flow disconnected sea-quark loop contributions. The introduction of flavor-singlet states enables systematic studies of the internal structure of Λ-baryon excitations in lattice QCD, including the topical Λ(1405).

  16. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, QWMK-ELM regards the combination coefficients of the base kernels as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  17. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, QWMK-ELM regards the combination coefficients of the base kernels as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
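    The composite-kernel construction described in this record — a weighted sum of base kernel matrices fed to a kernel ELM — can be sketched as follows. The kernel parameters and the fixed weights below are purely illustrative stand-ins for the values QPSO would optimize, and only three of the four base kernels are shown.

    ```python
    import numpy as np

    def gaussian_kernel(X1, X2, gamma=0.5):
        """Gaussian (RBF) base kernel."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def polynomial_kernel(X1, X2, degree=2, c=1.0):
        """Polynomial base kernel."""
        return (X1 @ X2.T + c) ** degree

    def sigmoid_kernel(X1, X2, a=0.01, c=0.0):
        """Sigmoid base kernel."""
        return np.tanh(a * (X1 @ X2.T) + c)

    def composite_kernel(X1, X2, weights):
        """Weighted sum of base kernel matrices. In the paper the weights are
        found by QPSO; here they are simply given and normalized to sum to 1."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        bases = [gaussian_kernel(X1, X2), polynomial_kernel(X1, X2),
                 sigmoid_kernel(X1, X2)]
        return sum(wi * Ki for wi, Ki in zip(w, bases))

    def kelm_train(K, Y, C=100.0):
        """Kernel ELM output weights: solve (K + I/C) beta = Y."""
        return np.linalg.solve(K + np.eye(K.shape[0]) / C, Y)
    ```

    Normalizing the weights makes the composite kernel a convex combination, so rescaling all weights together leaves the model unchanged.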

  18. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
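    The Fourier-space shortcut behind fASKS rests on the convolution theorem: once the kernel is treated as stationary (the linearity approximation), scatter superposition is a circular convolution and can be computed with FFTs instead of spatial sums. The toy functions below demonstrate only that equivalence on synthetic arrays; they are not the ASKS/fASKS algorithms, which adapt the kernel to local thickness.

    ```python
    import numpy as np

    def scatter_estimate_direct(primary, kernel):
        """Reference spatial-domain circular convolution (quartic cost)."""
        n, m = primary.shape
        out = np.zeros_like(primary)
        for i in range(n):
            for j in range(m):
                for u in range(n):
                    for v in range(m):
                        out[i, j] += primary[u, v] * kernel[(i - u) % n, (j - v) % m]
        return out

    def scatter_estimate_fft(primary, kernel):
        """Same circular convolution via the convolution theorem:
        two forward FFTs, a pointwise product, and one inverse FFT."""
        return np.real(np.fft.ifft2(np.fft.fft2(primary) * np.fft.fft2(kernel)))
    ```

    Replacing the quadruple loop with FFTs is what buys fASKS its speed; the adaptive variant gives some of that back to let the kernel vary across the projection.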

  19. Notes on a storage manager for the Clouds kernel

    NASA Technical Reports Server (NTRS)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  20. Metabolite identification through multiple kernel learning on fragmentation trees.

    PubMed

    Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

    2014-06-15

    Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list.
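    One simple instance of a fragmentation-tree kernel of the kind combined in this record is a set-intersection kernel on labeled tree edges, each encoded as a (parent fragment, neutral loss) pair. This is far simpler than the kernel family the paper proposes, and the molecular formulas in the test are hypothetical toy values.

    ```python
    import math

    def tree_edge_kernel(tree_a, tree_b):
        """Set-intersection kernel on labeled fragmentation-tree edges; each
        edge is a (parent fragment, neutral loss) pair. Counting common
        features of two sets is a valid positive semi-definite kernel."""
        return len(set(tree_a) & set(tree_b))

    def normalized_tree_kernel(tree_a, tree_b):
        """Cosine-normalized variant: k(a,b) / sqrt(k(a,a) * k(b,b))."""
        denom = math.sqrt(tree_edge_kernel(tree_a, tree_a) *
                          tree_edge_kernel(tree_b, tree_b))
        return tree_edge_kernel(tree_a, tree_b) / denom if denom else 0.0
    ```

    Several such kernels (on edges, paths, subtrees, and so on) could then be linearly combined, which is the role multiple kernel learning plays in the record.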

  1. Efficient Multiple Kernel Learning Algorithms Using Low-Rank Representation.

    PubMed

    Niu, Wenjia; Xia, Kewen; Zu, Baokai; Bai, Jianchuan

    2017-01-01

    Unlike the Support Vector Machine (SVM), Multiple Kernel Learning (MKL) is free to choose useful kernels based on the distribution characteristics of a dataset rather than committing to a single precise one. It has been shown in the literature that MKL achieves superior recognition accuracy compared with SVM, albeit at the expense of time-consuming computations. This creates analytical and computational difficulties in solving MKL algorithms. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm using the Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce dimension while retaining the data features under a global low-rank constraint. Furthermore, we redesign the binary-class MKL as a multiclass MKL based on a pairwise strategy. Finally, the recognition effect and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm is an efficient kernel-weight allocation method in MKL and substantially boosts the performance of MKL.
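    The record's LR-MKL is built on Low-Rank Representation; as a generic illustration of why low-rank kernel surrogates cut cost, here is the standard Nyström column-sampling approximation. This is an illustrative stand-in for the idea of replacing a full kernel matrix with a low-rank factorization, not the paper's algorithm.

    ```python
    import numpy as np

    def nystrom_approximation(K, landmarks):
        """Low-rank surrogate K ~ C W^+ C^T built from a subset of columns:
        C holds the sampled columns and W the landmark-by-landmark block.
        Exact whenever the landmarks span the range of a low-rank K."""
        C = K[:, landmarks]
        W = K[np.ix_(landmarks, landmarks)]
        return C @ np.linalg.pinv(W) @ C.T
    ```

    Downstream solvers then work with the n x r factor instead of the full n x n Gram matrix, which is the source of the efficiency gain that motivates low-rank MKL variants.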

  2. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain

    NASA Astrophysics Data System (ADS)

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C. M.; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside a region of interest, e.g. the brain, such as the air-tissue interface. In the vicinity of air-tissue boundaries, e.g. the skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient, and these regions often need to be excluded by brain mask erosion at the expense of losing local field information and thus susceptibility measures in these regions. In this paper, we propose an extension of the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed with its kernel radius and weights adjustable according to an energy functional reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formulation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and high weight at the sphere center, in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated by external sources can be effectively removed to obtain a more accurate estimate of the local field and thus of the QSM dipole inversion to map local tissue susceptibility sources.
Numerical simulation, phantom and in vivo human brain data demonstrate improved performance of R-SHARP compared to V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions

  3. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy based on a 20 ppb threshold utilizing the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
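    The processing chain in this record — averaged spectra, PCA compression, then K-nearest-neighbor classification of threshold-defined labels — can be sketched on synthetic two-class "spectra". The peak positions, noise level, and component count below are invented for illustration; this is not the study's data or pipeline.

    ```python
    import numpy as np

    def pca_fit(X, n_components):
        """Principal components via SVD of the mean-centered data."""
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:n_components]

    def pca_transform(X, mu, components):
        return (X - mu) @ components.T

    def knn_predict(X_train, y_train, X_test, k=3):
        """Plain k-nearest-neighbor majority vote on Euclidean distance."""
        preds = []
        for x in X_test:
            idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
            preds.append(np.bincount(y_train[idx]).argmax())
        return np.array(preds)

    # synthetic stand-in spectra: two classes with shifted, rescaled peaks
    rng = np.random.default_rng(0)
    wl = np.linspace(0.0, 1.0, 50)
    healthy = np.exp(-((wl - 0.40) ** 2) / 0.01) + 0.05 * rng.standard_normal((20, 50))
    contam = 0.6 * np.exp(-((wl - 0.50) ** 2) / 0.01) + 0.05 * rng.standard_normal((20, 50))
    X = np.vstack([healthy, contam])
    y = np.array([0] * 20 + [1] * 20)
    X_tr, y_tr, X_te, y_te = X[::2], y[::2], X[1::2], y[1::2]
    mu, comps = pca_fit(X_tr, 5)
    preds = knn_predict(pca_transform(X_tr, mu, comps), y_tr,
                        pca_transform(X_te, mu, comps))
    accuracy = (preds == y_te).mean()
    ```

    The peak shift toward longer wavelengths with lower intensity mimics, very loosely, the spectral differences the study reports between healthy and contaminated kernels.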

  4. Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme

    PubMed Central

    Warfield, Colleen Y.; Gilchrist, David G.

    1999-01-01

    Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675

  5. Three-point Green functions in the odd sector of QCD

    NASA Astrophysics Data System (ADS)

    Kadavý, T.; Kampf, K.; Novotný, J.

    2016-11-01

    A review of familiar results for the three-point Green functions of currents in the odd-intrinsic-parity sector of QCD is presented. Such Green functions include the very well-known examples of the VVP, VAS or AAP correlators. We also briefly present some of the new results for the VVA and AAA Green functions, with a discussion of their high-energy behaviour and its relation to the QCD condensates.

  6. Differential metabolome analysis of field-grown maize kernels in response to drought stress

    USDA-ARS?s Scientific Manuscript database

    Drought stress constrains maize kernel development and can exacerbate aflatoxin contamination. In order to identify drought responsive metabolites and explore pathways involved in kernel responses, a metabolomics analysis was conducted on kernels from a drought tolerant line, Lo964, and a sensitive ...

  7. Considering causal genes in the genetic dissection of kernel traits in common wheat.

    PubMed

    Mohler, Volker; Albrecht, Theresa; Castell, Adelheid; Diethelm, Manuela; Schweizer, Günther; Hartl, Lorenz

    2016-11-01

    Genetic factors controlling thousand-kernel weight (TKW) were characterized for their association with other seed traits, including kernel width, kernel length, ratio of kernel width to kernel length (KW/KL), kernel area, and spike number per m² (SN). For this purpose, a genetic map was established utilizing a doubled haploid population derived from a cross between German winter wheat cultivars Pamier and Format. Association studies in a diversity panel of elite cultivars supplemented genetic analysis of kernel traits. In both populations, genomic signatures of 13 candidate genes for TKW and kernel size were analyzed. Major quantitative trait loci (QTL) for TKW were identified on chromosomes 1B, 2A, 2D, and 4D, and their locations coincided with major QTL for kernel size traits, supporting the common belief that TKW is a function of other kernel traits. The QTL on chromosome 2A was associated with TKW candidate gene TaCwi-A1 and the QTL on chromosome 4D was associated with dwarfing gene Rht-D1. A minor QTL for TKW on chromosome 6B coincided with TaGW2-6B. The QTL for kernel dimensions that did not affect TKW were detected on eight chromosomes. A major QTL for KW/KL located at the distal tip of chromosome arm 5AS is being reported for the first time. TaSus1-7A and TaSAP-A1, closely linked to each other on chromosome 7A, could be related to a minor QTL for KW/KL. Genetic analysis of SN confirmed its negative correlation with TKW in this cross. In the diversity panel, TaSus1-7A was associated with TKW. Compared to the Pamier/Format bi-parental population where TaCwi-A1a was associated with higher TKW, the same allele reduced grain yield in the diversity panel, suggesting opposite effects of TaCwi-A1 on these two traits.

  8. Experimenting with Langevin lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavai, R.V.; Potvin, J.; Sanielevici, S.

    1987-05-01

    We report on the status of our investigations of the effects of systematic errors upon the practical merits of Langevin updating in full lattice QCD. We formulate some rules for the safe use of this updating procedure and some observations on problems which may be common to all approximate fermion algorithms.

  9. Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.

    PubMed

    Lima, Clodoaldo A M; Coelho, André L V

    2011-10-01

    We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitiveness to the type of feature and to the kernel function/parameter value. Overall, the results evidence that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs
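    The sensitivity study in this record sweeps the radius parameter of Gaussian and exponential radial basis function kernels; a small sketch shows why that parameter matters so much: as the radius shrinks the Gram matrix collapses toward the identity, and as it grows all pairs look alike. The synthetic data and the 26-point radius grid below are illustrative assumptions, not the paper's EEG features.

    ```python
    import numpy as np

    def gaussian_rbf(X1, X2, radius):
        """Gaussian RBF kernel, exp(-||x - x'||^2 / (2 r^2))."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * radius ** 2))

    def exponential_rbf(X1, X2, radius):
        """Exponential RBF kernel, exp(-||x - x'|| / r)."""
        d = np.sqrt(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
        return np.exp(-d / radius)

    def kernel_radius_sweep(X, radii, kernel):
        """Mean off-diagonal similarity as the radius varies: near zero the
        Gram matrix approaches the identity; for large radii it saturates at 1."""
        n = len(X)
        off = ~np.eye(n, dtype=bool)
        return [kernel(X, X, r)[off].mean() for r in radii]
    ```

    Both extremes are degenerate for classification (all points dissimilar, or all points identical), which is why calibrating the radius dominates the sensitivity profiles the record describes.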

  10. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    DTIC Science & Technology

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... which adds specificity to the model and can make nonlinear data more manageable. Early results show that the

  11. QCD in heavy quark production and decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiss, J.

    1997-06-01

    The author discusses how QCD is used to understand the physics of heavy quark production and decay dynamics. His discussion of production dynamics primarily concentrates on charm photoproduction data which are compared to perturbative QCD calculations which incorporate fragmentation effects. He begins his discussion of heavy quark decay by reviewing data on charm and beauty lifetimes. Present data on fully leptonic and semileptonic charm decay are then reviewed. Measurements of the hadronic weak current form factors are compared to the nonperturbative QCD-based predictions of Lattice Gauge Theories. He next discusses polarization phenomena present in charmed baryon decay. Heavy Quark Effective Theory predicts that the daughter baryon will recoil from the charmed parent with nearly 100% left-handed polarization, which is in excellent agreement with present data. He concludes by discussing nonleptonic charm decay which is traditionally analyzed in a factorization framework applicable to two-body and quasi-two-body nonleptonic decays. This discussion emphasizes the important role of final state interactions in influencing both the observed decay width of various two-body final states as well as modifying the interference between interfering resonance channels which contribute to specific multibody decays. 50 refs., 77 figs.

  12. NΩ interaction from two approaches in lattice QCD

    NASA Astrophysics Data System (ADS)

    Etminan, Faisal; Firoozabadi, Mohammad Mehdi

    2014-10-01

We compare the standard finite-volume method of Lüscher with the potential method of the HAL QCD collaboration by calculating the ground-state energy of the N(nucleon)-Ω(Omega) system in the ^5S_2 channel. We employ 2+1-flavor full QCD configurations on a (1.9 fm)^3 × 3.8 fm lattice at lattice spacing a ≃ 0.12 fm, whose ud (s) quark mass corresponds to m_π = 875(1) MeV (m_K = 916(1) MeV). We find that both methods give reasonably consistent results, indicating one NΩ bound state at this set of parameters.

  13. Exploring Flavor Physics with Lattice QCD

    NASA Astrophysics Data System (ADS)

    Du, Daping; Fermilab/MILC Collaborations

    2016-03-01

The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using gold-plated processes (such as rare B decays), which requires knowledge of the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method that can compute these quantities with competitive and systematically improvable precision using state-of-the-art simulation techniques. I will discuss recent progress in lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.

  14. Omnibus Risk Assessment via Accelerated Failure Time Kernel Machine Modeling

    PubMed Central

    Sinnott, Jennifer A.; Cai, Tianxi

    2013-01-01

Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai et al., 2011). In this paper, we derive testing and prediction methods for KM regression under the accelerated failure time model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. PMID:24328713

  15. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    PubMed

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  16. QCD Sum Rules and Models for Generalized Parton Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anatoly Radyushkin

    2004-10-01

I use QCD sum rule ideas to construct models for generalized parton distributions. To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs, and two models are discussed. One of them takes the double Borel transform at an adjusted value of the Borel parameter as a model for nonforward parton densities, and the other is based on the local duality relation. Possible ways of improving these Ansätze are briefly discussed.

  17. Anomaly inflow on QCD axial domain-walls and vortices

    NASA Astrophysics Data System (ADS)

    Fukushima, Kenji; Imaki, Shota

    2018-06-01

    We study the chiral effective theory in the presence of quantum chromodynamics (QCD) vortices. Gauge invariance requires novel terms from vortex singularities in the gauged Wess-Zumino-Witten action, which incorporate anomaly-induced currents along the vortices. We examine these terms for systems with QCD axial domain-walls bounded by vortices (vortons) under magnetic fields. We discuss how the baryon and electric charge conservations are satisfied in these systems through interplay between domain-walls and vortices, manifesting Callan-Harvey's mechanism of anomaly inflow.

  18. Kernel Wiener filter and its application to pattern recognition.

    PubMed

    Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko

    2010-11-01

The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the best estimated signal, with respect to the squared error averaged over the original and observed signals, among linear operators. The kernel WF (KWF), extended directly from the WF, has the problem that additive noise has to be handled through samples. Since the computational complexity of kernel methods depends on the number of samples, this case incurs a huge computational cost. By using a first-order approximation of the kernel functions, we realize a KWF that can handle such noise not through samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments on image denoising and error estimation. We also apply the KWF to classification, since the KWF can provide an approximated result of the maximum a posteriori classifier, which provides the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise, or as a new regularization that suppresses changes in the input space, whereas the ordinary regularization for kernel methods suppresses changes in the feature space. To show the advantages of the proposed methods, we conducted experiments on binary and multiclass classification and on classification in the presence of noise.
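
The linear baseline that the kernel WF extends can be sketched directly. The following is a minimal illustration of the classical Wiener (MMSE linear) estimator, not the paper's kernelized method; the sizes, covariances, and variable names are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical linear Wiener filter: the minimum-mean-squared-error linear
# estimator of x from y = H x + n, given signal/noise covariances.
n, m = 5, 7
H = rng.normal(size=(m, n))    # toy observation operator
Cx = np.eye(n)                 # assumed signal covariance
Cn = 0.1 * np.eye(m)           # assumed noise covariance

# Wiener operator: W = Cx H^T (H Cx H^T + Cn)^{-1}
W = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)

x = rng.normal(size=n)
y = H @ x + rng.normal(scale=np.sqrt(0.1), size=m)
x_hat = W @ y                  # best linear estimate of x
```

Because the Wiener operator minimizes the expected squared error over all linear estimators, its theoretical MSE is strictly below that of a plain least-squares inverse whenever noise is present.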

  19. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield unit in both brain and bone images were composed of CT values of bone images and other pixels were composed of CT values of brain images. Three radiologists compared the improved multi-kernel images with bone images. The improved multi-kernel images and brain images were identically displayed on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images that need to be stored can be expected.
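
The compositing rule described above has a direct array implementation; here is a minimal numpy sketch (the function name, threshold default, and sample Hounsfield-unit values are illustrative, not taken from the paper):

```python
import numpy as np

def combine_multikernel(brain_img, bone_img, threshold=100.0):
    """Composite head-CT image: pixels exceeding the threshold (in HU)
    in BOTH the brain-kernel and bone-kernel reconstructions take the
    bone-kernel CT value; all other pixels keep the brain-kernel value."""
    mask = (brain_img > threshold) & (bone_img > threshold)
    return np.where(mask, bone_img, brain_img)

# toy 2x2 images: soft tissue, dense bone, air, sub-threshold tissue
brain = np.array([[30.0, 500.0], [-10.0, 80.0]])
bone = np.array([[40.0, 900.0], [-5.0, 120.0]])
combined = combine_multikernel(brain, bone)
```

The single boolean mask makes the algorithm one vectorized pass, which matches the paper's point that the method is simple enough for routine clinical use.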

  20. Kernelization

    NASA Astrophysics Data System (ADS)

    Fomin, Fedor V.

Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is universally used in almost every implementation. The history of preprocessing, like applying reduction rules to simplify truth functions, can be traced back to the 1950's [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this anomaly was that if we start with an instance I of an NP-hard problem and can show that in polynomial time we can replace it with an equivalent instance I' with |I'| < |I|, then that would imply P = NP in classical complexity.
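
A concrete instance of such a reduction rule is Buss's classical kernel for k-Vertex-Cover, where shrinking the instance is legitimate because the parameter k shrinks with it. The sketch below is a standard textbook construction, not taken from this survey's text:

```python
def buss_kernel(edges, k):
    """Buss's kernelization for k-Vertex-Cover.
    Rule: a vertex of degree > k must belong to every cover of size <= k,
    so take it, delete its incident edges, and decrement k.
    Returns (reduced_edges, remaining_k, forced_vertices), or None when
    no vertex cover of the requested size can exist."""
    edges = set(frozenset(e) for e in edges)
    forced = set()
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None  # a size-k cover covers at most k*k edges
    return edges, k, forced

# star K_{1,5} plus one extra edge: the center is forced when k = 2
star = [(0, i) for i in range(1, 6)] + [(1, 2)]
res = buss_kernel(star, 2)
```

The surviving instance has at most k^2 edges, so the kernel's size is bounded by the parameter alone, which is exactly the guarantee that makes this preprocessing analyzable without implying P = NP.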

  1. QCD corrections to ZZ production in gluon fusion at the LHC

    DOE PAGES

    Caola, Fabrizio; Melnikov, Kirill; Rontsch, Raoul; ...

    2015-11-23

We compute the next-to-leading-order QCD corrections to the production of two Z-bosons in the annihilation of two gluons at the LHC. Being enhanced by a large gluon flux, these corrections provide a distinct and, potentially, the dominant part of the N^3LO QCD contributions to Z-pair production in proton collisions. The gg → ZZ annihilation is a loop-induced process that receives the dominant contribution from loops of five light quarks, which are included in our computation in the massless approximation. We find that QCD corrections increase the gg → ZZ production cross section by O(50%-100%), depending on the values of the renormalization and factorization scales used in the leading-order computation and on the collider energy. Furthermore, the large corrections to the gg → ZZ channel increase the pp → ZZ cross section by about 6% to 8%, exceeding the estimated theoretical uncertainty of the recent next-to-next-to-leading-order QCD calculation.

  2. Introducing etch kernels for efficient pattern sampling and etch bias prediction

    NASA Astrophysics Data System (ADS)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2018-01-01

Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels that compute the litho-etch biases measuring the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to getting a robust etch model. This work proposes a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method, applied to contact and line/space layers, shows an improvement in etch-bias prediction accuracy over a standard etch model. This work emphasizes the importance of the etch kernel definition for characterizing and predicting complex etch effects.

  3. Three-Dimensional Sensitivity Kernels of Z/H Amplitude Ratios of Surface and Body Waves

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.

    2017-12-01

    The ellipticity of Rayleigh wave particle motion, or Z/H amplitude ratio, has received increasing attention in inversion for shallow Earth structures. Previous studies of the Z/H ratio assumed one-dimensional (1D) velocity structures beneath the receiver, ignoring the effects of three-dimensional (3D) heterogeneities on wave amplitudes. This simplification may introduce bias in the resulting models. Here we present 3D sensitivity kernels of the Z/H ratio to Vs, Vp, and density perturbations, based on finite-difference modeling of wave propagation in 3D structures and the scattering-integral method. Our full-wave approach overcomes two main issues in previous studies of Rayleigh wave ellipticity: (1) the finite-frequency effects of wave propagation in 3D Earth structures, and (2) isolation of the fundamental mode Rayleigh waves from Rayleigh wave overtones and converted Love waves. In contrast to the 1D depth sensitivity kernels in previous studies, our 3D sensitivity kernels exhibit patterns that vary with azimuths and distances to the receiver. The laterally-summed 3D sensitivity kernels and 1D depth sensitivity kernels, based on the same homogeneous reference model, are nearly identical with small differences that are attributable to the single period of the 1D kernels and a finite period range of the 3D kernels. We further verify the 3D sensitivity kernels by comparing the predictions from the kernels with the measurements from numerical simulations of wave propagation for models with various small-scale perturbations. We also calculate and verify the amplitude kernels for P waves. This study shows that both Rayleigh and body wave Z/H ratios provide vertical and lateral constraints on the structure near the receiver. With seismic arrays, the 3D kernels afford a powerful tool to use the Z/H ratios to obtain accurate and high-resolution Earth models.

  4. Multiple kernel learning using single stage function approximation for binary classification problems

    NASA Astrophysics Data System (ADS)

    Shiju, S.; Sumitra, S.

    2017-12-01

In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching that function in the global RKHS which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation helps the knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
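
Both the single-stage and two-stage formulations share one building block: combining candidate kernels as K = Σ_m μ_m K_m and fitting a kernel model on the combination. The sketch below shows only that shared step with a fixed uniform combination and a kernel ridge fit; the data, kernel choices, and function names are illustrative, and this is not the paper's single-stage RKHS direct-sum construction:

```python
import numpy as np

def rbf(X, gamma):
    """Gaussian RBF kernel matrix on rows of X."""
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def mkl_ridge_fit(K_list, y, mu, lam=1e-2):
    """Kernel ridge fit on the fixed convex combination K = sum_m mu_m K_m.
    A two-stage method would choose mu first and fit second; a single-stage
    method optimises both under one cost. Here mu is simply given."""
    K = sum(m * Km for m, Km in zip(mu, K_list))
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K, alpha

# toy separable binary data and two candidate kernels at different widths
X = np.array([[0.0], [0.2], [2.0], [2.2]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
K_list = [rbf(X, 0.5), rbf(X, 5.0)]
K, alpha = mkl_ridge_fit(K_list, y, mu=[0.5, 0.5])
pred = np.sign(K @ alpha)
```

Making μ itself a variable of the same objective, rather than a separate preliminary choice, is the change the paper argues improves generalisation.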

  5. On supervised graph Laplacian embedding CA model & kernel construction and its application

    NASA Astrophysics Data System (ADS)

    Zeng, Junwei; Qian, Yongsheng; Wang, Min; Yang, Yongzhong

    2017-01-01

    There are many methods to construct kernel with given data attribute information. Gaussian radial basis function (RBF) kernel is one of the most popular ways to construct a kernel. The key observation is that in real-world data, besides the data attribute information, data label information also exists, which indicates the data class. In order to make use of both data attribute information and data label information, in this work, we propose a supervised kernel construction method. Supervised information from training data is integrated into standard kernel construction process to improve the discriminative property of resulting kernel. A supervised Laplacian embedding cellular automaton model is another key application developed for two-lane heterogeneous traffic flow with the safe distance and large-scale truck. Based on the properties of traffic flow in China, we re-calibrate the cell length, velocity, random slowing mechanism and lane-change conditions and use simulation tests to study the relationships among the speed, density and flux. The numerical results show that the large-scale trucks will have great effects on the traffic flow, which are relevant to the proportion of the large-scale trucks, random slowing rate and the times of the lane space change.
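
One generic way to fold label information into kernel construction, in the spirit described above, is to blend an attribute-based RBF kernel with an "ideal" label-agreement kernel. This is a hypothetical minimal sketch, not the paper's Laplacian-embedding construction; the blending rule and all names are assumptions:

```python
import numpy as np

def supervised_kernel(X, y, gamma=1.0, lam=0.5):
    """Blend a Gaussian RBF kernel on the attributes with an 'ideal'
    label kernel (1 where training labels agree, 0 otherwise).
    `lam` trades attribute similarity against label agreement."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared distances
    k_rbf = np.exp(-gamma * d2)
    k_lab = (y[:, None] == y[None, :]).astype(float)
    return (1 - lam) * k_rbf + lam * k_lab

X = np.array([[0.0], [0.0], [3.0]])
y = np.array([0, 1, 1])
K = supervised_kernel(X, y)
```

The effect is that same-label pairs become more similar than attribute distance alone would suggest, which is the discriminative boost the supervised construction aims for.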

  6. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
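
The idea of embedding the prior in the forward model rather than in a penalty can be sketched with a toy kernelized MLEM loop: the image is parameterized as x = Kα, with K built from composite-frame features, and the EM update is applied to α. Everything below (problem sizes, the Gaussian kernel, the random system matrix) is illustrative and is not the paper's HYPR kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Kernel matrix from high-count composite features: each pixel's row
# holds Gaussian weights to the other pixels in feature space.
feats = rng.normal(size=(6, 3))            # per-pixel composite features
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)
K /= K.sum(axis=1, keepdims=True)          # row-normalise

P = rng.uniform(0.1, 1.0, size=(8, 6))     # toy system (projection) matrix
x_true = rng.uniform(1.0, 5.0, size=6)
y = P @ x_true                             # noise-free measurements

# Kernelised MLEM: the image is x = K @ alpha; update alpha, not x,
# so the prior enters through the forward model P @ K.
alpha = np.ones(6)
sens = K.T @ (P.T @ np.ones(8))            # sensitivity (normalisation) term
for _ in range(200):
    ratio = y / (P @ (K @ alpha))
    alpha *= (K.T @ (P.T @ ratio)) / sens
x_hat = K @ alpha
```

The multiplicative update keeps x_hat positive automatically, and no explicit regularization term appears anywhere, matching the structural point made in the abstract.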

  7. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    PubMed

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentration were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures, nor pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also affected (P < 0.01) allocation of embryo chemical compounds. Negative correlations among embryo oil concentration and those of starch (r = 0.98, P < 0.01) and soluble sugars (r = 0.95, P < 0.05) were found. Coincidentally, embryos with low oil concentration had an increased (P < 0.05-0.10) scutellum cell area occupied by starch granules and fewer oil bodies. The effects of pollen source on both embryo/kernel ratio and allocation of embryo chemicals seem to be related to the early established sink strength (i.e. sink size and sink activity) of the...

  8. Influence of Initial Correlations on Evolution of a Subsystem in a Heat Bath and Polaron Mobility

    NASA Astrophysics Data System (ADS)

    Los, Victor F.

    2017-08-01

A regular approach to accounting for initial correlations, which allows one to go beyond the unrealistic random-phase (initial product state) approximation in deriving evolution equations, is suggested. Exact homogeneous (time-convolution and time-convolutionless) equations for a relevant part of the two-time equilibrium correlation function are obtained for the dynamic variables of a subsystem interacting with a boson field (heat bath). No conventional approximation like RPA or Bogoliubov's principle of weakening of initial correlations is used. The obtained equations take the initial correlations into account in the kernel governing their evolution. The solution to these equations is found in the second order of the kernel expansion in the electron-phonon interaction, which demonstrates that in general the initial correlations influence the evolution of the correlation function in time. It is explicitly shown that this influence vanishes on a large timescale (actually at t → ∞) and the evolution process enters an irreversible kinetic regime. The developed approach is applied to the Fröhlich polaron, and the low-temperature polaron mobility (which was under long-time debate) is found with a correction due to initial correlations.

  9. Reconstruction of noisy and blurred images using blur kernel

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Chopra, Vishal

    2017-11-01

Blur is common in digital images; it can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images. The method uses a sparse representation to identify the blur kernel: by analyzing image coordinates at coarse and fine scales, we estimate the kernel from the image coordinates and, from that observation, obtain the motion angle of the shaken or blurred image. We then calculate the length of the motion kernel using the Radon transform and Fourier analysis, and apply the Lucy-Richardson algorithm, a non-blind deconvolution (NBID) method, to produce a cleaner and less noisy output image. All of these operations are performed in the MATLAB IDE.
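
The final non-blind step can be sketched independently of the kernel-estimation stage. Below is a minimal 1-D Richardson-Lucy iteration under a circular-convolution assumption; the abstract's work uses MATLAB on 2-D images, so the kernel, signal, and iteration count here are purely illustrative:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=200):
    """Non-blind Richardson-Lucy deconvolution, 1-D circular convolution.
    y: blurred observation; psf: estimated blur kernel (sums to 1)."""
    n = len(y)
    Hf = np.fft.fft(psf, n)                     # zero-padded kernel spectrum
    conv = lambda a, F: np.real(np.fft.ifft(np.fft.fft(a) * F))
    x = np.full(n, y.mean())                    # flat positive initialisation
    for _ in range(n_iter):
        denom = np.maximum(conv(x, Hf), 1e-12)  # forward model H x
        x = x * conv(y / denom, np.conj(Hf))    # multiplicative RL update
    return x

psf = np.array([0.6, 0.3, 0.1])                 # toy motion kernel, sums to 1
x_true = np.array([1.0, 1.0, 8.0, 1.0, 1.0, 1.0, 4.0, 1.0])
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(psf, len(x_true))))
x_hat = richardson_lucy(y, psf)
```

Note that the update multiplies by the correlation (conjugate spectrum), the adjoint of the blur, which is what keeps the iterate positive and drives it toward the maximum-likelihood deblurred signal.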

  10. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.

  11. Hamiltonian Effective Field Theory Study of the N^{*}(1535) Resonance in Lattice QCD.

    PubMed

    Liu, Zhan-Wei; Kamleh, Waseem; Leinweber, Derek B; Stokes, Finn M; Thomas, Anthony W; Wu, Jia-Jun

    2016-02-26

    Drawing on experimental data for baryon resonances, Hamiltonian effective field theory (HEFT) is used to predict the positions of the finite-volume energy levels to be observed in lattice QCD simulations of the lowest-lying J^{P}=1/2^{-} nucleon excitation. In the initial analysis, the phenomenological parameters of the Hamiltonian model are constrained by experiment and the finite-volume eigenstate energies are a prediction of the model. The agreement between HEFT predictions and lattice QCD results obtained on volumes with spatial lengths of 2 and 3 fm is excellent. These lattice results also admit a more conventional analysis where the low-energy coefficients are constrained by lattice QCD results, enabling a determination of resonance properties from lattice QCD itself. Finally, the role and importance of various components of the Hamiltonian model are examined.

  12. Nucleon PDFs and TMDs from Continuum QCD

    NASA Astrophysics Data System (ADS)

    Bednar, Kyle; Cloet, Ian; Tandy, Peter

    2017-09-01

    The parton structure of the nucleon is investigated in an approach based upon QCD's Dyson-Schwinger equations. The method accommodates a variety of QCD's dynamical outcomes including: the running mass of quark propagators and formation of non-pointlike di-quark correlations. All needed elements, including the nucleon wave function solution from a Poincaré covariant Faddeev equation, are encoded in spectral-type representations in the Nakanishi style to facilitate Feynman integral procedures and allow insight into key underlying mechanisms. Results will be presented for spin-independent PDFs and TMDs arising from a truncation to allow only scalar di-quark correlations. The influence of axial-vector di-quark correlations may be discussed if results are available. Supported by NSF Grant No. PHY-1516138.

  13. Prompt atmospheric neutrino fluxes: perturbative QCD models and nuclear effects

    DOE PAGES

    Bhattacharya, Atri; Enberg, Rikard; Jeong, Yu Seon; ...

    2016-11-28

We evaluate the prompt atmospheric neutrino flux at high energies using three different frameworks for calculating the heavy quark production cross section in QCD: NLO perturbative QCD, k_T factorization including low-x resummation, and the dipole model including parton saturation. We use QCD parameters, the value of the charm quark mass, and the range for the factorization and renormalization scales that provide the best description of the total charm cross section measured at fixed-target experiments, at RHIC and at the LHC. Using these parameters we calculate differential cross sections for charm and bottom production and compare with the latest data on forward charm meson production from LHCb at 7 TeV and at 13 TeV, finding good agreement with the data. In addition, we investigate the role of nuclear shadowing by including nuclear parton distribution functions (PDF) for the target air nucleus using two different nuclear PDF schemes. Depending on the scheme used, we find the reduction of the flux due to nuclear effects varies from 10% to 50% at the highest energies. Finally, we compare our results with the IceCube limit on the prompt neutrino flux, which is already providing valuable information about some of the QCD models.

  14. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    PubMed

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning on various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
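
The core structural object here, a symmetric positive semi-definite low-rank approximation of a kernel matrix, can be sketched by plain eigenvalue truncation. The paper learns such a matrix with nuclear- and l1-norm penalties; the sketch below is only the unregularised baseline, with an illustrative toy kernel:

```python
import numpy as np

def lowrank_psd_approx(K, r):
    """Rank-r symmetric PSD approximation of a kernel matrix:
    keep the r largest (clipped-nonnegative) eigenvalues."""
    w, V = np.linalg.eigh(K)          # eigh: K assumed symmetric
    w = np.clip(w, 0.0, None)         # enforce positive semi-definiteness
    idx = np.argsort(w)[::-1][:r]
    return (V[:, idx] * w[idx]) @ V[:, idx].T

# toy Gaussian-process-style kernel on a 1-D grid
t = np.linspace(0.0, 1.0, 12)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.1)
K2 = lowrank_psd_approx(K, 3)
```

Truncating to the leading eigenvectors provides the orthogonal basis mentioned in the abstract; the robustness of the paper's method comes from learning that basis under noise-suppressing penalties rather than taking it directly from a possibly corrupted K.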

  15. Nuclear Physics and Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beane, Silas

    2003-11-01

Impressive progress is currently being made in computing properties and interactions of the low-lying hadrons using lattice QCD. However, cost limitations will, for the foreseeable future, necessitate the use of quark masses, M_q, that are significantly larger than those of nature, lattice spacings, a, that are not significantly smaller than the physical scale of interest, and lattice sizes, L, that are not significantly larger than the physical scale of interest. Extrapolations in the quark masses, lattice spacing and lattice volume are therefore required. The hierarchy of mass scales is: L^{-1} ≪ M_q ≪ Λ_χ ≪ a^{-1}. The appropriate EFT for incorporating the light quark masses, the finite lattice spacing and the lattice size into hadronic observables is χPT, which provides systematic expansions in the small parameters e^{-m_π L}, 1/(LΛ_χ), p/Λ_χ, M_q/Λ_χ and aΛ_χ. The lattice introduces other unphysical scales as well. Lattice QCD quarks will increasingly be artificially separated...

  16. An Adaptive Genetic Association Test Using Double Kernel Machines

    PubMed Central

    Zhan, Xiang; Epstein, Michael P.; Ghosh, Debashis

    2014-01-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select the important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway and test for the overall genetic pathway effect. The DKM procedure first uses the garrote kernel machine (GKM) test for subset selection and then the least squares kernel machine (LSKM) test to assess the effect of the selected subset of genes. An appealing feature of the kernel machine framework is that it provides a flexible and unified method for multi-dimensional modeling of the genetic pathway effect, allowing for both parametric and nonparametric components. The DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study. PMID:26640602
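
    The LSKM stage can be illustrated with a minimal variance-component score test using a Satterthwaite chi-square approximation. This is a sketch under stated assumptions: the linear kernel, the intercept-only null model, and the function name are illustrative, and the garrote-selection (GKM) stage is omitted.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def lskm_score_test(y, K, X=None):
        # Score (variance-component) test of H0: h(.) = 0 in the
        # semiparametric model y = X*beta + h(Z) + eps, where K is the
        # kernel Gram matrix on the pathway genes Z.
        n = len(y)
        if X is None:
            X = np.ones((n, 1))          # intercept-only null model
        P = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)  # project off X
        r = P @ y                         # null-model residuals
        sigma2 = r @ r / (n - X.shape[1])
        Q = r @ K @ r / (2.0 * sigma2)    # score statistic
        PKP = P @ K @ P
        e = np.trace(PKP) / 2.0           # null mean of Q
        v = np.trace(PKP @ PKP) / 2.0     # null variance of Q
        kappa, nu = v / (2.0 * e), 2.0 * e**2 / v  # scaled chi-square fit
        return chi2.sf(Q / kappa, nu)     # p-value
    ```

    Matching the first two null moments of Q to a scaled chi-square (the Satterthwaite step) avoids simulating the null distribution of the quadratic form.
    
    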

  17. An Adaptive Genetic Association Test Using Double Kernel Machines.

    PubMed

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select the important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway and test for the overall genetic pathway effect. The DKM procedure first uses the garrote kernel machine (GKM) test for subset selection and then the least squares kernel machine (LSKM) test to assess the effect of the selected subset of genes. An appealing feature of the kernel machine framework is that it provides a flexible and unified method for multi-dimensional modeling of the genetic pathway effect, allowing for both parametric and nonparametric components. The DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.

  18. Dissociation of heavy quarkonium in hot QCD medium in a quasiparticle model

    NASA Astrophysics Data System (ADS)

    Agotiya, Vineet Kumar; Chandra, Vinod; Jamal, M. Yousuf; Nilima, Indrani

    2016-11-01

    Following recent work on the effective description of the equations of state for hot QCD, obtained from a hard-thermal-loop expression for the gluon self-energy in terms of quasigluons and quasiquarks/antiquarks with respective effective fugacities, the dissociation of heavy quarkonium in a hot QCD medium has been investigated. This has been done by studying the medium modification of the heavy-quark potential. The medium-modified potential has a form quite different from the usual Debye-screening picture: a long-range Coulomb tail in addition to the usual Yukawa term. The flavor-dependent binding energies of the heavy quarkonia states and the dissociation temperatures have been obtained by employing the Debye mass, computed in the quasiparticle picture, for the pure gluonic case and for full QCD. The dissociation patterns of the charmonium and bottomonium states estimated in this way, with the Debye mass taken from different approaches in the pure gluonic case and in full QCD, show good agreement with other potential-model studies.
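
    A potential of this generic type (a Debye-screened Coulomb term plus a string term that saturates at large distances) can be fed into a simple radial Schrödinger solver to estimate binding energies. The specific functional form, the parameter values (alpha, string tension sigma, Debye mass mD, charm reduced mass mu, all in GeV units), and the finite-difference setup below are illustrative assumptions, not the exact potential derived in the paper.

    ```python
    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    def screened_potential(r, alpha=0.471, sigma=0.192, mD=0.5):
        # Illustrative medium-modified potential (GeV units, r in GeV^-1):
        # Debye-screened Coulomb plus a string term saturating at sigma/mD.
        return (-alpha * np.exp(-mD * r) / r
                + sigma * (1.0 - np.exp(-mD * r)) / mD)

    def ground_state_energy(mu=0.64, n=2000, rmax=40.0, **pot_kwargs):
        # Lowest eigenvalue of the radial equation for u(r) = r R(r):
        # -u''/(2 mu) + V(r) u = E u, u(0) = u(rmax) = 0, on a uniform
        # finite-difference grid (tridiagonal Hamiltonian).
        r = np.linspace(0.0, rmax, n + 2)[1:-1]
        h = r[1] - r[0]
        diag = 1.0 / (mu * h**2) + screened_potential(r, **pot_kwargs)
        off = np.full(n - 1, -1.0 / (2.0 * mu * h**2))
        return eigh_tridiagonal(diag, off, eigvals_only=True,
                                select='i', select_range=(0, 0))[0]
    ```

    In the limit of vanishing screening and string tension the solver should reproduce the Coulomb ground-state energy -mu*alpha^2/2, which provides a useful numerical check.
    
    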

  19. Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution

    DOE PAGES

    Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; ...

    2016-01-13

    In this paper, we study the transverse momentum dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering (SIDIS) processes. All the relevant coefficients are calculated up to next-to-leading logarithmic (NLL) accuracy. By applying the TMD evolution at the approximate NLL order in the Collins-Soper-Sterman (CSS) formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back di-hadron production in e+e- annihilations measured by the BELLE and BABAR Collaborations and SIDIS data from the HERMES, COMPASS, and JLab Hall A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation of our results, and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation, and Collins fragmentation functions as functions of the transverse momentum and the hard momentum scale. Finally, we give predictions and discuss the impact of future experiments.
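
    The CSS evolution underlying such an analysis can be sketched via the perturbative Sudakov exponent, which suppresses the large-b region of the TMD. The sketch below uses only the lowest-order A and B coefficients, a one-loop running coupling, and an assumed Lambda_QCD value; the actual analysis works at NLL with additional coefficients and a non-perturbative Sudakov piece.

    ```python
    import numpy as np
    from scipy.integrate import quad

    CF, CA, TF, NF = 4.0 / 3.0, 3.0, 0.5, 4
    B0 = (11.0 * CA - 4.0 * TF * NF) / (12.0 * np.pi)

    def alpha_s(mu, Lambda=0.25):
        # One-loop running coupling (Lambda_QCD value is an assumption).
        return 1.0 / (B0 * np.log(mu**2 / Lambda**2))

    def sudakov(Q, b):
        # Lowest-order perturbative CSS Sudakov exponent:
        #   S = int_{mu_b^2}^{Q^2} dmu^2/mu^2 [A ln(Q^2/mu^2) + B],
        # with A = CF alpha_s/pi and B = -3 CF alpha_s/(2 pi) for quarks,
        # and mu_b = b0/b, b0 = 2 exp(-gamma_E).
        mu_b = 2.0 * np.exp(-0.5772156649) / b
        def integrand(lnmu2):
            a = alpha_s(np.exp(lnmu2 / 2.0))
            return ((CF * a / np.pi) * (np.log(Q**2) - lnmu2)
                    - 3.0 * CF * a / (2.0 * np.pi))
        val, _ = quad(integrand, np.log(mu_b**2), np.log(Q**2))
        return val

    # The TMD evolution factor exp(-S) then damps large b (small mu_b),
    # more strongly at larger hard scales Q.
    ```
    
    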

  20. Extraction of quark transversity distribution and Collins fragmentation functions with QCD evolution

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo; Prokudin, Alexei; Sun, Peng; Yuan, Feng

    2016-01-01

    We study the transverse-momentum-dependent (TMD) evolution of the Collins azimuthal asymmetries in e+e- annihilations and semi-inclusive hadron production in deep inelastic scattering processes. All the relevant coefficients are calculated up to the next-to-leading-logarithmic-order accuracy. By applying the TMD evolution at the approximate next-to-leading-logarithmic order in the Collins-Soper-Sterman formalism, we extract transversity distributions for u and d quarks and Collins fragmentation functions from current experimental data by a global analysis of the Collins asymmetries in back-to-back dihadron productions in e+e- annihilations measured by BELLE and BABAR collaborations and semi-inclusive hadron production in deep inelastic scattering data from HERMES, COMPASS, and JLab HALL A experiments. The impact of the evolution effects and the relevant theoretical uncertainties are discussed. We further discuss the TMD interpretation for our results and illustrate the unpolarized quark distribution, transversity distribution, unpolarized quark fragmentation, and Collins fragmentation functions depending on the transverse momentum and the hard momentum scale. We make detailed predictions for future experiments and discuss their impact.