Science.gov

Sample records for kernels slowing-down

  1. Time Slows Down during Accidents.

    PubMed

    Arstila, Valtteri

    2012-01-01

    The experienced speed of the passage of time is not constant, as time can seem to fly or slow down depending on the circumstances we are in. Anecdotally, accidents and other frightening events are extreme examples of the latter; people who have survived accidents often report altered phenomenology, including how everything appeared to happen in slow motion. While the experienced phenomenology has been investigated, there are no explanations of how one can have these experiences. Instead, the only explanation discussed recently suggests that the anecdotal phenomenology is due to memory effects and hence not really experienced during the accidents. The purpose of this article is (i) to reintroduce the currently forgotten comprehensively altered phenomenology that some people experience during accidents, (ii) to explain why the recent experiments fail to address the issue at hand, and (iii) to suggest a new framework to explain what happens when people report having experiences of time slowing down in these cases. According to the suggested framework, our cognitive processes become rapidly enhanced. As a result, the relation between the temporal properties of events in the external world and in internal states becomes distorted, with the consequence that the external world appears to slow down. That is, the presented solution is a realist one in the sense that it maintains that sometimes people really do have experiences of time slowing down.

  2. Slowing down bubbles with sound

    NASA Astrophysics Data System (ADS)

    Poulain, Cedric; Dangla, Remie; Guinard, Marion

    2009-11-01

    We present experimental evidence that a bubble moving in a fluid on which a well-chosen acoustic noise is superimposed can be significantly slowed down, even for moderate acoustic pressure. Through mean velocity measurements, we show that a condition for this effect to occur is for the acoustic noise spectrum to match or overlap the bubble's fundamental resonant mode. We capture the bubble's oscillations and translational movements using high-speed video. We show that radial oscillations (Rayleigh-Plesset type) have no effect on the mean velocity, while above a critical pressure a parametric-type instability (Faraday waves) is triggered and gives rise to nonlinear surface oscillations. We show that these surface waves are subharmonic and responsible for the bubble's drag increase. When the acoustic intensity is increased, Faraday modes interact and the strongly nonlinear oscillations behave randomly, leading to random behavior of the bubble's trajectory and consequently to a stronger slowdown. Our observations may suggest new strategies for bubbly flow control or two-phase microfluidic devices. The effect might also be applicable to other elastic objects, such as globules, cells or vesicles, for medical applications such as elasticity-based sorting.
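
    The fundamental resonant mode invoked here is the bubble's breathing mode, whose frequency is given by the classical Minnaert formula. The sketch below is illustrative only and is not taken from the paper; the air-in-water parameter values are assumptions.

      import math

      def minnaert_frequency(radius_m, p0=101325.0, rho=998.0, gamma=1.4):
          # Fundamental (breathing-mode) resonance of a gas bubble in a
          # liquid via the Minnaert formula; surface tension and viscosity
          # are neglected, and the defaults (assumed here) are air in
          # water at atmospheric pressure.
          return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

      # A 1 mm radius air bubble resonates near 3.3 kHz, so the acoustic
      # noise spectrum must overlap that band for the slowdown to occur.
      print(f"{minnaert_frequency(1e-3):.0f} Hz")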

  3. Is cosmic acceleration slowing down?

    SciTech Connect

    Shafieloo, Arman; Sahni, Varun; Starobinsky, Alexei A.

    2009-11-15

    We investigate the course of cosmic expansion in its recent past using the Constitution SN Ia sample, along with baryon acoustic oscillations (BAO) and cosmic microwave background (CMB) data. Allowing the equation of state of dark energy (DE) to vary, we find that a coasting model of the universe (q_0 = 0) fits the data about as well as Lambda cold dark matter. This effect, which is most clearly seen using the recently introduced Om diagnostic, corresponds to an increase of Om and q at redshifts z ≲ 0.3. This suggests that cosmic acceleration may have already peaked and that we are currently witnessing its slowing down. The case for evolving DE strengthens if a subsample of the Constitution set consisting of SNLS+ESSENCE+CfA SN Ia data is analyzed in combination with BAO+CMB data. The effect we observe could correspond to DE decaying into dark matter (or something else).
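
    The Om diagnostic mentioned in this abstract has a simple closed form, Om(z) = [H^2(z)/H0^2 - 1]/[(1+z)^3 - 1]; it is constant and equal to Omega_m for flat Lambda cold dark matter, so any measured redshift dependence signals evolving dark energy. A minimal sketch of that property, assuming flat LCDM as the reference model:

      import numpy as np

      def E2_lcdm(z, om=0.3):
          # Squared dimensionless Hubble rate H^2/H0^2 for flat LCDM
          # (Omega_m = 0.3 is an assumed, illustrative value).
          return om * (1 + z)**3 + (1 - om)

      def om_diagnostic(z, E2):
          # Om(z) = (E^2 - 1)/((1+z)^3 - 1); deviations from a constant
          # indicate dark energy evolution, the effect probed above.
          return (E2 - 1.0) / ((1 + z)**3 - 1.0)

      z = np.array([0.1, 0.3, 0.5, 1.0])
      print(om_diagnostic(z, E2_lcdm(z)))  # 0.3 at every redshift for LCDM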

  4. Critical slowing down in a dynamic duopoly

    NASA Astrophysics Data System (ADS)

    Escobido, M. G. O.; Hatano, N.

    2015-01-01

    Anticipating critical transitions is very important in economic systems, as it can mean survival or demise of firms under stressful competition. As such, identifying indicators that can provide early warning of these transitions is crucial. In other complex systems, critical slowing down has been shown to anticipate critical transitions. In this paper, we investigate the applicability of the concept to the heterogeneous quantity competition between two firms. We develop a dynamic model in which the two firms adjust their production in a logistic process. We show that the resulting dynamics is formally equivalent to a competitive Lotka-Volterra system. We investigate the behavior of the dominant eigenvalues and identify conditions under which critical slowing down can provide early warning of the critical transitions in the dynamic duopoly.
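
    The link between the duopoly model and critical slowing down can be illustrated with the standard competitive Lotka-Volterra equations the abstract says the model reduces to. The sketch below is a generic symmetric example with assumed parameter names, not the authors' production-adjustment model itself.

      import numpy as np

      def dominant_eigenvalue(r1, r2, K1, K2, a12, a21):
          # Least-negative Jacobian eigenvalue at the coexistence
          # equilibrium of dx/dt = r1*x*(1 - (x + a12*y)/K1),
          # dy/dt = r2*y*(1 - (y + a21*x)/K2).
          D = 1.0 - a12 * a21
          x = (K1 - a12 * K2) / D      # equilibrium output of firm 1
          y = (K2 - a21 * K1) / D      # equilibrium output of firm 2
          J = np.array([[-r1 * x / K1, -r1 * a12 * x / K1],
                        [-r2 * a21 * y / K2, -r2 * y / K2]])
          return np.linalg.eigvals(J).real.max()

      # As the competition product a12*a21 approaches 1, the dominant
      # eigenvalue approaches 0: recovery from shocks becomes arbitrarily
      # slow, which is the early-warning signature studied above.
      for a in (0.5, 0.9, 0.99):
          print(a, dominant_eigenvalue(1.0, 1.0, 1.0, 1.0, a, a))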

  5. Lead Slowing Down Spectrometer Status Report

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Bonebrake, Eric; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, Victor A.; Haight, R. C.; Imel, G. R.; Kulisek, Jonathan A.; O'Donnell, J. M.; Weltz, Adam

    2012-06-07

    This report documents the progress completed in the first half of FY2012 in the MPACT-funded Lead Slowing Down Spectrometer project. Significant progress has been made on algorithm development. We have an improved understanding of the experimental responses of the LSDS to fuel-related materials. The calibration of the ultra-depleted uranium foils was completed, but the results are inconsistent from measurement to measurement. Future work includes developing a conceptual model of an LSDS system to assay plutonium in used fuel, improving agreement between simulations and measurement, designing a thorium fission chamber, and evaluating additional detector techniques.

  6. PT-symmetric slowing down of decoherence

    DOE PAGES

    Gardas, Bartlomiej; Deffner, Sebastian; Saxena, Avadh Behari

    2016-10-27

    Here, we investigate PT-symmetric quantum systems ultraweakly coupled to an environment. We find that such open systems evolve under PT-symmetric, purely dephasing and unital dynamics. The dynamical map describing the evolution is then determined explicitly using a quantum canonical transformation. Furthermore, we provide an explanation of why PT-symmetric dephasing-type interactions lead to a critical slowing down of decoherence. This effect is further exemplified with an experimentally relevant system, a PT-symmetric qubit easily realizable, e.g., in optical or microcavity experiments.

  7. PT-symmetric slowing down of decoherence

    NASA Astrophysics Data System (ADS)

    Gardas, Bartłomiej; Deffner, Sebastian; Saxena, Avadh

    2016-10-01

    We investigate PT-symmetric quantum systems ultraweakly coupled to an environment. We find that such open systems evolve under PT-symmetric, purely dephasing and unital dynamics. The dynamical map describing the evolution is then determined explicitly using a quantum canonical transformation. Furthermore, we provide an explanation of why PT-symmetric dephasing-type interactions lead to a critical slowing down of decoherence. This effect is further exemplified with an experimentally relevant system, a PT-symmetric qubit easily realizable, e.g., in optical or microcavity experiments.

  8. Lead Slowing Down Spectrometer Research Plans

    SciTech Connect

    Warren, Glen A.; Kulisek, Jonathan A.; Gavron, Victor; Danon, Yaron; Weltz, Adam; Harris, Jason; Stewart, T.

    2013-03-22

    The MPACT-funded Lead Slowing Down Spectrometry (LSDS) project has been evaluating the feasibility of using LSDS techniques to assay fissile isotopes in used nuclear fuel assemblies. The approach has the potential to provide considerable improvement in the assay of fissile isotopic masses in fuel assemblies compared to other non-destructive techniques, in a direct and independent manner. The LSDS collaboration suggests that the next step in empirically testing the feasibility is to conduct measurements on fresh fuel assemblies to investigate self-attenuation, and on fresh mixed-oxide (MOX) fuel rodlets to better understand the extraction of masses for 235U and 239Pu. While progressing toward these goals, the collaboration also strongly suggests the continued development of enabling technology, such as detector development and algorithm development, which could provide significant performance benefits.

  9. A Comprehensive Investigation on the Slowing Down of Cosmic Acceleration

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Hu, Yazhou; Li, Miao; Li, Nan

    2016-04-01

    Shafieloo et al. first proposed the possibility that the current cosmic acceleration (CA) is slowing down. However, this is rather counterintuitive because a slowing down CA cannot be accommodated in most mainstream cosmological models. In this work, by exploring the evolutionary trajectories of the dark energy equation of state w(z) and deceleration parameter q(z), we present a comprehensive investigation on the slowing down of CA from both the theoretical and the observational sides. For the theoretical side, we study the impact of different w(z) using six parametrization models, and then we discuss the effects of spatial curvature. For the observational side, we investigate the effects of different type Ia supernovae (SNe Ia), baryon acoustic oscillation (BAO), and cosmic microwave background (CMB) data. We find that (1) the evolution of CA is insensitive to the specific form of w(z); in contrast, a non-flat universe favors a slowing down CA more than a flat universe. (2) SNLS3 SNe Ia data sets favor a slowing down CA at a 1σ confidence level, while JLA SNe Ia samples prefer an eternal CA; in contrast, the effects of different BAO data are negligible. (3) Compared with CMB distance prior data, full CMB data favor a slowing down CA more. (4) Due to the low significance, the slowing down of CA is still a theoretical possibility that cannot be confirmed by the current observations.

  10. Critical Slowing Down Governs the Transition to Neuron Spiking

    PubMed Central

    Meisel, Christian; Klaus, Andreas; Kuehn, Christian; Plenz, Dietmar

    2015-01-01

    Many complex systems have been found to exhibit critical transitions, or so-called tipping points, which are sudden changes to a qualitatively different system state. These changes can profoundly impact the functioning of a system ranging from controlled state switching to a catastrophic break-down; signals that predict critical transitions are therefore highly desirable. To this end, research efforts have focused on utilizing qualitative changes in markers related to a system’s tendency to recover more slowly from a perturbation the closer it gets to the transition—a phenomenon called critical slowing down. The recently studied scaling of critical slowing down offers a refined path to understand critical transitions: to identify the transition mechanism and improve transition prediction using scaling laws. Here, we outline and apply this strategy for the first time in a real-world system by studying the transition to spiking in neurons of the mammalian cortex. The dynamical system approach has identified two robust mechanisms for the transition from subthreshold activity to spiking, saddle-node and Hopf bifurcation. Although theory provides precise predictions on signatures of critical slowing down near the bifurcation to spiking, quantitative experimental evidence has been lacking. Using whole-cell patch-clamp recordings from pyramidal neurons and fast-spiking interneurons, we show that 1) the transition to spiking dynamically corresponds to a critical transition exhibiting slowing down, 2) the scaling laws suggest a saddle-node bifurcation governing slowing down, and 3) these precise scaling laws can be used to predict the bifurcation point from a limited window of observation. To our knowledge this is the first report of scaling laws of critical slowing down in an experiment. They present a missing link for a broad class of neuroscience modeling and suggest improved estimation of tipping points by incorporating scaling laws of critical slowing down as a
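
    The square-root scaling law associated with a saddle-node bifurcation can be reproduced with the one-dimensional normal form dx/dt = mu - x^2, whose stable fixed point has recovery rate 2*sqrt(mu). This toy sketch (not the authors' patch-clamp analysis) shows the recovery time diverging as mu approaches 0:

      import numpy as np

      def recovery_time(mu, offset=0.01, dt=1e-3):
          # Time for a small perturbation from the stable fixed point
          # x* = sqrt(mu) of dx/dt = mu - x**2 to decay by a factor e,
          # integrated with forward Euler.
          xs = np.sqrt(mu)
          x, t = xs + offset, 0.0
          while x - xs > offset / np.e:
              x += (mu - x * x) * dt
              t += dt
          return t

      # Linear theory predicts tau = 1/(2*sqrt(mu)): quartering the
      # distance mu to the bifurcation doubles the recovery time.
      for mu in (0.04, 0.01, 0.0025):
          print(mu, round(recovery_time(mu), 2), 1.0 / (2.0 * np.sqrt(mu)))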

  11. A New Approach to Charged Particle Slowing Down and Dispersion

    SciTech Connect

    Stevens, David E.

    2016-03-24

    The process by which super-thermal ions slow down against background Coulomb potentials arises in many fields of study. In particular, this is one of the main mechanisms by which the mass and energy from the reaction products of fusion reactions are deposited back into the background. Many of these fields are characterized by length and time scales that are of the same magnitude as the range and duration of the trajectory of these particles before they thermalize into the background. This requires numerical simulation of the slowing-down process by numerically integrating the velocities and energies of these particles. This paper first presents a simple introduction to the required plasma physics, followed by a description of the numerical integration used to integrate a beam of particles. This algorithm is unique in that it combines, in an integrated manner, a second-order integration of the slowing down with the particle beam dispersion. These two processes are typically computed in isolation from each other. A simple test problem of a beam of alpha particles slowing down against an inert background of deuterium and tritium, with varying properties of both the beam and the background, illustrates the utility of the algorithm. This is followed by conclusions and appendices. The appendices define the notation, units, and several useful identities.
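
    To make the integration scheme concrete: the sketch below applies a second-order (midpoint) step to a schematic slowing-down law with an electron-drag term proportional to E and an ion-drag term proportional to 1/sqrt(E). The drag law and all coefficients are placeholders of my own choosing, not the report's formulas, and the dispersion coupling is omitted.

      import math

      def dEdt(E, nu_e=1.0e8, c_i=5.0e4):
          # Schematic Coulomb drag on a fast ion (E in MeV, t in s);
          # the coefficients are placeholders, not plasma parameters
          # taken from the report.
          return -nu_e * E - c_i / math.sqrt(E)

      def slow_down_time(E0, dt=1e-12, E_thermal=1e-3):
          # Midpoint (second-order Runge-Kutta) integration of the
          # slowing-down history, the order of accuracy the paper targets.
          E, t = E0, 0.0
          while E > E_thermal:
              E_half = E + 0.5 * dt * dEdt(E)   # half step
              E += dt * dEdt(E_half)            # full step, midpoint slope
              t += dt
          return t

      print(slow_down_time(3.5))  # 3.5 MeV alpha slowing to ~thermal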

  12. Report on First Activations with the Lead Slowing Down Spectrometer

    SciTech Connect

    Warren, Glen A.; Mace, Emily K.; Pratt, Sharon L.; Stave, Sean; Woodring, Mitchell L.

    2011-03-03

    On Feb. 17 and 18, 2011, six items were irradiated with neutrons using the Lead Slowing Down Spectrometer. After irradiation, dose measurements and gamma-spectrometry measurements were completed on all of the samples. No contamination was found on the samples, and all but one gave no measurable dose. Gamma-spectroscopy measurements qualitatively agreed with expectations based on the materials, with the exception of silver. We observed activation in the room in general, mostly due to 56Mn and 24Na. Most of the activation was short lived, with half-lives on the scale of hours, except for 198Au, which has a half-life of 2.7 d.

  13. Slowing down light using a dendritic cell cluster metasurface waveguide

    NASA Astrophysics Data System (ADS)

    Fang, Z. H.; Chen, H.; Yang, F. S.; Luo, C. R.; Zhao, X. P.

    2016-11-01

    Slowing down or even stopping light is the first task in realising optical information transmission and storage. Theoretical studies have revealed that metamaterials can slow down or even stop light; however, the difficulty of preparing metamaterials that operate in visible light hinders progress in the research of slowing or stopping light. Metasurfaces provide a new opportunity to make progress in such research. In this paper, we propose a dendritic cell cluster metasurface consisting of dendritic structures. The simulation results show that the dendritic structure can realise abnormal reflection and refraction effects. Single- and double-layer dendritic metasurfaces that respond in visible light were prepared by electrochemical deposition. Abnormal Goos-Hänchen (GH) shifts were experimentally obtained. The rainbow trapping effect was observed in a waveguide constructed using the dendritic metasurface sample. The incident white light was separated into seven colours ranging from blue to red light. The measured transmission energy in the waveguide showed that the energy escaping from the waveguide was zero at the resonant frequency of the sample under a certain amount of incident light. The proposed metasurface has a simple preparation process, functions in visible light, and can be readily extended to the infrared band and communication wavelengths.

  14. Critical slowing down in purely elastic 'snap-through' instabilities

    NASA Astrophysics Data System (ADS)

    Gomez, Michael; Moulton, Derek E.; Vella, Dominic

    2017-02-01

    Many elastic structures have two possible equilibrium states: from umbrellas that become inverted in a sudden gust of wind, to nanoelectromechanical switches, origami patterns and the hopper popper, which jumps after being turned inside-out. These systems typically transition from one state to the other via a rapid 'snap-through'. Snap-through allows plants to gradually store elastic energy, before releasing it suddenly to generate rapid motions, as in the Venus flytrap. Similarly, the beak of the hummingbird snaps through to catch insects mid-flight, while technological applications are increasingly exploiting snap-through instabilities. In all of these scenarios, it is the ability to repeatedly generate fast motions that gives snap-through its utility. However, estimates of the speed of snap-through suggest that it should occur more quickly than is usually observed. Here, we study the dynamics of snap-through in detail, showing that, even without dissipation, the dynamics slow down close to the snap-through transition. This is reminiscent of the slowing down observed in critical phenomena, and provides a handheld demonstration of such phenomena, as well as a new tool for tuning dynamic responses in applications of elastic bistability.

  15. Critical slowing down in purely elastic 'snap-through' instabilities

    NASA Astrophysics Data System (ADS)

    Gomez, Michael; Moulton, Derek E.; Vella, Dominic

    2016-10-01

    Many elastic structures have two possible equilibrium states: from umbrellas that become inverted in a sudden gust of wind, to nanoelectromechanical switches, origami patterns and the hopper popper, which jumps after being turned inside-out. These systems typically transition from one state to the other via a rapid 'snap-through'. Snap-through allows plants to gradually store elastic energy, before releasing it suddenly to generate rapid motions, as in the Venus flytrap. Similarly, the beak of the hummingbird snaps through to catch insects mid-flight, while technological applications are increasingly exploiting snap-through instabilities. In all of these scenarios, it is the ability to repeatedly generate fast motions that gives snap-through its utility. However, estimates of the speed of snap-through suggest that it should occur more quickly than is usually observed. Here, we study the dynamics of snap-through in detail, showing that, even without dissipation, the dynamics slow down close to the snap-through transition. This is reminiscent of the slowing down observed in critical phenomena, and provides a handheld demonstration of such phenomena, as well as a new tool for tuning dynamic responses in applications of elastic bistability.

  16. Slowing down light using a dendritic cell cluster metasurface waveguide

    PubMed Central

    Fang, Z. H.; Chen, H.; Yang, F. S.; Luo, C. R.; Zhao, X. P.

    2016-01-01

    Slowing down or even stopping light is the first task in realising optical information transmission and storage. Theoretical studies have revealed that metamaterials can slow down or even stop light; however, the difficulty of preparing metamaterials that operate in visible light hinders progress in the research of slowing or stopping light. Metasurfaces provide a new opportunity to make progress in such research. In this paper, we propose a dendritic cell cluster metasurface consisting of dendritic structures. The simulation results show that the dendritic structure can realise abnormal reflection and refraction effects. Single- and double-layer dendritic metasurfaces that respond in visible light were prepared by electrochemical deposition. Abnormal Goos-Hänchen (GH) shifts were experimentally obtained. The rainbow trapping effect was observed in a waveguide constructed using the dendritic metasurface sample. The incident white light was separated into seven colours ranging from blue to red light. The measured transmission energy in the waveguide showed that the energy escaping from the waveguide was zero at the resonant frequency of the sample under a certain amount of incident light. The proposed metasurface has a simple preparation process, functions in visible light, and can be readily extended to the infrared band and communication wavelengths. PMID:27886279

  17. Overcoming Critical Slowing Down in Quantum Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Evertz, Hans Gerd; Marcu, Mihai

    The classical d+1-dimensional spin systems used for the simulation of quantum spin systems in d dimensions are, quite generally, vertex models. Standard simulation methods for such models strongly suffer from critical slowing down. Recently, we developed the loop algorithm, a new type of cluster algorithm that to a large extent overcomes critical slowing down for vertex models. We present the basic ideas using the example of the F model, a special case of the 6-vertex model. Numerical results clearly demonstrate the effectiveness of the loop algorithm. Then, using the framework for cluster algorithms developed by Kandel and Domany, we explain how to adapt our algorithm to the cases of the 6-vertex model and the 8-vertex model, which are relevant for spin-1/2 systems. The techniques presented here can be applied without modification to 2-dimensional spin-1/2 systems, provided that in the Suzuki-Trotter formula the Hamiltonian is broken up into 4 sums of link terms. Generalizations to more complicated situations (higher spins, different uses of the Suzuki-Trotter formula) are, at least in principle, straightforward.
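
    Critical slowing down in this context is quantified by the integrated autocorrelation time of the Monte Carlo time series, which is what cluster updates such as the loop algorithm reduce. The estimator below is a generic sketch (the windowing rule is a crude assumption), not the loop algorithm itself:

      import numpy as np

      def integrated_autocorr_time(series):
          # tau_int of a Monte Carlo time series; N/(2*tau_int) is the
          # effective number of independent samples. The summation window
          # is cut at the first negative autocorrelation (a crude rule).
          x = np.asarray(series, dtype=float)
          x = x - x.mean()
          n = len(x)
          acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
          window = next((k for k in range(1, n) if acf[k] < 0), n)
          return 0.5 + acf[1:window].sum()

      # Toy check on an AR(1) chain with known tau ~ 9.5; in practice one
      # would feed in the energy history of a local-update run and of a
      # loop-update run of equal length and compare the two values.
      rng = np.random.default_rng(0)
      x = np.zeros(5000)
      for i in range(1, len(x)):
          x[i] = 0.9 * x[i - 1] + rng.normal()
      print(integrated_autocorr_time(x))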

  18. Slowing down light using a dendritic cell cluster metasurface waveguide.

    PubMed

    Fang, Z H; Chen, H; Yang, F S; Luo, C R; Zhao, X P

    2016-11-25

    Slowing down or even stopping light is the first task in realising optical information transmission and storage. Theoretical studies have revealed that metamaterials can slow down or even stop light; however, the difficulty of preparing metamaterials that operate in visible light hinders progress in the research of slowing or stopping light. Metasurfaces provide a new opportunity to make progress in such research. In this paper, we propose a dendritic cell cluster metasurface consisting of dendritic structures. The simulation results show that the dendritic structure can realise abnormal reflection and refraction effects. Single- and double-layer dendritic metasurfaces that respond in visible light were prepared by electrochemical deposition. Abnormal Goos-Hänchen (GH) shifts were experimentally obtained. The rainbow trapping effect was observed in a waveguide constructed using the dendritic metasurface sample. The incident white light was separated into seven colours ranging from blue to red light. The measured transmission energy in the waveguide showed that the energy escaping from the waveguide was zero at the resonant frequency of the sample under a certain amount of incident light. The proposed metasurface has a simple preparation process, functions in visible light, and can be readily extended to the infrared band and communication wavelengths.

  19. The promise of slow down ageing may come from curcumin.

    PubMed

    Sikora, E; Bielak-Zmijewska, A; Mosieniak, G; Piwocka, K

    2010-01-01

    No genes exist that have been selected to promote aging. The evolutionary theory of aging tells us that there is a trade-off between body maintenance and investment in reproduction. It is commonly accepted that the ageing process is driven by the lifelong accumulation of molecular damage, mainly due to reactive oxygen species (ROS) produced by mitochondria as well as random errors in DNA replication. Although ageing itself is not a disease, numerous diseases are age-related, such as cancer, Alzheimer's disease, atherosclerosis, metabolic disorders and others, likely caused by low-grade inflammation driven by oxygen stress and manifested by increased levels of pro-inflammatory cytokines such as IL-1, IL-6 and TNF-alpha, encoded by genes activated by the transcription factor NF-kappaB. It is believed that ageing is plastic and can be slowed down by caloric restriction as well as by some nutraceuticals. As the low-grade inflammatory process is believed to contribute substantially to ageing, slowing ageing and postponing the onset of age-related diseases may be achieved by blocking NF-kappaB-dependent inflammation. In this review we consider the possibility that the natural spice curcumin, a powerful antioxidant, anti-inflammatory agent and efficient inhibitor of NF-kappaB and of the mTOR signaling pathway, which overlaps that of NF-kappaB, can slow down ageing.

  20. Cosmic slowing down of acceleration for several dark energy parametrizations

    SciTech Connect

    Magaña, Juan; Cárdenas, Víctor H.; Motta, Verónica

    2014-10-01

    We further investigate the slowing down of acceleration of the universe scenario for five parametrizations of the equation of state of dark energy, using four sets of Type Ia supernovae data. In a maximal probability analysis we also use the baryon acoustic oscillation and cosmic microwave background observations. We found that the low-redshift transition of the deceleration parameter appears, independently of the parametrization, when using supernovae data alone, except for the Union 2.1 sample. This feature disappears once we combine the Type Ia supernovae data with high-redshift data. We conclude that the rapid variation of the deceleration parameter is independent of the parametrization. We also found more evidence for a tension among the supernovae samples, as well as between the low- and high-redshift data.

  1. Soil fauna slow down decomposition of leaf litter

    NASA Astrophysics Data System (ADS)

    Frouz, J.

    2009-04-01

    In a one-year laboratory incubation experiment, the decomposition of alder, oak and willow litter was compared with the decomposition of excrements of St. Mark's fly larvae (Bibio marci) produced from the same litter. Decomposition (the amount of CO2 produced) was significantly higher in leaf litter than in excrements. Invertebrates affect litter in many ways: litter is fragmented mechanically during feeding, exposed to an alkaline environment and enzymes in the gut, and coated with clay minerals during gut passage. In order to explore the potential mechanisms that may be responsible for the reduction of the decomposition process, three litter treatments that mimic certain aspects of invertebrate influence were prepared: fragmented litter, litter treated with an alkaline solution, and litter mixed with clay (kaolinite). Among these treatments, alkalization had the strongest effect on slowing down decomposition.

  2. Methionine restriction slows down senescence in human diploid fibroblasts.

    PubMed

    Kozieł, Rafał; Ruckenstuhl, Christoph; Albertini, Eva; Neuhaus, Michael; Netzberger, Christine; Bust, Maria; Madeo, Frank; Wiesner, Rudolf J; Jansen-Dürr, Pidder

    2014-12-01

    Methionine restriction (MetR) extends lifespan in animal models including rodents. Using human diploid fibroblasts (HDF), we report here that MetR significantly extends their replicative lifespan, thereby postponing cellular senescence. MetR significantly decreased activity of mitochondrial complex IV and diminished the accumulation of reactive oxygen species. Lifespan extension was accompanied by a significant decrease in the levels of subunits of mitochondrial complex IV, but also complex I, which was due to a decreased translation rate of several mtDNA-encoded subunits. Together, these findings indicate that MetR slows down aging in human cells by modulating mitochondrial protein synthesis and respiratory chain assembly.

  3. Less is more: improving proteostasis by translation slow down.

    PubMed

    Sherman, Michael Y; Qian, Shu-Bing

    2013-12-01

    Protein homeostasis, or proteostasis, refers to a proper balance between synthesis, maturation, and degradation of cellular proteins. A growing body of evidence suggests that the ribosome serves as a hub for co-translational folding, chaperone interaction, degradation, and stress response. Accordingly, in addition to the chaperone network and proteasome system, the ribosome has emerged as a major factor in protein homeostasis. Recent work revealed that high rates of elongation of translation negatively affect both the fidelity of translation and the co-translational folding of nascent polypeptides. Accordingly, by slowing down translation one can significantly improve protein folding. In this review, we discuss how to target translational processes to improve proteostasis and implications in treating protein misfolding diseases.

  4. Slowing down of an ion beam in a background plasma

    NASA Astrophysics Data System (ADS)

    Newsham, D.; Ross, T. J.; Rynn, N.

    1996-07-01

    The slowing down of a barium ion beam in two different plasma backgrounds was measured using laser-induced fluorescence. The measurements were performed in a Q machine (Ti = Te = 0.2 eV, background densities between 1.2×10^10 and 6×10^10 cm^-3), where a barium ion beam with energy 0-40 eV was injected, parallel to the confining magnetic field, into both a cesium and a lithium plasma. In order to treat the ion beam as a class of test particles, the ion beam density was maintained at approximately two orders of magnitude below the density of the background plasma. Measured changes in the velocity profile of the ion beam agree well with the predictions of Fokker-Planck theory, both for beam and background ions of nearly equal mass and for background ions with approximately 1/20th the mass of the beam ion.

  5. Report on Second Activations with the Lead Slowing Down Spectrometer

    SciTech Connect

    Stave, Sean C.; Mace, Emily K.; Pratt, Sharon L.; Warren, Glen A.

    2012-04-27

    Summary: On August 18 and 19, 2011, five items were irradiated with neutrons using the Lead Slowing Down Spectrometer (LSDS). After irradiation, dose measurements and gamma-spectrometry measurements were completed on all of the samples. No contamination was found on the samples, and all but one gave no measurable dose. Gamma-spectroscopy measurements qualitatively agreed with expectations based on the materials. As during the first activation run, we observed activation in the room in general, mostly due to 56Mn and 24Na. Most of the activation of the samples was short lived, with half-lives on the scale of hours to days, except for 60Co, which has a half-life of 5.3 y.

  6. Lead Slowing-Down Spectrometer Research at LANSCE

    NASA Astrophysics Data System (ADS)

    Haight, R. C.; Bredeweg, T. A.; Devlin, M.; Gavron, A.; Jandel, M.; O'Donnell, J. M.; Wender, S. A.; Bélier, G.; Granier, T.; Laurent, B.; Taieb, J.; Danon, Y.; Thompson, J. T.

    2013-03-01

    The lead slowing-down spectrometer (LSDS) at Los Alamos is a 20 ton cube of lead with numerous channels, one for the proton beam from the LANSCE accelerator and others for samples and detectors. A pulsed spallation neutron source at the center of the cube is produced by the 800 MeV proton beam incident on an air-cooled tungsten target. Neutrons from this source are quickly downscattered by various reactions until their energies are less than the first excited state of 207Pb (0.57 MeV). After that, the neutrons slow down by elastic scattering, losing on average 1% of their energy per collision. The mean energy of the neutron distribution then changes with time as ~1/(t + t0)^2, where t0 is a constant. The low neutron absorption cross section of lead and the multiple scattering of the neutrons lead to a very large neutron flux, approximately 1000 times that available in beams at the intense neutron source at the Lujan Center at LANSCE. Thus nuclear cross sections can be measured with very small samples, or conversely, very small cross sections can be measured with somewhat larger samples. Present research with the LSDS at LANSCE includes measuring fission cross sections on short-lived isotopes such as 237U, developing techniques to measure (n,p) and (n,α) cross sections, testing new types of detectors for use in the extreme radiation environment, and, in an applied context, assessing the possibility of measuring the isotopic content of actinide samples with the eventual goal of characterizing fresh and used reactor fuel rods.
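
    The time-energy relation quoted above is what makes the LSDS a spectrometer: measuring when a reaction occurs gives the mean neutron energy at that moment. A small sketch, with K and t0 set to placeholder values of typical literature magnitude (each instrument is calibrated individually), together with the ~1% energy-loss figure derived from elastic scattering on A = 207:

      A = 207  # 207Pb; mean fractional energy loss per elastic collision
      print(2 * A / (A + 1) ** 2)  # ~0.0096, the "1% per collision" above

      def mean_energy_keV(t_us, K=165.0, t0=0.3):
          # Slowing-down time-energy correlation <E> = K/(t + t0)^2 with
          # t in microseconds; K and t0 here are assumed, not measured.
          return K / (t_us + t0) ** 2

      for t in (1.0, 10.0, 100.0):
          print(t, "us ->", mean_energy_keV(t), "keV")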

  7. Transient subduction following upper plate acceleration/slow down

    NASA Astrophysics Data System (ADS)

    Guillaume, Benjamin; Hertgen, Solenn; Cerpa, Nestor; Martinod, Joseph

    2017-04-01

    Plate reorganization associated with mantle convection leads to changes in plate absolute velocity over geological time scales. At the global scale, these accelerations/slowdowns can reach values up to 2.5×10^-23 m s^-2, i.e. changes of up to 5 cm/yr over a 2 m.y. period (after Zahirovic et al., 2015). In this study, we aim at understanding how such changes in the kinematics of the upper plate can influence subduction dynamics and slab geometry. Are changes in the overriding plate tectonic regime and in the slab geometry synchronous or delayed with respect to modifications of plate kinematics? For this, we use an approach combining three-dimensional analogue models and two-dimensional numerical models of subduction (ADELI code). In analogue models, we impose instantaneous changes of the upper plate velocity during subduction and observe how the subduction system returns to equilibrium with the new boundary conditions. The adjustment times appear independent of the imposed upper plate velocity and of the changes of upper plate velocity. Scaling of our models shows that this transient stage lasts ~11±4 m.y. for the shallow (~125 km deep) dip of the slab, ~16±2 m.y. for the deeper (~330 km deep) part of the slab, and ~4±2 m.y. for bulk upper plate deformation. Using 2-D numerical models, we explore the effect of different internal parameters (thickness and viscosity of the slab, viscosity of the mantle) as well as external parameters (instantaneous vs. progressive acceleration/slow down of the upper plate) on the duration of the transient stage. We also compare our modeling results with present-day subduction zones and their evolution through the last 20 m.y. Data analysis suggests an adjustment time of ~15 m.y. for shallow slab dip and ~20 m.y. for deep slab dip in Nature. Since only 1% and 9% of the 260 studied subduction transects exhibit a constant upper plate velocity over the last 20 m.y. and 15 m.y., respectively, most of subduction zones must be in a

  8. Do attractive interactions slow down diffusion in polymer nanocomposites?

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Chun; Gam, Sangah; Meth, Jeffrey S.; Clarke, Nigel; Winey, Karen I.; Composto, Russell J.

    2013-03-01

    Diffusion of deuterated poly(methyl methacrylate) (dPMMA) is slowed down in a PMMA matrix filled with spherical silica nanoparticles (NPs) ranging from 13 to 50 nm in diameter. The NPs are well dispersed in the matrix up to 40 vol%. The normalized diffusion coefficient (D/D0) decreases as the volume fraction increases, and this decrease is stronger as NP size decreases. When plotted against the confinement parameter ID/2Rg, where ID is the interparticle distance and 2Rg is the probe size, D/D0 collapses onto a master curve. In the strongly confined region, where ID < 2Rg, D/D0 decreases dramatically, by up to 80%, whereas in the weakly confined region, where ID > 2Rg, D/D0 decreases moderately. Even when ID is eight times larger than 2Rg, a 15% reduction in diffusion is observed. Comparing the master curve of this study, an attractive system, with that of a weakly interacting system studied previously indicates that attractive interactions do not significantly alter center-of-mass polymer diffusion in polymer nanocomposites.
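
    The confinement parameter can be estimated from the particle size and loading. As a sketch, a simple-cubic-lattice estimate (my assumption; the paper's ID is a statistical measure for random dispersions) gives ID = d[(pi/(6*phi))^(1/3) - 1]:

      import math

      def interparticle_distance(d_nm, phi):
          # Surface-to-surface spacing of spheres of diameter d at volume
          # fraction phi, assuming a simple cubic arrangement.
          a = d_nm * (math.pi / (6.0 * phi)) ** (1.0 / 3.0)  # center-to-center
          return a - d_nm

      # Confinement parameter ID/2Rg for 13 nm NPs and a hypothetical
      # probe with Rg = 10 nm; ID/2Rg < 1 is the strongly confined regime.
      Rg = 10.0
      for phi in (0.05, 0.20, 0.40):
          ID = interparticle_distance(13.0, phi)
          print(phi, round(ID, 1), round(ID / (2 * Rg), 2))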

  9. Slowing Down Downhill Folding: A Three-Probe Study

    SciTech Connect

    Kim, Seung Joong; Matsumura, Yoshitaka; Dumont, Charles; Kihara, Hiroshi; Gruebele, Martin

    2009-09-11

    The mutant Tyr22Trp/Glu33Tyr/Gly46Ala/Gly48Ala of λ repressor fragment λ6-85 was previously assigned as an incipient downhill folder. We slow down its folding in a cryogenic water-ethylene-glycol solvent (-18 to -28 °C). The refolding kinetics are probed by small-angle x-ray scattering, circular dichroism, and fluorescence to measure the radius of gyration, the average secondary structure content, and the native packing around the single tryptophan residue. The main resolved kinetic phase of the mutant is probe independent and faster than the main phase observed for the pseudo-wild-type. Excess helical structure formed early on by the mutant may reduce the formation of turns and prevent the formation of compact misfolded states, speeding up the overall folding process. Extrapolation of our main cryogenic folding phase and previous T-jump measurements to 37 °C yields nearly the same refolding rate as extrapolated by Oas and co-workers from NMR line-shape data. Taken together, all the data consistently indicate a folding speed limit of ≈4.5 μs for this fast folder.

  10. Ligands Slow Down Pure-Dephasing in Semiconductor Quantum Dots.

    PubMed

    Liu, Jin; Kilina, Svetlana V; Tretiak, Sergei; Prezhdo, Oleg V

    2015-09-22

    It is well-known experimentally and theoretically that surface ligands provide additional pathways for energy relaxation in colloidal semiconductor quantum dots (QDs). They increase the rate of inelastic charge-phonon scattering and provide trap sites for the charges. We show that, surprisingly, ligands have the opposite effect on elastic electron-phonon scattering. Our simulations demonstrate that elastic scattering slows down in CdSe QDs passivated with ligands compared to that in bare QDs. As a result, the pure-dephasing time is increased, and the homogeneous luminescence line width is decreased in the presence of ligands. The lifetime of quantum superpositions of single and multiple excitons increases as well, providing favorable conditions for multiple excitons generation (MEG). Ligands reduce the pure-dephasing rates by decreasing phonon-induced fluctuations of the electronic energy levels. Surface atoms are most mobile in QDs, and therefore, they contribute greatly to the electronic energy fluctuations. The mobility is reduced by interaction with ligands. A simple analytical model suggests that the differences between the bare and passivated QDs persist for up to 5 nm diameters. Both low-frequency acoustic and high-frequency optical phonons participate in the dephasing processes in bare QDs, while low-frequency acoustic modes dominate in passivated QDs. The theoretical predictions regarding the pure-dephasing time, luminescence line width, and MEG can be verified experimentally by studying QDs with different surface passivation.

  11. Hydrogen Bonding Slows Down Surface Diffusion of Molecular Glasses.

    PubMed

    Chen, Yinshan; Zhang, Wei; Yu, Lian

    2016-08-18

    Surface-grating decay has been measured for three organic glasses with extensive hydrogen bonding: sorbitol, maltitol, and maltose. For 1000 nm wavelength gratings, the decay occurs by viscous flow in the entire range of temperature studied, covering the viscosity range 10^5-10^11 Pa s, whereas under the same conditions, the decay mechanism transitions from viscous flow to surface diffusion for organic glasses of similar molecular sizes but with no or limited hydrogen bonding. These results indicate that extensive hydrogen bonding slows down surface diffusion in organic glasses. This effect arises because molecules can preserve hydrogen bonding even near the surface so that the loss of nearest neighbors does not translate into a proportional decrease of the kinetic barrier for diffusion. This explanation is consistent with a strong correlation between liquid fragility and the surface enhancement of diffusion, both reporting resistance of a liquid to dynamic excitation. Slow surface diffusion is expected to hinder any processes that rely on surface transport, for example, surface crystal growth and formation of stable glasses by vapor deposition.

  12. Ketogenic diet slows down mitochondrial myopathy progression in mice.

    PubMed

    Ahola-Erkkilä, Sofia; Carroll, Christopher J; Peltola-Mjösund, Katja; Tulkki, Valtteri; Mattila, Ismo; Seppänen-Laakso, Tuulikki; Oresic, Matej; Tyynismaa, Henna; Suomalainen, Anu

    2010-05-15

    Mitochondrial dysfunction is a major cause of neurodegenerative and neuromuscular diseases of adult age and of multisystem disorders of childhood. However, no effective treatment exists for these progressive disorders. Cell culture studies suggested that ketogenic diet (KD), with low glucose and high fat content, could select against cells or mitochondria with mutant mitochondrial DNA (mtDNA), but proper patient trials are still lacking. We studied here the transgenic Deletor mouse, a disease model for progressive late-onset mitochondrial myopathy, accumulating mtDNA deletions during aging and manifesting subtle progressive respiratory chain (RC) deficiency. We found that these mice have widespread lipidomic and metabolite changes, including abnormal plasma phospholipid and free amino acid levels and ketone body production. We treated these mice with pre-symptomatic long-term and post-symptomatic shorter term KD. The effects of the diet for disease progression were followed by morphological, metabolomic and lipidomic tools. We show here that the diet decreased the amount of cytochrome c oxidase negative muscle fibers, a key feature in mitochondrial RC deficiencies, and prevented completely the formation of the mitochondrial ultrastructural abnormalities in the muscle. Furthermore, most of the metabolic and lipidomic changes were cured by the diet to wild-type levels. The diet did not, however, significantly affect the mtDNA quality or quantity, but rather induced mitochondrial biogenesis and restored liver lipid levels. Our results show that mitochondrial myopathy induces widespread metabolic changes, and that KD can slow down progression of the disease in mice. These results suggest that KD may be useful for mitochondrial late-onset myopathies.

  13. Lead Slowing Down Spectrometer FY2013 Annual Report

    SciTech Connect

    Warren, Glen A.; Kulisek, Jonathan A.; Gavron, Victor A.; Danon, Yaron; Weltz, Adam; Harris, Jason; Stewart, T.

    2013-10-29

    Executive Summary: The Lead Slowing Down Spectrometry (LSDS) project, funded by the Materials Protection And Control Technology campaign, has been evaluating the feasibility of using LSDS techniques to assay fissile isotopes in used nuclear fuel assemblies. The approach has the potential to provide considerable improvement in the assay of fissile isotopic masses in fuel assemblies compared to other non-destructive techniques, in a direct and independent manner. This report is a high-level summary of the progress completed in FY2013. This progress included:
    • Fabrication of a 4He scintillator detector to detect fast neutrons in the LSDS operating environment. Testing of the detector will be conducted in FY2014.
    • Design of a large-area 232Th fission chamber.
    • Analysis using the Los Alamos National Laboratory perturbation model, which estimated the required number of neutrons for an LSDS measurement to be 10^16 source neutrons.
    • Application of the algorithms developed at Pacific Northwest National Laboratory to data from LSDS measurements of various fissile samples conducted in 2012. The results concluded that 235U could be measured to 2.7% and 239Pu to 6.3%. Significant effort is yet needed to demonstrate the applicability of these algorithms for used-fuel assemblies, but the results reported here are encouraging in demonstrating that we are making progress toward that goal.
    • Development and cost-analysis of a research plan for the next critical demonstration measurements. The plan suggests measurements on fresh fuel sub-assemblies as a means to experimentally test self-attenuation, and the use of fresh mixed-oxide fuel as a means to test simultaneous measurement of 235U and 239Pu.

  14. Critical slowing down and hyperuniformity on approach to jamming.

    PubMed

    Atkinson, Steven; Zhang, Ge; Hopkins, Adam B; Torquato, Salvatore

    2016-07-01

    Hyperuniformity characterizes a state of matter that is poised at a critical point at which density or volume-fraction fluctuations are anomalously suppressed at infinite wavelengths. Recently, much attention has been given to the link between strict jamming (mechanical rigidity) and (effective or exact) hyperuniformity in frictionless hard-particle packings. However, in doing so, one must necessarily study very large packings in order to access the long-ranged behavior and to ensure that the packings are truly jammed. We modify the rigorous linear programming method of Donev et al. [J. Comput. Phys. 197, 139 (2004), 10.1016/j.jcp.2003.11.022] in order to test for jamming in putatively collectively and strictly jammed packings of hard disks in two dimensions. We show that this rigorous jamming test is superior to standard ways to ascertain jamming, including the so-called "pressure-leak" test. We find that various standard packing protocols struggle to reliably create packings that are jammed for even modest system sizes of N ≈ 10^3 bidisperse disks in two dimensions; importantly, these packings have a high reduced pressure that persists over extended amounts of time, meaning that they appear to be jammed by conventional tests, though rigorous jamming tests reveal that they are not. We present evidence that suggests that deviations from hyperuniformity in putative maximally random jammed (MRJ) packings can in part be explained by a shortcoming of the numerical protocols to generate exactly jammed configurations as a result of a type of "critical slowing down" as the packing's collective rearrangements in configuration space become locally confined by high-dimensional "bottlenecks" from which escape is a rare event. Additionally, various protocols are able to produce packings exhibiting hyperuniformity to different extents, but this is because certain protocols are better able to approach exactly jammed configurations. Nonetheless, while one should

  15. Critical slowing down and hyperuniformity on approach to jamming

    NASA Astrophysics Data System (ADS)

    Atkinson, Steven; Zhang, Ge; Hopkins, Adam B.; Torquato, Salvatore

    2016-07-01

    Hyperuniformity characterizes a state of matter that is poised at a critical point at which density or volume-fraction fluctuations are anomalously suppressed at infinite wavelengths. Recently, much attention has been given to the link between strict jamming (mechanical rigidity) and (effective or exact) hyperuniformity in frictionless hard-particle packings. However, in doing so, one must necessarily study very large packings in order to access the long-ranged behavior and to ensure that the packings are truly jammed. We modify the rigorous linear programming method of Donev et al. [J. Comput. Phys. 197, 139 (2004), 10.1016/j.jcp.2003.11.022] in order to test for jamming in putatively collectively and strictly jammed packings of hard disks in two dimensions. We show that this rigorous jamming test is superior to standard ways to ascertain jamming, including the so-called "pressure-leak" test. We find that various standard packing protocols struggle to reliably create packings that are jammed for even modest system sizes of N ≈ 10^3 bidisperse disks in two dimensions; importantly, these packings have a high reduced pressure that persists over extended amounts of time, meaning that they appear to be jammed by conventional tests, though rigorous jamming tests reveal that they are not. We present evidence that suggests that deviations from hyperuniformity in putative maximally random jammed (MRJ) packings can in part be explained by a shortcoming of the numerical protocols to generate exactly jammed configurations as a result of a type of "critical slowing down" as the packing's collective rearrangements in configuration space become locally confined by high-dimensional "bottlenecks" from which escape is a rare event. Additionally, various protocols are able to produce packings exhibiting hyperuniformity to different extents, but this is because certain protocols are better able to approach exactly jammed configurations. Nonetheless, while one should not generally

  16. How Accurately Can We Calculate Neutrons Slowing Down In Water?

    SciTech Connect

    Cullen, D E; Blomquist, R; Greene, M; Lent, E; MacFarlane, R; McKinley, S; Plechaty, E; Sublet, J C

    2006-03-30

    We have compared the results produced by a variety of currently available Monte Carlo neutron transport codes for the relatively simple problem of a fast source of neutrons slowing down and thermalizing in water. Initial comparisons showed rather large differences in the calculated flux, up to 80%. By working together we iterated to improve the results by: (1) ensuring that all codes were using the same data, (2) improving the models used by the codes, and (3) correcting errors in the codes; no code is perfect. Even after a number of iterations we still found differences, demonstrating that our Monte Carlo and supporting codes are far from perfect; in particular, we found that the often overlooked nuclear data processing codes can be the weakest link in our systems of codes. The results presented here represent today's state of the art, in the sense that all of the Monte Carlo codes are modern, widely available and used codes. They all use the most up-to-date nuclear data, and the results are very recent, weeks or at most a few months old; these are the results that current users of these codes should expect to obtain from them. As such, the accuracy and limitations of the codes presented here should serve as guidelines to code users in interpreting their results for similar problems. We avoid crystal-ball gazing, in the sense that we limit the scope of this report to what is available to code users today, and we avoid predicting future improvements that may or may not actually come to pass. One exception is that we present results for an improved thermal scattering model currently being tested using advanced versions of NJOY and MCNP that are not yet available to users, but are planned for release in the not too distant future. The other exception is to show comparisons between experimentally measured water cross sections and preliminary ENDF/B-VII thermal scattering law, S(α,β) data; although these data are strictly
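
    The basic physics being benchmarked is easy to sanity-check. A toy Monte Carlo, assuming isotropic center-of-mass elastic scattering on hydrogen only (oxygen, absorption and thermal motion ignored), reproduces the textbook expectation of roughly ln(E0/Ecut) collisions to slow down, since the mean lethargy gain per collision on hydrogen is 1:

      import random

      def collisions_to_slow_down(E0=2.0e6, E_cut=1.0, trials=10000):
          # On hydrogen (A = 1) the post-collision energy is uniform on
          # [0, E], so ~ln(E0/E_cut) collisions are expected on average.
          total = 0
          for _ in range(trials):
              E, n = E0, 0
              while E > E_cut:
                  E *= random.random()
                  n += 1
              total += n
          return total / trials

      # 2 MeV down to 1 eV: ln(2e6) ~ 14.5, so ~15 collisions on average.
      print(collisions_to_slow_down())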

  17. The Pedagogy of Slowing Down: Teaching Talmud in a Summer Kollel

    ERIC Educational Resources Information Center

    Kanarek, Jane

    2010-01-01

    This article explores a set of practices in the teaching of Talmud called "the pedagogy of slowing down." Through the author's analysis of her own teaching in an intensive Talmud class, "the pedagogy of slowing down" emerges as a pedagogical and cultural model in which the students learn to read more closely and to investigate the multiplicity of…

  18. Anomalous versus Slowed-Down Brownian Diffusion in the Ligand-Binding Equilibrium

    PubMed Central

    Soula, Hédi; Caré, Bertrand; Beslon, Guillaume; Berry, Hugues

    2013-01-01

    Measurements of protein motion in living cells and membranes consistently report transient anomalous diffusion (subdiffusion) that converges back to a Brownian motion with reduced diffusion coefficient at long times after the anomalous diffusion regime. Therefore, slowed-down Brownian motion could be considered the macroscopic limit of transient anomalous diffusion. On the other hand, membranes are also heterogeneous media in which Brownian motion may be locally slowed down due to variations in lipid composition. Here, we investigate whether both situations lead to a similar behavior for the reversible ligand-binding reaction in two dimensions. We compare the (long-time) equilibrium properties obtained with transient anomalous diffusion due to obstacle hindrance or power-law-distributed residence times (continuous-time random walks) to those obtained with space-dependent slowed-down Brownian motion. Using theoretical arguments and Monte Carlo simulations, we show that these three scenarios have distinctive effects on the apparent affinity of the reaction. Whereas continuous-time random walks decrease the apparent affinity of the reaction, locally slowed-down Brownian motion and local hindrance by obstacles both improve it. However, only in the case of slowed-down Brownian motion is the affinity maximal when the slowdown is restricted to a subregion of the available space. Hence, even at long times (equilibrium), these processes are different and exhibit irreconcilable behaviors when the area fraction of reduced mobility changes. PMID:24209851
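
    Of the three scenarios compared, the obstacle-hindrance one is simple to reproduce: a random walk on a lattice with immobile obstacles shows transient subdiffusion that crosses over to Brownian motion with a reduced diffusion coefficient. The sketch below is a minimal Monte Carlo under assumed parameters, without the binding reaction studied in the paper:

      import numpy as np

      def msd_with_obstacles(L=200, frac=0.3, steps=2000, walkers=500):
          # Walkers on a 2D periodic lattice; moves into obstacle sites
          # are rejected. Returns mean squared displacement vs time.
          rng = np.random.default_rng(1)
          blocked = rng.random((L, L)) < frac
          blocked[0, 0] = False                 # start site kept free
          pos = np.zeros((walkers, 2), dtype=int)
          unwrapped = np.zeros((walkers, 2))
          moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
          msd = np.empty(steps)
          for t in range(steps):
              step = moves[rng.integers(0, 4, size=walkers)]
              trial = (pos + step) % L
              ok = ~blocked[trial[:, 0], trial[:, 1]]
              pos[ok] = trial[ok]
              unwrapped[ok] += step[ok]
              msd[t] = (unwrapped ** 2).sum(axis=1).mean()
          return msd

      msd = msd_with_obstacles()
      # Apparent diffusion coefficient MSD/(4t): it decays with time in
      # the transient subdiffusive regime, then levels off at the reduced
      # long-time (slowed-down Brownian) value.
      print(msd[9] / (4 * 10), msd[-1] / (4 * 2000))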

  19. Anomalous versus slowed-down Brownian diffusion in the ligand-binding equilibrium.

    PubMed

    Soula, Hédi; Caré, Bertrand; Beslon, Guillaume; Berry, Hugues

    2013-11-05

    Measurements of protein motion in living cells and membranes consistently report transient anomalous diffusion (subdiffusion) that converges back to a Brownian motion with reduced diffusion coefficient at long times after the anomalous diffusion regime. Therefore, slowed-down Brownian motion could be considered the macroscopic limit of transient anomalous diffusion. On the other hand, membranes are also heterogeneous media in which Brownian motion may be locally slowed down due to variations in lipid composition. Here, we investigate whether both situations lead to a similar behavior for the reversible ligand-binding reaction in two dimensions. We compare the (long-time) equilibrium properties obtained with transient anomalous diffusion due to obstacle hindrance or power-law-distributed residence times (continuous-time random walks) to those obtained with space-dependent slowed-down Brownian motion. Using theoretical arguments and Monte Carlo simulations, we show that these three scenarios have distinctive effects on the apparent affinity of the reaction. Whereas continuous-time random walks decrease the apparent affinity of the reaction, locally slowed-down Brownian motion and local hindrance by obstacles both improve it. However, only in the case of slowed-down Brownian motion is the affinity maximal when the slowdown is restricted to a subregion of the available space. Hence, even at long times (equilibrium), these processes are different and exhibit irreconcilable behaviors when the area fraction of reduced mobility changes.

  20. Vegetation recovery in tidal marshes reveals critical slowing down under increased inundation

    NASA Astrophysics Data System (ADS)

    van Belzen, Jim; van de Koppel, Johan; Kirwan, Matthew L.; van der Wal, Daphne; Herman, Peter M. J.; Dakos, Vasilis; Kéfi, Sonia; Scheffer, Marten; Guntenspergen, Glenn R.; Bouma, Tjeerd J.

    2017-06-01

    A declining rate of recovery following disturbance has been proposed as an important early warning for impending tipping points in complex systems. Despite extensive theoretical and laboratory studies, this `critical slowing down' remains largely untested in the complex settings of real-world ecosystems. Here, we provide both observational and experimental support of critical slowing down along natural stress gradients in tidal marsh ecosystems. Time series of aerial images of European marsh development reveal a consistent lengthening of recovery time as inundation stress increases. We corroborate this finding with transplantation experiments in European and North American tidal marshes. In particular, our results emphasize the power of direct observational or experimental measures of recovery over indirect statistical signatures, such as spatial variance or autocorrelation. Our results indicate that the phenomenon of critical slowing down can provide a powerful tool to probe the resilience of natural ecosystems.

  1. Vegetation recovery in tidal marshes reveals critical slowing down under increased inundation.

    PubMed

    van Belzen, Jim; van de Koppel, Johan; Kirwan, Matthew L; van der Wal, Daphne; Herman, Peter M J; Dakos, Vasilis; Kéfi, Sonia; Scheffer, Marten; Guntenspergen, Glenn R; Bouma, Tjeerd J

    2017-06-09

    A declining rate of recovery following disturbance has been proposed as an important early warning for impending tipping points in complex systems. Despite extensive theoretical and laboratory studies, this 'critical slowing down' remains largely untested in the complex settings of real-world ecosystems. Here, we provide both observational and experimental support of critical slowing down along natural stress gradients in tidal marsh ecosystems. Time series of aerial images of European marsh development reveal a consistent lengthening of recovery time as inundation stress increases. We corroborate this finding with transplantation experiments in European and North American tidal marshes. In particular, our results emphasize the power of direct observational or experimental measures of recovery over indirect statistical signatures, such as spatial variance or autocorrelation. Our results indicate that the phenomenon of critical slowing down can provide a powerful tool to probe the resilience of natural ecosystems.
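
    The recovery-rate indicator used in this record can be summarized in a few lines. The sketch below is an editorial illustration, not the authors' code; all names and numbers are invented. It fits an exponential return to equilibrium after a disturbance and reads off the recovery rate, which declines toward zero as a tipping point is approached.

```python
# Editorial sketch (not the authors' code): fit an exponential return to
# equilibrium after a disturbance and read off the recovery rate lambda.
# Critical slowing down appears as lambda falling toward zero with stress.
import numpy as np
from scipy.optimize import curve_fit

def exponential_return(t, x0, lam):
    """Deviation from equilibrium decaying as x0 * exp(-lam * t)."""
    return x0 * np.exp(-lam * t)

def recovery_rate(t, deviation):
    """Least-squares fit of the decay; returns the recovery rate lambda."""
    (x0, lam), _ = curve_fit(exponential_return, t, deviation,
                             p0=(deviation[0], 1.0))
    return lam

# Synthetic demonstration: recovery slows as the true lambda drops.
t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
for lam_true in (1.0, 0.3, 0.05):        # say, increasing inundation stress
    x = exponential_return(t, 1.0, lam_true) + 0.01 * rng.standard_normal(t.size)
    print(f"true {lam_true:.2f} -> fitted {recovery_rate(t, x):.2f}")
```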

  2. Slow-down collisions and nonsequential double ionization in classical simulations.

    PubMed

    Panfili, R; Haan, S L; Eberly, J H

    2002-09-09

    We use classical simulations to analyze the dynamics of nonsequential double-electron short-pulse photoionization. We utilize a microcanonical ensemble of 10⁵ two-electron "trajectories," a number large enough to provide large subensembles and even sub-subensembles associated with double ionization. We focus on key events in the final doubly ionized subensemble and back-analyze the subensemble's history, revealing a classical slow-down scenario for nonsequential double ionization. We analyze the dynamics of these slow-down collisions and find that a good phase match between the motions of the electrons can lead to very effective energy transfer, followed by escape over a suppressed barrier.

  3. ACTIV: Sandwich Detector Activity from In-Pile Slowing-Down Spectra Experiment

    SciTech Connect

    2013-08-01

    ACTIV calculates the activities of a sandwich detector, to be used for in-pile measurements in slowing-down spectra below a few keV. The effect of scattering with energy degradation in the filter and in the detectors has been included to a first approximation.

  4. "Slow Down, You Move Too Fast:" Literature Circles as Reflective Practice

    ERIC Educational Resources Information Center

    Sanacore, Joseph

    2013-01-01

    Becoming an effective literacy learner requires a bit of slowing down and appreciating the reflective nature of reading and writing. Literature circles support this instructional direction because they provide opportunities for immersing students in discussions that encourage their personal responses. When students feel their personal responses…

  5. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Railroad grade crossings; slowing down required... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit...

  6. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Railroad grade crossings; slowing down required... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit...

  7. "Slow Down, You Move Too Fast:" Literature Circles as Reflective Practice

    ERIC Educational Resources Information Center

    Sanacore, Joseph

    2013-01-01

    Becoming an effective literacy learner requires a bit of slowing down and appreciating the reflective nature of reading and writing. Literature circles support this instructional direction because they provide opportunities for immersing students in discussions that encourage their personal responses. When students feel their personal responses…

  8. Low energy slowing down of nanosize copper clusters on gold (1 1 1) surfaces

    NASA Astrophysics Data System (ADS)

    Lei, H.; Hou, Q.; Hou, M.

    2000-04-01

    The slowing down of copper clusters formed by 440 atoms on a gold (1 1 1) surface is studied in detail by means of molecular dynamics. The atomic classical molecular dynamics is based on the second moment approximation of the tight binding model and, in addition, accounts for the electron-phonon coupling in the frame of the Sommerfeld theory of metals. The slowing down energy range is 0-1 eV/atom, which is characteristic of low energy cluster beam deposition (LECBD). A pronounced epitaxy of the copper clusters is found. However, their morphology is significantly energy dependent. The structure and the radial pair correlation functions are used to study the details of the epitaxial properties as well as the pronounced relaxation in the interfacial cluster atom positions due to the lattice mismatch between copper and gold. The effect of the cluster and substrate average temperature is investigated and can be distinguished from the kinetic effect of the cluster impact.

  9. Resonance treatment using pin-based pointwise energy slowing-down method

    NASA Astrophysics Data System (ADS)

    Choi, Sooyoung; Lee, Changho; Lee, Deokjung

    2017-02-01

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation, and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for 238U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross section. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.
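
    For readers unfamiliar with pointwise slowing-down solutions, a toy illustration may help. The following sketch is an assumption-laden editorial example, not the paper's solver: a hydrogen-like moderator (A = 1) with invented cross sections, for which the slowing-down balance can be solved by a single sweep from high to low energy, reproducing the asymptotic 1/E flux and the flux depression (self-shielding) inside a resonance.

```python
# Editorial toy, not the paper's solver. For A = 1 the scatter-in source
# comes only from higher energies, so the balance
#     sig_t(E) * phi(E) = S/E0 + integral_E^E0 sig_s(E') phi(E') dE'/E'
# is solved by one sweep from high to low energy. All cross sections are
# invented; inside the fake resonance the flux dips below 1/E (self-shielding).
import numpy as np

def trapezoid_desc(y, x):
    """Trapezoid rule on a descending grid; returns the ascending integral."""
    return -float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

E = np.logspace(4, 0, 2000)                  # 10 keV -> 1 eV, descending
sig_s = np.full(E.size, 20.0)                # constant scattering xs (barns)
sig_a = np.where((E > 6.0) & (E < 8.0), 500.0, 0.0)   # toy resonance absorber
sig_t = sig_s + sig_a

S, E0 = 1.0, E[0]                            # unit source at the top energy
phi = np.zeros(E.size)
phi[0] = (S / E0) / sig_t[0]
for i in range(1, E.size):
    scatter_in = S / E0 + trapezoid_desc(
        sig_s[:i + 1] * phi[:i + 1] / E[:i + 1], E[:i + 1])
    phi[i] = scatter_in / sig_t[i]           # phi[i]=0 in integrand: small bias

# E * sig_t * phi is ~1 on the asymptotic 1/E flux, below 1 past the resonance.
print(E[0] * sig_t[0] * phi[0], E[-1] * sig_t[-1] * phi[-1])
```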

  10. Observation of slow down of polystyrene nanogels diffusivities in contact with swollen polystyrene brushes.

    PubMed

    Michailidou, V N; Loppinet, B; Vo, C D; Rühe, J; Tauer, K; Fytas, G

    2008-01-01

    The diffusion of dilute colloids in contact with swollen polymer brushes has been studied by evanescent wave dynamic light scattering. Two polystyrene nanogels with 16 nm and 42 nm radius were put into contact with three polystyrene brushes with varying grafting densities. Partial penetration of the nanogels within the brushes was revealed by the evanescent wave penetration depth-dependent scattering intensities. The experimental short-time diffusion coefficients of the penetrating particles were measured and found to strongly slow down as the nanoparticles get deeper into the brushes. The slow down is much more marked for the smaller (16 nm) nanogels, suggesting a size exclusion type of mechanism and the existence of a characteristic length scale present in the outer part of the brush.

  11. Critical slowing down of cluster algorithms for Ising models coupled to 2-d gravity

    NASA Astrophysics Data System (ADS)

    Bowick, Mark; Falcioni, Marco; Harris, Geoffrey; Marinari, Enzo

    1994-02-01

    We simulate single and multiple Ising models coupled to 2-d gravity using both the Swendsen-Wang and Wolff algorithms to update the spins. We study the integrated autocorrelation time and find that there is considerable critical slowing down, particularly in the magnetization. We argue that this is primarily due to the local nature of the dynamical triangulation algorithm and to the generation of a distribution of baby universes which inhibits cluster growth.
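
    The integrated autocorrelation time quoted above has a compact estimator. A minimal sketch follows (not the authors' code; truncating the sum at the first negative autocorrelation is one common heuristic among several).

```python
# Minimal estimator of the integrated autocorrelation time tau_int of a
# Monte Carlo time series (editorial sketch).
import numpy as np

def integrated_autocorrelation_time(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    tau = 0.5
    for rho in acf[1:]:
        if rho <= 0.0:                     # stop at the first sign change
            break
        tau += rho
    return tau

# Usage: feed in the magnetization measured after each cluster update.
m_series = np.random.default_rng(0).standard_normal(10000)  # placeholder
print(integrated_autocorrelation_time(m_series))            # ~0.5 for noise
```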

  12. Role of fivefold symmetry in the dynamical slowing down of metallic glass-forming liquids

    NASA Astrophysics Data System (ADS)

    Lü, Y. J.; Bi, Q. L.; Huang, H. S.; Pang, H. H.

    2017-08-01

    Fivefold symmetry is supposed to have an important role in suppressing crystallization and promoting glass transition due to its structural incompatibility with crystal. In this paper, we study the correlation between the fivefold symmetry and the dynamical slowing down in glass-forming Cu-Zr liquids using the single-particle dynamics method based on molecular dynamics simulations. The dynamics of the glass-forming liquids is microscopically characterized by the jump cage motion for individual atoms; moreover, the cooperative jumps become more pronounced upon approaching the glass transition temperature. We find that the role of fivefold symmetry in the dynamical slowing down does not lie in caging atomic motion but, more importantly, in suppressing cooperative jumps. The atoms with a high degree of fivefold symmetry and involved in jump motions appear more sluggish compared to other jumps. This behavior significantly suppresses the cooperative jumps around them, leading to the slowing down of fast dynamics. The degree of suppression has a close relation to the glass-forming ability and contributes to the "strong" character of liquids.

  13. Slowing down of North Pacific climate variability and its implications for abrupt ecosystem change.

    PubMed

    Boulton, Chris A; Lenton, Timothy M

    2015-09-15

    Marine ecosystems are sensitive to stochastic environmental variability, with higher-amplitude, lower-frequency--i.e., "redder"--variability posing a greater threat of triggering large ecosystem changes. Here we show that fluctuations in the Pacific Decadal Oscillation (PDO) index have slowed down markedly over the observational record (1900-present), as indicated by a robust increase in autocorrelation. This "reddening" of the spectrum of climate variability is also found in regionally averaged North Pacific sea surface temperatures (SSTs), and can be at least partly explained by observed deepening of the ocean mixed layer. The progressive reddening of North Pacific climate variability has important implications for marine ecosystems. Ecosystem variables that respond linearly to climate forcing will have become prone to much larger variations over the observational record, whereas ecosystem variables that respond nonlinearly to climate forcing will have become prone to more frequent "regime shifts." Thus, slowing down of North Pacific climate variability can help explain the large magnitude and potentially the quick succession of well-known abrupt changes in North Pacific ecosystems in 1977 and 1989. When looking ahead, despite model limitations in simulating mixed layer depth (MLD) in the North Pacific, global warming is robustly expected to decrease MLD. This could potentially reverse the observed trend of slowing down of North Pacific climate variability and its effects on marine ecosystems.
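
    The "reddening" diagnostic described above is the lag-1 autocorrelation in a sliding window. A hedged illustration on synthetic data follows; the window length and the AR(1) test signal are arbitrary choices, not the paper's settings, and real index data would be detrended first.

```python
# Sliding-window lag-1 autocorrelation (editorial illustration).
import numpy as np

def lag1(x):
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

def sliding_lag1(series, window=240):      # e.g. 20 years of monthly values
    return np.array([lag1(series[i:i + window])
                     for i in range(series.size - window)])

# Synthetic index whose memory (AR coefficient) grows along the record.
rng = np.random.default_rng(1)
n = 1380                                   # ~115 years of monthly data
x = np.zeros(n)
phi = np.linspace(0.3, 0.9, n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()
ac = sliding_lag1(x)
print(ac[0], ac[-1])                       # rising value = "reddening"
```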

  14. Numerical studies of fast ion slowing down rates in cool magnetized plasma using LSP

    NASA Astrophysics Data System (ADS)

    Evans, Eugene S.; Kolmes, Elijah; Cohen, Samuel A.; Rognlien, Tom; Cohen, Bruce; Meier, Eric; Welch, Dale R.

    2016-10-01

    In MFE devices, rapid transport of fusion products from the core into the scrape-off layer (SOL) could perform the dual roles of energy and ash removal. The first-orbit trajectories of most fusion products from small field-reversed configuration (FRC) devices will traverse the SOL, allowing those particles to deposit their energy in the SOL and be exhausted along the open field lines. Thus, the fast ion slowing-down time should affect the energy balance of an FRC reactor and its neutron emissions. However, the dynamics of fast ion energy loss processes under the conditions expected in the FRC SOL (with ρe < λDe) are analytically complex, and not yet fully understood. We use LSP, a 3D electromagnetic PIC code, to examine the effects of SOL density and background B-field on the slowing-down time of fast ions in a cool plasma. As we use explicit algorithms, these simulations must spatially resolve both ρe and λDe, as well as temporally resolve both Ωe and ωpe, which increases computation time. Scaling studies of the fast ion charge (Z) and background plasma density are in good agreement with unmagnetized slowing-down theory. Notably, Z-scaling represents a viable way to dramatically reduce the required CPU time for each simulation. This work was supported, in part, by DOE Contract Number DE-AC02-09CH11466.
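
    The resolution requirements mentioned above (cell size below ρe and λDe, time step below 1/Ωe and 1/ωpe) can be checked with standard plasma formulas. A back-of-envelope sketch in SI units follows; the density, temperature, and field values are illustrative placeholders, not the paper's parameters.

```python
# Back-of-envelope check of the scales an explicit PIC run must resolve
# (editorial sketch with placeholder plasma parameters).
import math

E_CHARGE, M_E, EPS0 = 1.602e-19, 9.109e-31, 8.854e-12

def plasma_scales(n_e, T_e_eV, B):
    """Return (omega_pe [rad/s], lambda_De [m], Omega_e [rad/s], rho_e [m])."""
    omega_pe = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))
    lambda_de = math.sqrt(EPS0 * T_e_eV * E_CHARGE / (n_e * E_CHARGE**2))
    omega_ce = E_CHARGE * B / M_E
    v_th = math.sqrt(T_e_eV * E_CHARGE / M_E)
    return omega_pe, lambda_de, omega_ce, v_th / omega_ce

print(plasma_scales(n_e=1e19, T_e_eV=50.0, B=0.1))
```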

  15. Hydrophobic molecules slow down the hydrogen-bond dynamics of water.

    PubMed

    Bakulin, Artem A; Pshenichnikov, Maxim S; Bakker, Huib J; Petersen, Christian

    2011-03-17

    We study the spectral and orientational dynamics of HDO molecules in solutions of tertiary-butyl-alcohol (TBA), trimethyl-amine-oxide (TMAO), and tetramethylurea (TMU) in isotopically diluted water (HDO:D₂O and HDO:H₂O). The spectral dynamics are studied with femtosecond two-dimensional infrared spectroscopy and the orientational dynamics with femtosecond polarization-resolved vibrational pump-probe spectroscopy. We observe a strong slowing down of the spectral diffusion around the central part of the absorption line that increases with increasing solute concentration. At low concentrations, the fraction of water showing slow spectral dynamics is observed to scale with the number of methyl groups, indicating that this effect is due to slow hydrogen-bond dynamics in the hydration shell of the methyl groups of the solute molecules. The slowing down of the vibrational frequency dynamics is strongly correlated with the slowing down of the orientational mobility of the water molecules. This correlation indicates that these effects have a common origin in the effect of hydrophobic molecular groups on the hydrogen-bond dynamics of water.

  16. Critical slowing down as early warning for the onset of collapse in mutualistic communities.

    PubMed

    Dakos, Vasilis; Bascompte, Jordi

    2014-12-09

    Tipping points are crossed when small changes in external conditions cause abrupt unexpected responses in the current state of a system. In the case of ecological communities under stress, the risk of approaching a tipping point is unknown, but its stakes are high. Here, we test recently developed critical slowing-down indicators as early-warning signals for detecting the proximity to a potential tipping point in structurally complex ecological communities. We use the structure of 79 empirical mutualistic networks to simulate a scenario of gradual environmental change that leads to an abrupt first extinction event followed by a sequence of species losses until the point of complete community collapse. We find that critical slowing-down indicators derived from time series of biomasses measured at the species and community level signal the proximity to the onset of community collapse. In particular, we identify specialist species as likely the best-indicator species for monitoring the proximity of a community to collapse. In addition, trends in slowing-down indicators are strongly correlated to the timing of species extinctions. This correlation offers a promising way for mapping species resilience and ranking species risk to extinction in a given community. Our findings pave the road for combining theory on tipping points with patterns of network structure that might prove useful for the management of a broad class of ecological networks under global environmental change.

  17. A quantitative model of application slow-down in multi-resource shared systems

    DOE PAGES

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by a vector-valued loading statistic, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor model accurately captures the slow-down of concurrent applications in such environments.

  18. A quantitative model of application slow-down in multi-resource shared systems

    SciTech Connect

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by a vector-valued loading statistic, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor model accurately captures the slow-down of concurrent applications in such environments.
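
    The quadratic structure of such a model can be illustrated schematically. The functional form and weights below are editorial assumptions for demonstration only, not the published D-factor model.

```python
# Schematic D-factor-style calculation (form and weights are editorial
# assumptions): slow-down grows quadratically in the loading vectors of
# the co-scheduled jobs.
import numpy as np

def dilation_factor(load_self, load_others, interaction):
    """1.0 means no slow-down; contention adds a quadratic cross term."""
    contention = sum(load_self @ interaction @ lj for lj in load_others)
    return 1.0 + contention

M = np.diag([0.5, 2.0, 1.0])          # hypothetical per-resource weights
job_a = np.array([0.6, 0.3, 0.1])     # (cpu, disk, network) loading of job A
job_b = np.array([0.2, 0.7, 0.1])
print(dilation_factor(job_a, [job_b], M))   # slow-down of A next to B
```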

  19. Critical slowing down and noise-induced intermittency in bistable perception: bifurcation analysis.

    PubMed

    Pisarchik, Alexander N; Jaimes-Reátegui, Rider; Magallón-García, C D Alejandro; Castillo-Morales, C Obed

    2014-08-01

    Stochastic dynamics and critical slowing down were studied experimentally and numerically near the onset of dynamical bistability in visual perception under the influence of noise. Exploring the Necker cube as the essential example of an ambiguous figure, and using its wire contrast as a control parameter, we measured dynamical hysteresis in two coexisting percepts as a function of both the velocity of the parameter change and the background luminance. The bifurcation analysis allowed us to estimate the level of cognitive noise inherent to brain neural cells activity, which induced intermittent switches between different perception states. The results of numerical simulations with a simple energy model are in good qualitative agreement with psychological experiments.
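
    The "simple energy model" mentioned above is commonly realized as a noisy double-well system. A minimal Langevin sketch under that assumption follows; all parameter values are illustrative, with the control parameter c playing the role of the wire contrast.

```python
# Noisy double-well Langevin model (all values illustrative): noise
# produces the intermittent switches between the two percepts.
import numpy as np

rng = np.random.default_rng(2)

def simulate(c, noise=0.4, dt=0.01, steps=100000):
    x = np.empty(steps)
    x[0] = -1.0
    for t in range(1, steps):
        drift = -(x[t - 1] ** 3 - x[t - 1] - c)    # -dV/dx for the double well
        x[t] = x[t - 1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x

switches = int(np.sum(np.diff(np.sign(simulate(c=0.0))) != 0))
print(switches)          # intermittent switching between the two wells
```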

  20. Critical slowing down exponents in structural glasses: Random orthogonal and related models

    NASA Astrophysics Data System (ADS)

    Caltagirone, F.; Ferrari, U.; Leuzzi, L.; Parisi, G.; Rizzo, T.

    2012-08-01

    An important prediction of mode-coupling theory is the relationship between the power-law decay exponents in the β regime and the consequent definition of the so-called exponent parameter λ. In the context of a certain class of mean-field glass models with quenched disorder, the physical meaning of λ has recently been understood, yielding a method to compute it exactly in a static framework. In this paper we exploit this new technique to compute the critical slowing down exponents for such models including, as special cases, the Sherrington-Kirkpatrick model, the p-spin model, and the random orthogonal model.

  1. Diffraction as a reason for slowing down light pulses in vacuum

    NASA Astrophysics Data System (ADS)

    Fedorov, M. V.; Vintskevich, S. V.; Grigoriev, D. A.

    2017-03-01

    The mean velocity of a finite-size short light pulse in a far zone is defined as the vectorial sum of velocities of all rays forming the pulse. Because of diffraction, the mean pulse velocity defined in this way is always somewhat smaller than the speed of light. The conditions are found when this slowing-down effect is sufficiently pronounced to be experimentally measurable. Under these conditions the original Gaussian shape of a pulse is found to be strongly modified with significant lengthening of the rear wing of the field envelope. Schemes for measuring these effects are suggested and discussed.
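
    The size of this slow-down can be estimated at small angles. The sketch below is an editorial simplification, not the paper's exact result: it assumes a Gaussian angular spectrum, for which the mean axial velocity is c⟨cosθ⟩, giving Δv/c ≈ θ₀²/4 with divergence θ₀ = λ/(πw₀).

```python
# Editorial small-angle estimate, not the paper's exact result: mean axial
# velocity of a Gaussian beam is c * <cos(theta)> over its angular spectrum.
import math

def fractional_slowdown(wavelength_m, waist_m):
    theta0 = wavelength_m / (math.pi * waist_m)   # far-field 1/e^2 divergence
    return theta0 ** 2 / 4.0                      # (c - v_mean) / c

print(fractional_slowdown(800e-9, 10e-6))   # tightly focused pulse
print(fractional_slowdown(800e-9, 1e-3))    # collimated beam: negligible
```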

  2. Lattice Cell Calculations, Slowing Down Theory and Computer Code Wims; Vver Type Reactors

    NASA Astrophysics Data System (ADS)

    Moen, J.; Brekke, A.; Hall, C.

    1991-01-01

    The following sections are included: * INTRODUCTION * WIMS AS A TOOL FOR REACTOR CORE CALCULATIONS * GENERAL STRUCTURE OF THE WIMS CODE * WIMS APPROACH TO THE SLOWING DOWN CALCULATIONS * MULTIGROUP MICROSCOPIC CROSS SECTIONS, RESONANCE TREATMENT * DETERMINATION OF MULTIGROUP SPECTRA * PHYSICAL MODELS IN MAIN TRANSPORT CALCULATIONS * BURNUP CALCULATIONS * APPLICATION OF WIMSD-4 TO VVER TYPE LATTICES * FINAL REMARKS * REFERENCES * APPENDIX A: DANCOFF FACTOR - STANDARD APPROACH * APPENDIX B: FORMULAS FOR DANCOFF AND BELL FACTORS CALCULATIONS APPLIED IN PREWIM * APPENDIX C: CALCULATION OF ONE GROUP PROBABILITIES Pij IN AN ANNULAR SYSTEM * APPENDIX D: SCHAEFER'S METHOD

  3. Measurements with the high flux lead slowing-down spectrometer at LANL

    NASA Astrophysics Data System (ADS)

    Danon, Y.; Romano, C.; Thompson, J.; Watson, T.; Haight, R. C.; Wender, S. A.; Vieira, D. J.; Bond, E.; Wilhelmy, J. B.; O'Donnell, J. M.; Michaudon, A.; Bredeweg, T. A.; Schurman, T.; Rochman, D.; Granier, T.; Ethvignot, T.; Taieb, J.; Becker, J. A.

    2007-08-01

    A Lead Slowing-Down Spectrometer (LSDS) was recently installed at LANL [D. Rochman, R.C. Haight, J.M. O'Donnell, A. Michaudon, S.A. Wender, D.J. Vieira, E.M. Bond, T.A. Bredeweg, A. Kronenberg, J.B. Wilhelmy, T. Ethvignot, T. Granier, M. Petit, Y. Danon, Characteristics of a lead slowing-down spectrometer coupled to the LANSCE accelerator, Nucl. Instr. and Meth. A 550 (2005) 397]. The LSDS consists of a cube of pure lead, 1.2 m on a side, with a spallation pulsed neutron source in its center. The LSDS is driven by 800 MeV protons with a time-averaged current of up to 1 μA, pulse widths of 0.05-0.25 μs and a repetition rate of 20-40 Hz. Spallation neutrons are created by directing the proton beam into an air-cooled tungsten target in the center of the lead cube. The neutrons slow down by scattering interactions with the lead and thus enable measurements of neutron-induced reaction rates as a function of the slowing-down time, which correlates with neutron energy. The advantage of an LSDS as a neutron spectrometer is that the neutron flux is 3-4 orders of magnitude higher than in a standard time-of-flight experiment at the equivalent flight path, 5.6 m. The effective energy range is 0.1 eV to 100 keV with a typical energy resolution of 30% from 1 eV to 10 keV. The average neutron flux between 1 and 10 keV is about 1.7 × 10⁹ n/cm²/s/μA. This high flux makes the LSDS an important tool for neutron-induced cross section measurements of ultra-small samples (nanograms) or of samples with very low cross sections. The LSDS at LANL was initially built in order to measure the fission cross section of the short-lived metastable isotope of U-235; however, it can also be used to measure (n, α) and (n, p) reactions. Fission cross section measurements were made with samples of ²³⁵U, ²³⁶U, ²³⁸U and ²³⁹Pu. The smallest sample measured was 10 ng of ²³⁹Pu. Measurement of the (n, α) cross section with 760 ng of Li-6 was also demonstrated. Possible future cross section measurements
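
    The slowing-down time/energy correlation mentioned above is commonly parameterized as E = K/(t + t₀)² for lead spectrometers. The constants below are typical literature-order values stated here as assumptions; a real instrument calibrates K and t₀ for itself.

```python
# Assumed typical calibration, not this instrument's: lead spectrometers
# follow E = K / (t + t0)^2 to good accuracy over the slowing-down range.
def neutron_energy_keV(t_us, K=165.0, t0=0.3):
    """Mean neutron energy (keV) at slowing-down time t (microseconds)."""
    return K / (t_us + t0) ** 2

for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} us  ->  E ~ {neutron_energy_keV(t):.4f} keV")
```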

  4. Geant4-DNA simulation of electron slowing-down spectra in liquid water

    NASA Astrophysics Data System (ADS)

    Incerti, S.; Kyriakou, I.; Tran, H. N.

    2017-04-01

    This work presents the simulation of monoenergetic electron slowing-down spectra in liquid water by the Geant4-DNA extension of the Geant4 Monte Carlo toolkit (release 10.2p01). These spectra are simulated for several incident energies using the most recent Geant4-DNA physics models, and they are compared to literature data. The influence of Auger electron production is discussed. For the first time, a dedicated Geant4-DNA example allowing such simulations is described and is provided to Geant4 users, allowing further verification of Geant4-DNA track structure simulation capabilities.

  5. Small but slow world: how network topology and burstiness slow down spreading.

    PubMed

    Karsai, M; Kivelä, M; Pan, R K; Kaski, K; Kertész, J; Barabási, A-L; Saramäki, J

    2011-02-01

    While communication networks show the small-world property of short paths, the spreading dynamics in them turn out to be slow. Here, the time evolution of information propagation is followed through communication networks by using empirical data on contact sequences and the susceptible-infected model. Introducing null models where event sequences are appropriately shuffled, we are able to distinguish between the contributions of different impeding effects. The slowing down of spreading is found to be caused mainly by weight-topology correlations and the bursty activity patterns of individuals.
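
    The simulation approach described above is compact enough to sketch. The following toy code is an editorial illustration (the contact data and the particular shuffling are assumptions): it runs the susceptible-infected model over a time-ordered contact list and over a time-shuffled null model.

```python
# Toy susceptible-infected run over a contact sequence plus a time-shuffled
# null model (editorial sketch).
import random

def si_spread(events, seed_node):
    """events: time-ordered (t, u, v) contacts; returns infection times."""
    infected = {seed_node: 0.0}
    for t, u, v in events:
        if u in infected and v not in infected:
            infected[v] = t
        elif v in infected and u not in infected:
            infected[u] = t
    return infected

def shuffle_times(events):
    """Null model: keep who-contacts-whom, randomize when it happens."""
    pairs = [(u, v) for _, u, v in events]
    random.shuffle(pairs)
    return [(t, u, v) for (t, _, _), (u, v) in zip(sorted(events), pairs)]

contacts = [(0.0, 1, 2), (1.5, 2, 3), (1.6, 2, 3), (9.0, 3, 4)]  # toy data
print(si_spread(contacts, seed_node=1))
print(si_spread(shuffle_times(contacts), seed_node=1))
```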

  6. Critical slowing down associated with regime shifts in the US housing market

    NASA Astrophysics Data System (ADS)

    Tan, James Peng Lung; Cheong, Siew Ann

    2014-02-01

    Complex systems are described by a large number of variables with strong and nonlinear interactions. Such systems frequently undergo regime shifts. Combining insights from bifurcation theory in nonlinear dynamics and the theory of critical transitions in statistical physics, we know that critical slowing down and critical fluctuations occur close to such regime shifts. In this paper, we show how universal precursors expected from such critical transitions can be used to forecast regime shifts in the US housing market. In the housing permit, volume of homes sold and percentage of homes sold for gain data, we detected strong early warning signals associated with a sequence of coupled regime shifts, starting from a Subprime Mortgage Loans transition in 2003-2004 and ending with the Subprime Crisis in 2007-2008. Weaker signals of critical slowing down were also detected in the US housing market data during the 1997-1998 Asian Financial Crisis and the 2000-2001 Technology Bubble Crisis. Backed by various macroeconomic data, we propose a scenario whereby hot money flowing back into the US during the Asian Financial Crisis fueled the Technology Bubble. When the Technology Bubble collapsed in 2000-2001, the hot money then flowed into the US housing market, triggering the Subprime Mortgage Loans transition in 2003-2004 and an ensuing sequence of transitions. We showed how this sequence of coupled transitions unfolded in space and in time over the whole of the US.

  7. Temporal variation in antibiotic environments slows down resistance evolution in pathogenic Pseudomonas aeruginosa

    PubMed Central

    Roemhild, Roderich; Barbosa, Camilo; Beardmore, Robert E; Jansen, Gunther; Schulenburg, Hinrich

    2015-01-01

    Antibiotic resistance is a growing concern to public health. New treatment strategies may alleviate the situation by slowing down the evolution of resistance. Here, we evaluated sequential treatment protocols using two fully independent laboratory-controlled evolution experiments with the human pathogen Pseudomonas aeruginosa PA14 and two pairs of clinically relevant antibiotics (doripenem/ciprofloxacin and cefsulodin/gentamicin). Our results consistently show that the sequential application of two antibiotics decelerates resistance evolution relative to monotherapy. Sequential treatment enhanced population extinction although we applied antibiotics at sublethal dosage. In both experiments, we identified an order effect of the antibiotics used in the sequential protocol, leading to significant variation in the long-term efficacy of the tested protocols. These variations appear to be caused by asymmetric evolutionary constraints, whereby adaptation to one drug slowed down adaptation to the other drug, but not vice versa. An understanding of such asymmetric constraints may help future development of evolutionary robust treatments against infectious disease. PMID:26640520

  8. Slowing down of ring polymer diffusion caused by inter-ring threading.

    PubMed

    Lee, Eunsang; Kim, Soree; Jung, YounJoon

    2015-06-01

    Diffusion of long ring polymers in a melt is much slower than the reorganization of their internal structures. While direct evidence for entanglements has not been observed in the long ring polymers unlike linear polymer melts, threading between the rings is suspected to be the main reason for slowing down of ring polymer diffusion. It is, however, difficult to define the threading configuration between two rings because the rings have no chain end. In this work, evidence for threading dynamics of ring polymers is presented by using molecular dynamics simulation and applying a novel analysis method. The simulation results are analyzed in terms of the statistics of persistence and exchange times that have proved useful in studying heterogeneous dynamics of glassy systems. It is found that the threading time of ring polymer melts increases more rapidly with the degree of polymerization than that of linear polymer melts. This indicates that threaded ring polymers cannot diffuse until an unthreading event occurs, which results in the slowing down of ring polymer diffusion.
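
    The persistence/exchange statistics borrowed from the glass literature admit a short illustration. The definitions below are assumed conventions, not the authors' exact estimator: exchange times are gaps between consecutive threading/unthreading events, while persistence times run from an arbitrary time origin to the next event.

```python
# Assumed conventions (editorial sketch): exchange times = gaps between
# consecutive state changes; persistence times = waits from arbitrary
# time origins to the next change.
import numpy as np

def switch_times(series):
    return np.flatnonzero(np.diff(series) != 0) + 1

def exchange_times(series):
    return np.diff(switch_times(series))

def persistence_times(series, origins):
    s = switch_times(series)
    idx = np.searchsorted(s, origins, side="right")
    valid = idx < s.size                 # drop origins with no later event
    return s[idx[valid]] - np.asarray(origins)[valid]

# Toy 0/1 "threaded" signal: a random telegraph with rare flips.
state = (np.random.default_rng(3).random(10000) < 0.001).cumsum() % 2
print(exchange_times(state).mean(), persistence_times(state, [0, 500]).mean())
```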

  9. UDCA slows down intestinal cell proliferation by inducing high and sustained ERK phosphorylation.

    PubMed

    Krishna-Subramanian, S; Hanski, M L; Loddenkemper, C; Choudhary, B; Pagès, G; Zeitz, M; Hanski, C

    2012-06-15

    Ursodeoxycholic acid (UDCA) attenuates colon carcinogenesis in humans and in animal models by an unknown mechanism. We investigated UDCA effects on normal intestinal epithelium in vivo and in vitro to identify the potential chemopreventive mechanism. Feeding of mice with 0.4% UDCA reduced cell proliferation to 50% and suppressed several potential proproliferatory genes including insulin receptor substrate 1 (Irs-1). A similar transcriptional response was observed in the rat intestinal cell line IEC-6 which was then used as an in vitro model. UDCA slowed down the proliferation of IEC-6 cells and induced sustained hyperphosphorylation of ERK1/ERK2 kinases which completely inhibited the proproliferatory effects of EGF and IGF-1. The hyperphosphorylation of ERK1 led to a transcriptional suppression of the Irs-1 gene. Both, the hyperphosphorylation of ERK as well as the suppression of Irs-1 were sufficient to inhibit proliferation of IEC-6 cells. ERK1/ERK2 inhibition in vitro or ERK1 elimination in vitro or in vivo abrogated the antiproliferatory effects of UDCA. We show that UDCA inhibits proliferation of nontransformed intestinal epithelial cells by inducing a sustained hyperphosphorylation of ERK1 kinase which slows down the cell cycle and reduces expression of Irs-1 protein. These data extend our understanding of the physiological and potentially chemopreventive effects of UDCA and identify new targets for chemoprevention.

  10. Synchronous slowing down in coupled logistic maps via random network topology

    PubMed Central

    Wang, Sheng-Jun; Du, Ru-Hai; Jin, Tao; Wu, Xing-Sen; Qu, Shi-Xian

    2016-01-01

    The speed and paths of synchronization play a key role in the function of a system, which has not received enough attention up to now. In this work, we study the synchronization process of coupled logistic maps that reveals the common features of low-dimensional dissipative systems. A slowing down of synchronization process is observed, which is a novel phenomenon. The result shows that there are two typical kinds of transient process before the system reaches complete synchronization, which is demonstrated by both the coupled multiple-period maps and the coupled multiple-band chaotic maps. When the coupling is weak, the evolution of the system is governed mainly by the local dynamic, i.e., the node states are attracted by the stable orbits or chaotic attractors of the single map and evolve toward the synchronized orbit in a less coherent way. When the coupling is strong, the node states evolve in a high coherent way toward the stable orbit on the synchronized manifold, where the collective dynamics dominates the evolution. At intermediate coupling strengths, the interplay between the two paths is responsible for the slowing down. The existence of different synchronization paths is also proven by the finite-time Lyapunov exponent and its distribution. PMID:27021897

  11. Gel mesh as ``brake'' to slow down DNA translocation through solid-state nanopores

    NASA Astrophysics Data System (ADS)

    Tang, Zhipeng; Liang, Zexi; Lu, Bo; Li, Ji; Hu, Rui; Zhao, Qing; Yu, Dapeng

    2015-07-01

    Agarose gel is introduced onto the cis side of silicon nitride nanopores by a simple and low-cost method to slow down the speed of DNA translocation. DNA translocation speed is slowed by roughly an order of magnitude without losing signal to noise ratio for different DNA lengths and applied voltages in gel-meshed nanopores. The existence of the gel moves the center-of-mass position of the DNA conformation further from the nanopore center, contributing to the observed slowing of translocation speed. A reduced velocity fluctuation is also noted, which is beneficial for further applications of gel-meshed nanopores. The reptation model is considered in simulation and agrees well with the experimental results. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03084f

  12. Modeling resonance interference by 0-D slowing-down solution with embedded self-shielding method

    SciTech Connect

    Liu, Y.; Martin, W.; Kim, K. S.; Williams, M.

    2013-07-01

    The resonance integral table based methods employing the conventional multigroup structure for the resonance self-shielding calculation have a common difficulty in treating resonance interference. The problem arises due to the lack of sufficient energy dependence of the resonance cross sections when the calculation is performed in the multigroup structure. To address this, a resonance interference factor model has been proposed to account for the interference effect by comparing the interfered and non-interfered effective cross sections obtained from 0-D homogeneous slowing-down solutions with continuous-energy cross sections. A rigorous homogeneous slowing-down solver is developed with two important features for reducing the calculation time and memory requirement for practical applications. The embedded self-shielding method (ESSM) is chosen as the multigroup resonance self-shielding solver, an integral component of the interference method. The interference method is implemented in the DeCART transport code. Verification results show that the code system provides more accurate effective cross sections and multiplication factors than the conventional interference method for UO₂ and MOX fuel cases. The additional computing time and memory for the interference correction are acceptable for the test problems, including a depletion case with 87 isotopes in the fuel region. (authors)

  13. Synchronous slowing down in coupled logistic maps via random network topology

    NASA Astrophysics Data System (ADS)

    Wang, Sheng-Jun; Du, Ru-Hai; Jin, Tao; Wu, Xing-Sen; Qu, Shi-Xian

    2016-03-01

    The speed and paths of synchronization play a key role in the function of a system, which has not received enough attention up to now. In this work, we study the synchronization process of coupled logistic maps that reveals the common features of low-dimensional dissipative systems. A slowing down of synchronization process is observed, which is a novel phenomenon. The result shows that there are two typical kinds of transient process before the system reaches complete synchronization, which is demonstrated by both the coupled multiple-period maps and the coupled multiple-band chaotic maps. When the coupling is weak, the evolution of the system is governed mainly by the local dynamic, i.e., the node states are attracted by the stable orbits or chaotic attractors of the single map and evolve toward the synchronized orbit in a less coherent way. When the coupling is strong, the node states evolve in a high coherent way toward the stable orbit on the synchronized manifold, where the collective dynamics dominates the evolution. At intermediate coupling strengths, the interplay between the two paths is responsible for the slowing down. The existence of different synchronization paths is also proven by the finite-time Lyapunov exponent and its distribution.
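
    A toy version of such a system is easy to write down. The coupling form, parameters, and network model below are common conventions assumed for illustration, not the authors' exact setup; at weak coupling the run may simply exhaust its budget without synchronizing.

```python
# Toy coupled logistic maps on a random network (editorial conventions):
# x_i <- (1-eps) f(x_i) + (eps/k_i) sum_j f(x_j) over neighbors j.
import numpy as np

rng = np.random.default_rng(0)

def sync_time(n=100, p=0.1, eps=0.4, r=3.8, tol=1e-10, tmax=50000):
    A = np.triu(rng.random((n, n)) < p, 1)
    A = (A | A.T).astype(float)               # undirected random network
    deg = np.maximum(A.sum(axis=1), 1.0)
    x = rng.random(n)
    for t in range(tmax):
        if x.std() < tol:
            return t                           # synchronized
        fx = r * x * (1.0 - x)
        x = (1.0 - eps) * fx + eps * (A @ fx) / deg
    return tmax                                # budget exhausted: no sync yet

for eps in (0.15, 0.4, 0.9):                   # weak / intermediate / strong
    print(eps, sync_time(eps=eps))
```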

  14. Early warning of climate tipping points from critical slowing down: comparing methods to improve robustness

    PubMed Central

    Lenton, T. M.; Livina, V. N.; Dakos, V.; Van Nes, E. H.; Scheffer, M.

    2012-01-01

    We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings. PMID:22291229

  15. Development for fissile assay in recycled fuel using lead slowing down spectrometer

    SciTech Connect

    Lee, Yong Deok; Je Park, C.; Kim, Ho-Dong; Song, Kee Chan

    2013-07-01

    A future nuclear energy system is under development to turn spent fuel produced by PWRs into fuel for an SFR (Sodium Fast Reactor) through the pyrochemical process. Knowledge of the isotopic fissile content of the new fuel is very important for fuel safety. A lead slowing-down spectrometer (LSDS) is under development to analyze the fissile material content (²³⁹Pu, ²⁴¹Pu and ²³⁵U) of the fuel. The LSDS requires a neutron source; the neutrons are slowed down through their passage in a lead medium and finally enter the fuel, where they induce fission reactions whose analysis then determines the isotopic content of the fuel. The issue is that the spent fuel emits intense gamma rays and neutrons by spontaneous fission. The threshold fission detector screens the prompt fast fission neutrons, and as a result the LSDS is not influenced by the high-level radiation background. The energy resolution of the LSDS is good in the range 0.1 eV to 1 keV, which is also the range in which the fission reaction is the most discriminating for the considered fissile isotopes. An electron accelerator has been chosen to produce neutrons with an adequate target through (e⁻,γ)(γ,n) reactions.

  16. A close look at axonal transport: Cargos slow down when crossing stationary organelles.

    PubMed

    Che, Daphne L; Chowdary, Praveen D; Cui, Bianxiao

    2016-01-01

    The bidirectional transport of cargos along the thin axon is fundamental for the structure, function and survival of neurons. Defective axonal transport has been linked to the mechanism of neurodegenerative diseases. In this paper, we study the effect of the local axonal environment on cargo transport behavior in neurons. Using dual-color fluorescence imaging in microfluidic neuronal devices, we quantify the transport dynamics of cargos when crossing stationary organelles such as non-moving endosomes and stationary mitochondria in the axon. We show that the axonal cargos tend to slow down, or pause transiently, within the vicinity of stationary organelles. The slow-down effect is observed in both retrograde and anterograde transport directions for three different cargos (TrkA, lysosomes and TrkB). Our results agree with the hypothesis that bulky axonal structures can pose steric hindrances to axonal transport. However, the results do not rule out the possibility that the cellular mechanisms causing stationary organelles are also responsible for the delay of moving cargos at the same locations.

  17. Early warning of climate tipping points from critical slowing down: comparing methods to improve robustness.

    PubMed

    Lenton, T M; Livina, V N; Dakos, V; van Nes, E H; Scheffer, M

    2012-03-13

    We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings.
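
    Of the two indicators compared above, detrended fluctuation analysis is the less standard one to code. A minimal DFA sketch follows; the box sizes and first-order detrending are conventional choices assumed here, not necessarily the paper's settings.

```python
# Minimal DFA sketch (editorial; conventional box sizes and linear detrend).
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    y = np.cumsum(x - np.mean(x))             # integrated profile
    flucts = []
    for s in scales:
        n_boxes = y.size // s
        f2 = 0.0
        for b in range(n_boxes):
            seg = y[b * s:(b + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)      # linear detrend per box
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        flucts.append(np.sqrt(f2 / n_boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope                  # ~0.5 for white noise; grows near tipping

print(dfa_exponent(np.random.default_rng(4).standard_normal(4096)))  # ~0.5
```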

  18. Critical Slowing Down of Quadrupole and Hexadecapole Orderings in Iron Pnictide Superconductor

    NASA Astrophysics Data System (ADS)

    Kurihara, Ryosuke; Mitsumoto, Keisuke; Akatsu, Mitsuhiro; Nemoto, Yuichi; Goto, Terutaka; Kobayashi, Yoshiaki; Sato, Masatoshi

    2017-06-01

    Ultrasonic measurements have been carried out to investigate the critical dynamics of the structural and superconducting transitions due to degenerate orbital bands in iron pnictide compounds with the formula Ba(Fe₁₋ₓCoₓ)₂As₂. The attenuation coefficient α_L[110] of the longitudinal ultrasonic wave for (C₁₁ + C₁₂ + 2C₆₆)/2 for x = 0.036 reveals critical slowing down of the relaxation time around the structural transition at T_s = 65 K, which is caused by ferro-type ordering of the quadrupole O_{x'²-y'²} coupled to the strain ε_xy. The attenuation coefficient α₆₆ of the transverse ultrasonic wave for C₆₆ for x = 0.071 also exhibits critical slowing down around the superconducting transition at T_SC = 23 K, which is caused by ferro-type ordering of the hexadecapole H_z^α(r_i, r_j) = O_{x'y'}(r_i) O_{x'²-y'²}(r_j) + O_{x'²-y'²}(r_i) O_{x'y'}(r_j) of the bound two-electron state coupled to the rotation ω_xy. It is proposed that the hexadecapole ordering associated with the superconductivity brings about a spontaneous rotation of the macroscopic superconducting state with respect to the host tetragonal lattice.

  19. PT-symmetric slowing down of decoherence

    SciTech Connect

    Gardas, Bartlomiej; Deffner, Sebastian; Saxena, Avadh Behari

    2016-10-27

    Here, we investigate PT-symmetric quantum systems ultraweakly coupled to an environment. We find that such open systems evolve under PT-symmetric, purely dephasing and unital dynamics. The dynamical map describing the evolution is then determined explicitly using a quantum canonical transformation. Furthermore, we provide an explanation of why PT-symmetric dephasing-type interactions lead to a critical slowing down of decoherence. This effect is further exemplified with an experimentally relevant system, a PT-symmetric qubit easily realizable, e.g., in optical or microcavity experiments.

  20. Microdosimetry of the full slowing down of protons using Monte Carlo track structure simulations.

    PubMed

    Liamsuwan, T; Uehara, S; Nikjoo, H

    2015-09-01

    The article investigates two approaches to microdosimetric calculations based on Monte Carlo track structure (MCTS) simulations of a 160-MeV proton beam. In the first approach, microdosimetric parameters of the proton beam were obtained using the weighted sum of proton energy distributions and microdosimetric parameters of proton track segments (TSMs). In the second approach, phase spaces of energy depositions obtained using MCTS simulations in the full slowing down (FSD) mode were used for the microdosimetric calculations. Targets of interest were water cylinders of 2.3-100 nm in diameter and height. Frequency-averaged lineal energies (ȳ_F) obtained using both approaches agreed within the statistical uncertainties. Discrepancies beyond this level were observed for dose-averaged lineal energies (ȳ_D) towards the Bragg peak region, due to the small number of proton energies used in the TSM approach and different energy deposition patterns in the TSM and FSD of protons.
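
    The two averages compared above have standard microdosimetric definitions: the lineal energy of an event is y = ε/l̄, with ȳ_F the frequency mean and ȳ_D the dose-weighted mean. A short sketch applying those definitions to placeholder event energies; for a cylinder whose height equals its diameter d, the mean chord length is l̄ = 2d/3.

```python
# Standard definitions applied to made-up numbers (editorial sketch).
import numpy as np

def lineal_energy_averages(event_energies_keV, mean_chord_um):
    y = np.asarray(event_energies_keV) / mean_chord_um   # keV/um per event
    y_f = float(y.mean())                                # frequency average
    y_d = float((y ** 2).sum() / y.sum())                # dose average
    return y_f, y_d

d_um = 0.1                              # 100 nm cylinder, height = diameter
events = [0.5, 1.2, 0.8, 3.1, 0.2]      # made-up event deposits, keV
print(lineal_energy_averages(events, mean_chord_um=2.0 * d_um / 3.0))
```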

  1. Equilibrium and stability in a heliotron with anisotropic hot particle slowing-down distribution

    SciTech Connect

    Cooper, W. A.; Asahi, Y.; Narushima, Y.; Suzuki, Y.; Watanabe, K. Y.; Graves, J. P.; Isaev, M. Yu.

    2012-10-15

    The equilibrium and linear fluid magnetohydrodynamic (MHD) stability in an inward-shifted large helical device heliotron configuration are investigated with the 3D ANIMEC and TERPSICHORE codes, respectively. A modified slowing-down distribution function is invoked to study anisotropic pressure conditions. An appropriate choice of coefficients and exponents allows the simulation of neutral beam injection in which the angle of injection is varied from parallel to perpendicular. The fluid stability analysis concentrates on the application of the Johnson-Kulsrud-Weimer energy principle. The growth rates are maximum at ⟨β⟩ ≈ 2%, decrease significantly at ⟨β⟩ ≈ 4.5%, do not vary significantly with variations of the injection angle, and are similar to those predicted with a bi-Maxwellian hot particle distribution function model. Stability is predicted at ⟨β⟩ ≈ 2.5% with a sufficiently peaked energetic particle pressure profile. Electrostatic potential forms from the MHD instability necessary for guiding centre orbit following are calculated.
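
    The distribution being modified here builds on the standard isotropic slowing-down form f(v) ∝ v²/(v³ + v_c³) below the birth speed. In the sketch, the separable Gaussian pitch-angle factor is purely an illustrative assumption for beam-like injection, not the paper's parameterization.

```python
# Isotropic slowing-down form with an assumed Gaussian pitch factor
# (illustrative only; not the paper's exact distribution).
import numpy as np

def slowing_down(v, v_birth, v_crit):
    f = np.where(v <= v_birth, v**2 / (v**3 + v_crit**3), 0.0)
    return f / np.sum(f * (v[1] - v[0]))      # normalize on the uniform grid

def anisotropic(v, pitch, v_birth, v_crit, pitch0=0.7, width=0.2):
    """pitch = v_parallel/v; peaked near the assumed injection angle."""
    return slowing_down(v, v_birth, v_crit) * np.exp(-((pitch - pitch0) / width) ** 2)

v = np.linspace(0.01, 1.5, 500)               # speeds in units of v_birth
f = slowing_down(v, v_birth=1.0, v_crit=0.5)
print(np.sum(f * (v[1] - v[0])))              # ~1.0 (normalized)
```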

  2. Critical slowing down of spin fluctuations in BiFeO3

    NASA Astrophysics Data System (ADS)

    Scott, J. F.; Singh, M. K.; Katiyar, R. S.

    2008-10-01

    In earlier work we reported the discovery of phase transitions in BiFeO3 evidenced by divergences in the magnon light-scattering cross-sections at 140 and 201 K (Singh et al 2008 J. Phys.: Condens. Matter 20 252203) and fitted these intensity data to critical exponents α = 0.06 and α' = 0.10 (Scott et al 2008 J. Phys.: Condens. Matter 20 322203), under the assumption that the transitions are strongly magnetoelastic (Redfern et al 2008, in press) and couple to strain divergences through the Pippard relationship (Pippard 1956 Phil. Mag. 1 473). In the present paper we extend those criticality studies to examine the magnon linewidths, which exhibit critical slowing down (and hence linewidth narrowing) of spin fluctuations. The linewidth data near the two transitions are qualitatively different and we cannot reliably extract a critical exponent ν, although the mean field value ν = 1/2 gives a good fit near the lower transition.

  3. Structure and dynamics of water in crowded environments slows down peptide conformational changes

    SciTech Connect

    Lu, Cheng; Prada-Gracia, Diego; Rao, Francesco

    2014-07-28

    The concentration of macromolecules inside the cell is high with respect to conventional in vitro experiments or simulations. In an effort to characterize the effects of crowding on the thermodynamics and kinetics of disordered peptides, molecular dynamics simulations were run at different concentrations by varying the number of identical weakly interacting peptides inside the simulation box. We found that the presence of crowding does not influence very much the overall thermodynamics. On the other hand, peptide conformational dynamics was found to be strongly affected, resulting in a dramatic slowing down at larger concentrations. The observation of long lived water bridges between peptides at higher concentrations points to a nontrivial role of the solvent in the altered peptide kinetics. Our results reinforce the idea for an active role of water in molecular crowding, an effect that is expected to be relevant for problems influenced by large solvent exposure areas like in intrinsically disordered proteins.

  4. Slow down of a globally neutral relativistic e-e+ beam shearing the vacuum

    NASA Astrophysics Data System (ADS)

    Alves, E. P.; Grismayer, T.; Silveirinha, M. G.; Fonseca, R. A.; Silva, L. O.

    2016-01-01

    The microphysics of relativistic collisionless shear flows is investigated in a configuration consisting of a globally neutral, relativistic e⁻e⁺ beam streaming through a hollow plasma/dielectric channel. We show through multidimensional particle-in-cell simulations that this scenario excites the mushroom instability (MI), a transverse shear instability on the electron scale, when there is no overlap (no contact) between the e⁻e⁺ beam and the walls of the hollow plasma channel. The onset of the MI leads to the conversion of the beam's kinetic energy into magnetic (and electric) field energy, effectively slowing down a globally neutral body in the absence of contact. The collisionless shear physics explored in this configuration may operate in astrophysical environments, particularly in highly relativistic and supersonic settings where macroscopic shear processes are stable.

  5. Exponential discontinuous numerical scheme for electron transport in the continuous slowing down approximation

    SciTech Connect

    Prinja, A.K.; Lorence, L.J.

    1997-06-01

    A nonlinear discretization scheme in space and energy, based on the recently developed exponential discontinuous method, is applied to continuous slowing down dominated electron transport (i.e., in the absence of scattering). Numerical results for dose and charge deposition are obtained and compared against results from the ONELD and ONEBFP codes, and against exact results from an adjoint Monte Carlo code. It is found that although the exponential discontinuous scheme yields strictly positive and monotonic solutions, the dose profile is considerably straggled when compared to results from the linear codes. On the other hand, the linear schemes produce negative results which, furthermore, do not damp effectively in some cases. A general conclusion is that while yielding strictly positive solutions, the exponential discontinuous method does not show the coarse-cell accuracy for charged particle transport that was apparent for neutral particle transport problems.
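
    In the continuous slowing down approximation the electron loses energy deterministically, so the CSDA range is R = integral from 0 to E0 of dE/S(E). A short sketch with a generic stopping function follows; the S(E) used is a placeholder, not data from the paper.

```python
# CSDA range with a placeholder stopping function: R = integral dE / S(E);
# without scattering the energy loss is deterministic, which is the regime
# the scheme above targets.
import numpy as np

def csda_range(E0, stopping_power, n=10001):
    E = np.linspace(0.0, E0, n)
    y = 1.0 / stopping_power(E)
    dE = E[1] - E[0]
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * dE)   # trapezoid rule

S = lambda E: 1.0 + 2.0 * E                # placeholder S(E) in MeV/cm
print(csda_range(1.0, S))                  # = 0.5*ln(3) ~ 0.549 cm at 1 MeV
```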

  6. Disentangling density and temperature effects in the viscous slowing down of glassforming liquids

    NASA Astrophysics Data System (ADS)

    Tarjus, G.; Kivelson, D.; Mossa, S.; Alba-Simionesco, C.

    2004-04-01

    We present a consistent picture of the respective role of density (ρ) and temperature (T) in the viscous slowing down of glassforming liquids and polymers. Specifically, based in part upon a new analysis of simulation and experimental data on liquid ortho-terphenyl, we conclude that a zeroth-order description of the approach to the glass transition (in the range of experimentally accessible pressures) should be formulated in terms of a temperature-driven super-Arrhenius activated behavior rather than a density-driven congestion or jamming phenomenon. The density plays a role at a quantitative level, but its effect on the viscosity and the α-relaxation time can be simply described via a single parameter, an effective interaction energy that is characteristic of the high-T liquid regime; as a result, ρ does not affect the "fragility" of the glassforming system.

  7. High temperature slows down growth in tobacco hornworms (Manduca sexta larvae) under food restriction.

    PubMed

    Hayes, Matthew B; Jiao, Lihong; Tsao, Tsu-hsuan; King, Ian; Jennings, Michael; Hou, Chen

    2015-03-01

    When fed ad libitum (AL), ectothermic animals usually grow faster and have higher metabolic rate at higher ambient temperature. However, if food supply is limited, there is an energy tradeoff between growth and metabolism. Here we hypothesize that for ectothermic animals under food restriction (FR), high temperature will lead to a high metabolic rate, but growth will slow down to compensate for the high metabolism. We measure the rates of growth and metabolism of 4 cohorts of 5th instar hornworms (Manduca sexta larvae) reared at 2 levels of food supply (AL and FR) and 2 temperatures (20 and 30 °C). Our results show that, compared to the cohorts reared at 20 °C, the ones reared at 30 °C have high metabolic rates under both AL and FR conditions, but a high growth rate under AL and a low growth rate under FR, supporting this hypothesis.

  8. Critical slowing down as early warning for the onset and termination of depression

    PubMed Central

    van de Leemput, Ingrid A.; Wichers, Marieke; Cramer, Angélique O. J.; Borsboom, Denny; Tuerlinckx, Francis; Kuppens, Peter; van Nes, Egbert H.; Viechtbauer, Wolfgang; Giltay, Erik J.; Aggen, Steven H.; Derom, Catherine; Jacobs, Nele; Kendler, Kenneth S.; van der Maas, Han L. J.; Neale, Michael C.; Peeters, Frenk; Thiery, Evert; Zachar, Peter; Scheffer, Marten

    2014-01-01

    About 17% of humanity goes through an episode of major depression at some point in their lifetime. Despite the enormous societal costs of this incapacitating disorder, it is largely unknown how the likelihood of falling into a depressive episode can be assessed. Here, we show for a large group of healthy individuals and patients that the probability of an upcoming shift between a depressed and a normal state is related to elevated temporal autocorrelation, variance, and correlation between emotions in fluctuations of autorecorded emotions. These are indicators of the general phenomenon of critical slowing down, which is expected to occur when a system approaches a tipping point. Our results support the hypothesis that mood may have alternative stable states separated by tipping points, and suggest an approach for assessing the likelihood of transitions into and out of depression. PMID:24324144
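    The indicators named above are simple to compute. The sketch below (an illustration, not the authors' analysis pipeline) estimates the two univariate ones, lag-1 autocorrelation and variance, in sliding windows over a regularly sampled emotion time series; both are expected to rise as a tipping point is approached.

        import numpy as np

        def early_warning_indicators(x, window=50):
            # Lag-1 autocorrelation and variance in sliding windows.
            ac1, var = [], []
            for i in range(len(x) - window):
                w = x[i:i + window]
                var.append(np.var(w))
                ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
            return np.array(ac1), np.array(var)

        # Toy AR(1) series whose persistence phi drifts upward over time,
        # mimicking the approach to a tipping point (an assumption).
        rng = np.random.default_rng(0)
        n = 2000
        phi = np.linspace(0.2, 0.95, n)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi[t] * x[t - 1] + rng.normal()

        ac1, var = early_warning_indicators(x)
        print(ac1[:3], ac1[-3:])   # autocorrelation trends upward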

  9. Gel mesh as "brake" to slow down DNA translocation through solid-state nanopores.

    PubMed

    Tang, Zhipeng; Liang, Zexi; Lu, Bo; Li, Ji; Hu, Rui; Zhao, Qing; Yu, Dapeng

    2015-08-21

    Agarose gel is introduced onto the cis side of silicon nitride nanopores by a simple and low-cost method to slow down the speed of DNA translocation. DNA translocation speed is slowed by roughly an order of magnitude, without loss of signal-to-noise ratio, for different DNA lengths and applied voltages in gel-meshed nanopores. The presence of the gel moves the center-of-mass position of the DNA conformation further from the nanopore center, contributing to the observed slowing of translocation speed. A reduced velocity fluctuation is also noted, which is beneficial for further applications of gel-meshed nanopores. The reptation model is considered in simulation and agrees well with the experimental results.

  10. Analysis of spent fuel assay with a lead slowing down spectrometer

    SciTech Connect

    Gavron, Victor I; Smith, L. Eric; Ressler, Jennifer J

    2010-10-29

    Assay of the fissile materials in spent fuel that are produced or depleted during the operation of a reactor is of paramount importance for nuclear materials accounting, for verification of the reactor operation history, and for criticality considerations in storage. In order to prevent future proliferation following the spread of nuclear energy, we must develop accurate methods to assay large quantities of nuclear fuels. We analyze the potential of using a Lead Slowing Down Spectrometer for assaying spent fuel. We conclude that it is possible to design a system that will provide around 1% statistical precision in the determination of the {sup 239}Pu, {sup 241}Pu and {sup 235}U concentrations in a PWR spent-fuel assembly, for intermediate-to-high burnup levels, using commercial neutron sources and a system of {sup 238}U threshold fission detectors. Pending further analysis of systematic errors, it is possible that missing pins can be detected, as can asymmetry in the fuel bundle.

  11. Analysis of spent fuel assay with a lead slowing down spectrometer

    SciTech Connect

    Gavron, Victor I; Smith, L Eric; Ressler, Jennifer J

    2008-01-01

    Assay of the fissile materials in spent fuel that are produced or depleted during the operation of a reactor is of paramount importance for nuclear materials accounting, for verification of the reactor operation history, and for criticality considerations in storage. In order to prevent future proliferation following the spread of nuclear energy, we must develop accurate methods to assay large quantities of nuclear fuels. We analyze the potential of using a Lead Slowing Down Spectrometer for assaying spent fuel. We conclude that it is possible to design a system that will provide around 1% statistical precision in the determination of the {sup 239}Pu, {sup 241}Pu and {sup 235}U concentrations in a PWR spent-fuel assembly, for intermediate-to-high burnup levels, using commercial neutron sources and a system of {sup 238}U threshold fission detectors. Pending further analysis of systematic errors, it is possible that missing pins can be detected, as can asymmetry in the fuel bundle.

  12. Lead Slowing Down Spectrometry Analysis of Data from Measurements on Nuclear Fuel

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Kulisek, Jonathan A.; Danon, Yaron; Weltz, Adam; Gavron, Victor A.; Harris, Jason; Stewart, Trevor N.

    2015-01-12

    Improved non-destructive assay of isotopic masses in used nuclear fuel would be valuable for nuclear safeguards operations associated with the transport, storage and reprocessing of used nuclear fuel. Our collaboration is examining the feasibility of using lead slowing down spectrometry techniques to assay the isotopic fissile masses in used nuclear fuel assemblies. We present the application of our analysis algorithms to measurements conducted with a lead spectrometer. The measurements involved a single fresh fuel pin and discrete 239Pu and 235U samples. Across seven different configurations, we are able to describe the isotopic fissile masses with root-mean-square errors of 6.35% for 239Pu and 2.7% for 235U.

  13. Hydrologically-induced slow-down as a mechanism for tidewater glacier retreat

    NASA Astrophysics Data System (ADS)

    Hewitt, Ian

    2017-04-01

    Outlet glaciers flowing into the ocean often terminate at a calving front, whose position is sensitively determined by the balance between ice discharge and calving/terminus-melting. Rapid retreat of tidewater glaciers can be initiated when the front is perturbed from a preferred pinning point, particularly when the glacier sits in an overdeepened trough. This is believed to make certain areas of ice sheets particularly vulnerable to ice loss. A number of factors may cause a previously stable front position to become unstable, including changes in buttressing provided by an ice shelf, and changes in ocean temperature. Another possibility is that initial retreat is induced by a reduction in the supply of ice from the interior of the ice sheet. Such a reduction can naturally arise from an increase in surface melting and runoff (in the absence of accumulation changes), and this may be amplified if more efficient meltwater routing reduces basal lubrication, as has been observed in some areas of the Greenland ice sheet. Since the initiation of rapid retreat often results in an increase of ice discharge at the front (due to increased ice thickness), such a process may not be easy to detect. In this study, I employ a simplified model of an outlet glacier and its frontal behaviour to examine the extent to which hydrologically induced slow-down of the feeding ice sheet may induce (or help to induce) calving front retreat. The model builds on earlier parameterisations of grounding line fluxes, and assumes that calving occurs according to a criterion that keeps the front close to the flotation thickness. The glacier bed is assumed to be plastic. This allows for a transparent identification of the different forcing terms affecting margin position. I conclude that hydrologically-induced slow-down of ice sheets is likely to have a more significant effect on mass loss than hydrologically-induced speed-up.
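    For reference, the flotation criterion invoked above takes the standard glaciological form (notation assumed here): calving keeps the terminus thickness H close to the flotation thickness

        H_f = \frac{\rho_w}{\rho_i}\, d,

    where \rho_w and \rho_i are the densities of ocean water and ice and d is the water depth at the front. In an overdeepened trough, d increases as the front retreats, so H_f rises as well, which is what makes retreat from a pinning point self-reinforcing.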

  14. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY11 Status Report

    SciTech Connect

    Warren, Glen A.; Casella, Andrew M.; Haight, R. C.; Anderson, Kevin K.; Danon, Yaron; Hatchett, D.; Becker, Bjorn; Devlin, M.; Imel, G. R.; Beller, D.; Gavron, A.; Kulisek, Jonathan A.; Bowyer, Sonya M.; Gesh, Christopher J.; O'Donnell, J. M.

    2011-08-01

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory assay methods. This document is a progress report for FY2011 collaboration activities. Progress made by the collaboration in FY2011 continues to indicate the promise of LSDS techniques applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model demonstrated the potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space. Similar results were obtained using a perturbation approach developed by LANL. Benchmark measurements have been successfully conducted at LANL and at RPI using their respective LSDS instruments. The ISU and UNLV collaborative effort is focused on the fabrication and testing of prototype fission chambers lined with ultra-depleted 238U and 232Th; uranium deposition on a stainless steel disc using spiked U3O8 from a room-temperature ionic liquid was successful, with improved thickness obtained. In FY2012, the collaboration plans a broad array of activities. PNNL will focus on optimizing its empirical model and minimizing its reliance on calibration data, as well as continuing efforts on developing an analytical model. Additional measurements are

  15. Spines slow down dendritic chloride diffusion and affect short-term ionic plasticity of GABAergic inhibition

    NASA Astrophysics Data System (ADS)

    Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U. Valentin; Santamaria, Fidel; Jedlicka, Peter

    2016-03-01

    Cl‑ plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl‑ is not well understood. The role of spines in Cl‑ diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl‑ changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl‑ dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl‑ diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl‑ extrusion altered Cl‑ diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl‑ diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl‑ diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed.
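    A minimal way to see why side compartments slow longitudinal spread (a sketch under strong simplifying assumptions, not the authors' morphologically detailed model): treat spines as well-mixed pools that exchange Cl⁻ with the dendritic shaft at each position, so that part of a local Cl⁻ load is transiently sequestered and the shaft profile spreads more slowly than free diffusion would allow.

        import numpy as np

        # Illustrative parameters (assumptions, not fitted values).
        D = 2.0                  # um^2/ms, free Cl- diffusion coefficient
        k_in, k_out = 0.5, 0.5   # shaft <-> spine exchange rates, 1/ms
        dx, dt, nx, nt = 0.5, 0.01, 200, 5000

        shaft = np.zeros(nx)
        spine = np.zeros(nx)
        shaft[nx // 2] = 100.0   # local Cl- load in the middle of the dendrite

        for _ in range(nt):
            lap = (np.roll(shaft, 1) - 2 * shaft + np.roll(shaft, -1)) / dx**2
            exchange = k_in * shaft - k_out * spine   # net flux into spines
            shaft += dt * (D * lap - exchange)
            spine += dt * exchange

        # With k_in = 0 the peak decays noticeably faster: transient
        # sequestration in spines slows the effective longitudinal diffusion.
        print(round(shaft.max(), 3))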

  16. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells

    NASA Astrophysics Data System (ADS)

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-05-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3-4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting.

  17. Transient slowing down relaxation dynamics of the supercooled dusty plasma liquid after quenching.

    PubMed

    Su, Yen-Shuo; Io, Chong-Wai; I, Lin

    2012-07-01

    The spatiotemporal evolutions of microstructure and motion in the transient relaxation toward the steady supercooled liquid state after quenching a dusty plasma Wigner liquid, formed by charged dust particles suspended in a low pressure discharge, are experimentally investigated through direct optical microscopy. It is found that the quenched liquid slowly evolves to a colder state with more heterogeneities in structure and motion. Hopping particles and defects appear in the form of clusters with multiscale cluster size distributions. Through the structural rearrangement induced by the reduced thermal agitation of the cold thermal bath after quenching, the temporarily stored strain energy can be cascaded through the network to different newly distorted regions and dissipated after transferring to nonlinearly coupled motions at different scales. This leads to the observed self-similar multiscale slowing-down relaxation: power-law increases of structural order and structural relaxation time, similar power-law decreases of particle motion at different time scales, and stronger, slower fluctuations with increasing waiting time toward the new steady state.

  18. Exercise and disease progression in multiple sclerosis: can exercise slow down the progression of multiple sclerosis?

    PubMed Central

    Stenager, Egon

    2012-01-01

    It has been suggested that exercise (or physical activity) might have the potential to have an impact on multiple sclerosis (MS) pathology and thereby slow down the disease process in MS patients. The objective of this literature review was to identify the literature linking physical exercise (or activity) and MS disease progression. A systematic literature search was conducted in the following databases: PubMed, SweMed+, Embase, Cochrane Library, PEDro, SPORTDiscus and ISI Web of Science. Different methodological approaches to the problem have been applied including (1) longitudinal exercise studies evaluating the effects on clinical outcome measures, (2) cross-sectional studies evaluating the relationship between fitness status and MRI findings, (3) cross-sectional and longitudinal studies evaluating the relationship between exercise/physical activity and disability/relapse rate and, finally, (4) longitudinal exercise studies applying the experimental autoimmune encephalomyelitis (EAE) animal model of MS. Data from intervention studies evaluating disease progression by clinical measures (1) do not support a disease-modifying effect of exercise; however, MRI data (2), patient-reported data (3) and data from the EAE model (4) indicate a possible disease-modifying effect of exercise, but the strength of the evidence limits definite conclusions. It was concluded that some evidence supports the possibility of a disease-modifying potential of exercise (or physical activity) in MS patients, but future studies using better methodologies are needed to confirm this. PMID:22435073

  19. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells

    PubMed Central

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-01-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3–4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting. PMID:27189321

  20. Inverse patchy colloids with small patches: fluid structure and dynamical slowing down

    NASA Astrophysics Data System (ADS)

    Ferrari, Silvano; Bianchi, Emanuela; Kalyuzhnyi, Yura V.; Kahl, Gerhard

    2015-06-01

    Inverse patchy colloids (IPCs) differ from conventional patchy particles because their patches repel (rather than attract) each other and attract (rather than repel) the part of the colloidal surface that is free of patches. These particular features occur, e.g. in heterogeneously charged colloidal systems. Here we consider overall neutral IPCs carrying two, relatively small, polar patches. Previous studies of the same model under planar confinement have evidenced the formation of branched, disordered aggregates composed of ring-like structures. We investigate here the bulk behavior of the system via molecular dynamics simulations, focusing on both the structure and the dynamics of the fluid phase in a wide region of the phase diagram. Additionally, the simulation results for the static observables are compared to the Associative Percus-Yevick solution of an integral equation approach based on the multi-density Ornstein-Zernike theory. A good agreement between theoretical and numerical quantities is observed even in the region of the phase diagram where the slowing down of the dynamics occurs.

  1. Slow down of a globally neutral relativistic e⁻e⁺ beam shearing the vacuum

    NASA Astrophysics Data System (ADS)

    Alves, E. P.; Grismayer, T.; Silveirinha, M. G.; Fonseca, R. A.; Silva, L. O.

    2015-11-01

    It has recently been found that the development of electromagnetic instabilities between shearing, globally neutral polarisable dielectric slabs, separated by a nanometer-scale gap, can result in an effective non-contact friction force between the slabs, which is the classical analogue of the quantum friction effect proposed by Pendry (1997). This effect has been explored analytically in the sub-relativistic regime, where the development of unstable electromagnetic modes parallel to the direction of motion is responsible for the non-contact friction effect. We explore the interaction of a relativistic, globally neutral e⁻e⁺ beam streaming through a hollow plasma/dielectric channel in the absence of overlap (no contact). We show through analytic theory and 3D particle-in-cell simulations that this relativistic scenario excites unstable electromagnetic modes transverse to the direction of propagation. The onset of this electromagnetic instability leads to the conversion of the kinetic energy of the e⁻e⁺ beam into electric and magnetic field energy, effectively slowing down a relativistic, globally neutral body in the absence of contact. We demonstrate that this effect can be explored using beam properties that are readily available at the SLAC National Accelerator Laboratory.

  2. Spines slow down dendritic chloride diffusion and affect short-term ionic plasticity of GABAergic inhibition

    PubMed Central

    Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U. Valentin; Santamaria, Fidel; Jedlicka, Peter

    2016-01-01

    Cl− plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl− is not well understood. The role of spines in Cl− diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl− changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl− dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl− diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl− extrusion altered Cl− diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl− diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl− diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed. PMID:26987404

  3. Experimental observation of critical slowing down as an early warning of population collapse

    NASA Astrophysics Data System (ADS)

    Vorselen, Daan; Dai, Lei; Korolev, Kirill; Gore, Jeff

    2012-02-01

    Near tipping points marking population collapse or other critical transitions in complex systems, small changes in conditions can result in drastic shifts in the system state. In theoretical models it is known that early warning signals can be used to predict the approach of these tipping points (bifurcations), but little is known about how these signals can be detected in practice. Here we use the budding yeast Saccharomyces cerevisiae to study these early warning signals in controlled experimental populations. We grow yeast in the sugar sucrose, where cooperative feeding dynamics cause a fold bifurcation; falling below a critical population size results in sudden collapse. We demonstrate the experimental observation of an increase in both the size and timescale of the fluctuations of population density near this fold bifurcation. Furthermore, we test the utility of theoretically predicted warning signals by observing them in two different slowly deteriorating environments. These findings suggest that these generic indicators of critical slowing down can be useful in predicting catastrophic changes in population biology.
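    The link to critical slowing down can be made quantitative. For a fold (saddle-node) bifurcation, the leading recovery rate vanishes like the square root of the distance to the bifurcation, so the recovery time after a small perturbation diverges as

        \tau_{\mathrm{rec}} \sim |p - p_c|^{-1/2},

    where p is the control parameter (here, the environmental condition) and p_c its critical value. This is a standard bifurcation-theory result (notation assumed here), and it is why fluctuations become both larger and slower on the approach to collapse.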

  4. Can vitamin D slow down the progression of chronic kidney disease?

    PubMed

    Shroff, Rukshana; Wan, Mandy; Rees, Lesley

    2012-12-01

    Pharmacological blockade of the renin-angiotensin-aldosterone system (RAAS) is the cornerstone of renoprotective therapy, and the reduction of persistent RAAS activation is considered to be an important target in the treatment of chronic kidney disease (CKD). Vitamin D is a steroid hormone that controls a broad range of metabolic and cell regulatory functions. It acts as a transcription factor and can suppress the renin gene, thereby acting as a negative endocrine regulator of RAAS. RAAS activation can reduce renal Klotho expression, and the Klotho-fibroblast growth factor 23 interaction may further reduce the production of active vitamin D. Results from both clinical and experimental studies suggest that vitamin D therapy is associated with a reduction in blood pressure and left ventricular hypertrophy and improves cardiovascular outcomes. In addition, a reduction in angiotensin II through RAAS blockade may have anti-proteinuric and anti-fibrotic effects. Vitamin D has also been shown to modulate the immune system, regulate inflammatory responses, improve insulin sensitivity and reduce high-density lipoprotein cholesterol. Taken together, these pleiotropic effects of vitamin D may slow down the progression of CKD. In this review, we discuss the experimental and early clinical findings that suggest a renoprotective effect of vitamin D, thereby providing an additional rationale, beyond mineral metabolism, for the close monitoring of, and supplementation with, vitamin D from the earliest stages of CKD.

  5. Critical phase shifts slow down circadian clock recovery: implications for jet lag.

    PubMed

    Leloup, Jean-Christophe; Goldbeter, Albert

    2013-09-21

    Advancing or delaying the light-dark (LD) cycle perturbs the circadian clock, which eventually recovers its original phase with respect to the new LD cycle. Readjustment of the clock occurs by shifting its phase in the same (orthodromic re-entrainment) or opposite direction (antidromic re-entrainment) as the shift in the LD cycle. To investigate circadian clock recovery after phase shifts of the LD cycle we use a detailed computational model previously proposed for the cellular regulatory network underlying the mammalian circadian clock. The model predicts the existence of a sharp threshold separating orthodromic from antidromic re-entrainment. In the vicinity of this threshold, resynchronization of the clock after a phase shift markedly slows down. The type of re-entrainment, the position of the threshold and the time required for resynchronization depend on multiple factors such as the autonomous period of the clock, the direction and magnitude of the phase shift, the clock biochemical kinetic parameters, and light intensity. Partitioning the phase shift into a series of smaller phase shifts decreases the impact on the recovery of the circadian clock. We use the phase response curve to predict the location of the threshold separating orthodromic and antidromic re-entrainment after advanced or delayed phase shifts of the LD cycle. The marked increase in recovery times predicted near the threshold could be responsible for the most severe disturbances of the human circadian clock associated with jet lag.

  6. Spines slow down dendritic chloride diffusion and affect short-term ionic plasticity of GABAergic inhibition.

    PubMed

    Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U Valentin; Santamaria, Fidel; Jedlicka, Peter

    2016-03-18

    Cl(-) plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl(-) is not well understood. The role of spines in Cl(-) diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl(-) changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl(-) dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl(-) diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl(-) extrusion altered Cl(-) diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl(-) diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl(-) diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed.

  7. Exercise and disease progression in multiple sclerosis: can exercise slow down the progression of multiple sclerosis?

    PubMed

    Dalgas, Ulrik; Stenager, Egon

    2012-03-01

    It has been suggested that exercise (or physical activity) might have the potential to have an impact on multiple sclerosis (MS) pathology and thereby slow down the disease process in MS patients. The objective of this literature review was to identify the literature linking physical exercise (or activity) and MS disease progression. A systematic literature search was conducted in the following databases: PubMed, SweMed+, Embase, Cochrane Library, PEDro, SPORTDiscus and ISI Web of Science. Different methodological approaches to the problem have been applied including (1) longitudinal exercise studies evaluating the effects on clinical outcome measures, (2) cross-sectional studies evaluating the relationship between fitness status and MRI findings, (3) cross-sectional and longitudinal studies evaluating the relationship between exercise/physical activity and disability/relapse rate and, finally, (4) longitudinal exercise studies applying the experimental autoimmune encephalomyelitis (EAE) animal model of MS. Data from intervention studies evaluating disease progression by clinical measures (1) do not support a disease-modifying effect of exercise; however, MRI data (2), patient-reported data (3) and data from the EAE model (4) indicate a possible disease-modifying effect of exercise, but the strength of the evidence limits definite conclusions. It was concluded that some evidence supports the possibility of a disease-modifying potential of exercise (or physical activity) in MS patients, but future studies using better methodologies are needed to confirm this.

  8. Traffic and Environmental Cues and Slow-Down Behaviors in Virtual Driving.

    PubMed

    Hsu, Chun-Chia; Chuang, Kai-Hsiang

    2016-02-01

    This study used a driving simulator to investigate whether the presence of pedestrians, and traffic engineering designs reported to reduce overall traffic speed at intersections, can facilitate drivers' adoption of lower impact-speed behaviors at pedestrian crossings. Twenty-eight men (M age = 39.9 yr., SD = 11.5) with drivers' licenses participated. Nine studied measures were obtained from the speed profiles of each participant. A 14-km virtual road was presented to the participants. It included experimental scenarios of a base intersection, pedestrian presence, a pedestrian warning sign at and in advance of the intersection, and perceptual lane narrowing by hatching lines. Compared to the base intersection, the presence of pedestrians caused drivers to slow down earlier and reach a lower minimum speed before the pedestrian crossing. This speed behavior was not completely evident when a pedestrian warning sign was added at the intersection or when perceptual lane narrowing extended to the stop line. Additionally, installing pedestrian warning signs in advance of the intersections rather than at the intersections was associated with higher impact speeds at pedestrian crossings.

  9. Slowing-down of non-equilibrium concentration fluctuations in confinement

    NASA Astrophysics Data System (ADS)

    Giraudet, Cédric; Bataller, Henri; Sun, Yifei; Donev, Aleksandar; María Ortiz de Zárate, José; Croccolo, Fabrizio

    2015-09-01

    Fluctuations in a fluid are strongly affected by the presence of a macroscopic gradient, which makes them long-ranged and enhances their amplitude. While small-scale fluctuations exhibit diffusive lifetimes, moderate-scale fluctuations have shorter lifetimes because of gravity. In this letter we explore fluctuations of even larger size, comparable to the extent of the system in the direction of the gradient, and find experimental evidence of a dramatic slowing-down of their dynamics. We recover diffusive behavior for these strongly confined fluctuations, but with a diffusion coefficient that depends on the solutal Rayleigh number. Results from dynamic shadowgraph experiments are complemented by theoretical calculations and numerical simulations based on fluctuating hydrodynamics, and excellent agreement is found. Hence, the study of the dynamics of non-equilibrium fluctuations allows one to probe and measure the competition of physical processes such as diffusion, buoyancy and confinement, i.e. the ingredients included in the Rayleigh number, which is the control parameter of our system.
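    In bulk, a standard fluctuating-hydrodynamics result (quoted here for orientation; notation assumed) gives the decay time of a non-equilibrium concentration fluctuation of wave number q as

        \tau(q) = \frac{1}{D q^{2}\left[\,1 + (q_{\mathrm{ro}}/q)^{4}\right]},

    where D is the mass diffusion coefficient and q_{ro} is the gravitational roll-off wave number: for q >> q_{ro} the lifetime is diffusive, 1/(Dq^2), while gravity shortens the lifetime of larger-scale (smaller-q) fluctuations. The confinement effects reported above modify this picture at even smaller q, where the finite extent of the cell dominates.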

  10. Relaxation time and critical slowing down of a spin-torque oscillator

    NASA Astrophysics Data System (ADS)

    Taniguchi, Tomohiro; Ito, Takahiro; Tsunegi, Sumito; Kubota, Hitoshi; Utsumi, Yasuhiro

    2017-07-01

    The relaxation phenomena of spin-torque oscillators consisting of nanostructured ferromagnets are interesting research targets in magnetism. The relaxation time of a spin-torque oscillator moving from one self-oscillation state to another is investigated theoretically. By solving the Landau-Lifshitz-Gilbert equation both analytically and numerically, it is shown that the oscillator relaxes to the self-oscillation state exponentially within a few nanoseconds, except when the magnetization is close to a critical point. The relaxation rate, which is the inverse of the relaxation time, is proportional to the current. On the other hand, a critical slowing down appears near the critical point, where the relaxation is inversely proportional to time, and the relaxation time becomes on the order of hundreds of nanoseconds. These conclusions are primarily obtained for a spin-torque oscillator consisting of a perpendicularly magnetized free layer and an in-plane magnetized pinned layer, and are further developed for application to arbitrary types of spin-torque oscillators.
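    The equation referred to above, with a Slonczewski spin-transfer term included, is commonly written as (notation assumed here)

        \frac{d\mathbf{m}}{dt} = -\gamma\, \mathbf{m}\times\mathbf{H}_{\mathrm{eff}} + \alpha\, \mathbf{m}\times\frac{d\mathbf{m}}{dt} - \gamma H_{s}\, \mathbf{m}\times(\mathbf{m}\times\mathbf{p}),

    where \mathbf{m} is the unit magnetization of the free layer, \mathbf{H}_{\mathrm{eff}} the effective field, \alpha the Gilbert damping, \mathbf{p} the pinned-layer direction, and H_s the spin-torque strength, proportional to the applied current; the proportionality of the relaxation rate to the current quoted above enters through H_s.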

  11. Non-destructive Assay Measurements Using the RPI Lead Slowing Down Spectrometer

    SciTech Connect

    Becker, Bjorn; Weltz, Adam; Kulisek, Jonathan A.; Thompson, J. T.; Thompson, N.; Danon, Yaron

    2013-10-01

    The use of a Lead Slowing-Down Spectrometer (LSDS) is considered as a possible option for non-destructive assay of the fissile material in used nuclear fuel. The primary objective is to quantify the 239Pu and 235U fissile content via a direct measurement, distinguishing them through their characteristic fission spectra in the LSDS. In this paper, we present several assay measurements performed at the Rensselaer Polytechnic Institute (RPI) to demonstrate the feasibility of such a method and to provide benchmark experiments for Monte Carlo calculations of the assay system. A fresh UOX fuel rod from the RPI Criticality Research Facility, a 239PuBe source and several highly enriched 235U discs were assayed in the LSDS. The characteristic fission spectra were measured with 238U and 232Th threshold fission chambers, which are only sensitive to fission neutrons with energy above the threshold. Despite the constant neutron and gamma background from the PuBe source and the intense interrogation neutron flux, the LSDS system was able to measure the characteristic 235U and 239Pu responses. All measurements were compared to Monte Carlo simulations. It was shown that the available simulation tools and models are well suited to simulate the assay, and that it is possible to calculate the absolute count rate in all investigated cases.

  12. Assaying Used Nuclear Fuel Assemblies Using Lead Slowing-Down Spectroscopy and Singular Value Decomposition

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.

    2013-04-01

    This study investigates the use of a Lead Slowing-Down Spectrometer (LSDS) for the direct and independent measurement of fissile isotopes in light-water nuclear reactor fuel assemblies. The current study applies MCNPX, a Monte Carlo radiation transport code, to simulate the assay of used nuclear fuel assemblies in the LSDS. An empirical model has been developed based on the calibration of the LSDS to responses generated from the simulated assay of six well-characterized fuel assemblies. The effects of self-shielding are taken into account by using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the self-shielding functions from the assay of assemblies in the calibration set. The performance of the empirical algorithm was tested on version 1 of the Next-Generation Safeguards Initiative (NGSI) used fuel library consisting of 64 assemblies, as well as on a set of 27 diversion assemblies, both of which were developed by Los Alamos National Laboratory. The potential for direct and independent assay of the sum of the masses of Pu-239 and Pu-241 to within 2%, on average, has been demonstrated.
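    The SVD step described above can be sketched in a few lines (an illustration with placeholder data, not the authors' code): the columns of a matrix hold the self-shielding functions of the calibration assemblies, and the leading left-singular vectors serve as the empirical basis in which any measured self-shielding function is expanded.

        import numpy as np

        rng = np.random.default_rng(1)
        n_energy, n_assemblies = 300, 6           # energy bins x calibration set
        F = rng.random((n_energy, n_assemblies))  # placeholder self-shielding functions

        U, s, Vt = np.linalg.svd(F, full_matrices=False)
        k = 3                        # number of leading modes kept (an empirical choice)
        basis = U[:, :k]             # empirical basis vectors

        # A measured self-shielding function f is then represented by a few
        # coefficients, which enter the calibration fit in place of f itself.
        f = F[:, 0]
        coeffs = basis.T @ f
        print(coeffs)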

  13. Slowing Down the Presentation of Facial and Body Movements Enhances Imitation Performance in Children with Severe Autism

    ERIC Educational Resources Information Center

    Laine, France; Rauzy, Stephane; Tardif, Carole; Gepner, Bruno

    2011-01-01

    Imitation deficits observed among individuals with autism could be partly explained by the excessive speed of biological movements to be perceived and then reproduced. Along with this assumption, slowing down the speed of presentation of these movements might improve their imitative performances. To test this hypothesis, 19 children with autism,…

  14. Measurements of the fast ion slowing-down times in the HL-2A tokamak and comparison to classical theory

    SciTech Connect

    Zhang, Y. P.; Liu, Yi; Yuan, G. L.; Yang, J. W.; Song, X. Y.; Song, X. M.; Cao, J. Y.; Lei, G. J.; Wei, H. L.; Li, Y. G.; Shi, Z. B.; Li, X.; Yan, L. W.; Yang, Q. W.; Duan, X. R.; Isobe, M.; Collaboration: HL-2A Team

    2012-11-15

    Physics related to fast ions in magnetically confined fusion plasmas is a very important issue, since these particles will play an important role in future burning plasmas. Indeed, they will act as the primary heating source and will sustain the self-ignited condition. To measure the fast ion slowing-down times in magnetohydrodynamic-quiescent plasmas in different scenarios, very short pulses of a deuterium neutral beam, so-called 'blips,' with a duration of about 5 ms were tangentially co-injected into a deuterium plasma at the HuanLiuqi-2A (commonly referred to as HL-2A) tokamak [L. W. Yan, Nucl. Fusion 51, 094016 (2011)]. The decay rate of 2.45 MeV D-D fusion neutrons produced by beam-plasma reactions following neutral beam termination was measured by means of a {sup 235}U fission chamber. Experimental results were compared with those predicted by a classical slowing-down model. These results show that the fast ions are well confined with a peaked profile and that the ions slow down classically without significant loss in the HL-2A tokamak. Moreover, it has been observed that during electron cyclotron resonance heating the fast ions have a longer slowing-down time and the neutron emission rate decay time becomes longer.
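    The classical model referred to here is Spitzer slowing-down of fast ions on thermal electrons. A commonly quoted form of the slowing-down time (NRL formulary convention, with T_e in eV and n_e in cm^{-3}; quoted from memory, so treat the prefactor as indicative) is

        \tau_{s} \approx 6.27\times10^{8}\, \frac{A_b\, T_e^{3/2}}{Z_b^{2}\, n_e\, \ln\Lambda}\ \mathrm{s},

    where A_b and Z_b are the mass number and charge number of the beam ion and \ln\Lambda is the Coulomb logarithm. The T_e^{3/2} scaling is consistent with the longer slowing-down times observed during electron cyclotron resonance heating, which raises the electron temperature.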

  15. Climatic Slow-down of the Pamir-Karakoram-Himalaya Glaciers Over the Last 25 Years

    NASA Astrophysics Data System (ADS)

    Dehecq, A.; Gourmelen, N.; Trouvé, E.

    2015-12-01

    Climate warming over the 20th century has caused drastic changes in mountain glaciers globally, and in the Himalayan glaciers in particular. The stakes are high; glaciers and ice caps are the largest contributor to the increase in the mass of the world's oceans, and the Himalayas play a key role in the hydrology of the region, impacting on the economy, food safety and flood risk. Partial monitoring of the Himalayan glaciers has revealed a contrasting picture; while many of the Himalayan glaciers are retreating, locally stable or advancing glaciers have also been observed in this region. Several studies based on field measurements or remote sensing have shown a dominant slow-down of mountain glaciers globally in response to these changes, but they are restricted to a few glaciers or small regions, and none has analysed the dynamic response of glaciers to climate changes at regional scales. Here we present a region-wide analysis of annual glacier flow velocity covering the Pamir-Karakoram-Himalaya region, obtained from the analysis of the entire archive of Landsat data. Over 90% of the ice-covered regions, as defined by the Randolph Glacier Inventory, are measured, with a precision on the retrieved velocity of the order of 4 m/yr. The change in velocities over the last 25 years will be analysed with reference to regional glacier mass balance and topographic characteristics. We show that the first-order temporal evolution of glacier flow mirrors the pattern of glacier mass balance. We observe a general decrease of ice velocity in regions of known ice mass loss, and a more complex pattern consisting of mixed acceleration and decrease of ice velocity in regions that are known to be affected by stable mass balance and surge-like behavior.

  16. Climatic Slow-down of the Pamir-Karakoram-Himalaya Glaciers Over the Last 25 Years

    NASA Astrophysics Data System (ADS)

    Dumont, M.; Brun, E.; Picard, G.; Michou, M.; Libois, Q.; Petit, J. R.; Morin, S.; Josse, B.

    2014-12-01

    Climate warming over the 20th century has caused drastic changes in mountain glaciers globally, and in the Himalayan glaciers in particular. The stakes are high; glaciers and ice caps are the largest contributor to the increase in the mass of the world's oceans, and the Himalayas play a key role in the hydrology of the region, impacting on the economy, food safety and flood risk. Partial monitoring of the Himalayan glaciers has revealed a contrasting picture; while many of the Himalayan glaciers are retreating, locally stable or advancing glaciers have also been observed in this region. Several studies based on field measurements or remote sensing have shown a dominant slow-down of mountain glaciers globally in response to these changes, but they are restricted to a few glaciers or small regions, and none has analysed the dynamic response of glaciers to climate changes at regional scales. Here we present a region-wide analysis of annual glacier flow velocity covering the Pamir-Karakoram-Himalaya region, obtained from the analysis of the entire archive of Landsat data. Over 90% of the ice-covered regions, as defined by the Randolph Glacier Inventory, are measured, with a precision on the retrieved velocity of the order of 4 m/yr. The change in velocities over the last 25 years will be analysed with reference to regional glacier mass balance and topographic characteristics. We show that the first-order temporal evolution of glacier flow mirrors the pattern of glacier mass balance. We observe a general decrease of ice velocity in regions of known ice mass loss, and a more complex pattern consisting of mixed acceleration and decrease of ice velocity in regions that are known to be affected by stable mass balance and surge-like behavior.

  17. Slowing down fat digestion and absorption by an oxadiazolone inhibitor targeting selectively gastric lipolysis.

    PubMed

    Point, Vanessa; Bénarouche, Anais; Zarrillo, Julie; Guy, Alexandre; Magnez, Romain; Fonseca, Laurence; Raux, Brigitt; Leclaire, Julien; Buono, Gérard; Fotiadu, Frédéric; Durand, Thierry; Carrière, Frédéric; Vaysse, Carole; Couëdelo, Leslie; Cavalier, Jean-François

    2016-11-10

    Based on a previous study and in silico molecular docking experiments, we have designed and synthesized a new series of ten 5-Alkoxy-N-3-(3-PhenoxyPhenyl)-1,3,4-Oxadiazol-2(3H)-one derivatives (RmPPOX). These molecules were further evaluated as selective and potent inhibitors of mammalian digestive lipases: purified dog gastric lipase (DGL) and guinea pig pancreatic lipase related protein 2 (GPLRP2), as well as porcine (PPL) and human (HPL) pancreatic lipases contained in porcine pancreatic extracts (PPE) and human pancreatic juices (HPJ), respectively. These compounds were found to strongly discriminate classical pancreatic lipases (poorly inhibited) from gastric lipase (fully inhibited). Among them, the 5-(2-(Benzyloxy)ethoxy)-3-(3-PhenoxyPhenyl)-1,3,4-Oxadiazol-2(3H)-one (BemPPOX) was identified as the most potent inhibitor of DGL, even more active than the FDA-approved drug Orlistat. BemPPOX and Orlistat were further compared in vitro in the course of test meal digestion, and in vivo with a mesenteric lymph duct cannulated rat model to evaluate their respective impacts on fat absorption. While Orlistat inhibited both gastric and duodenal lipolysis and drastically reduced fat absorption in rats, BemPPOX showed a specific action on gastric lipolysis that slowed down the overall lipolysis process and led to a subsequent reduction of around 55% of the intestinal absorption of fatty acids compared to controls. All these data promote BemPPOX as a potent candidate to efficiently regulate gastrointestinal lipolysis, to investigate its link with satiety mechanisms, and therefore to develop new strategies to "fight against obesity".

  18. Do calcium buffers always slow down the propagation of calcium waves?

    PubMed

    Tsai, Je-Chiang

    2013-12-01

    Calcium buffers are large proteins that act as binding sites for free cytosolic calcium. Since a large fraction of cytosolic calcium is bound to calcium buffers, calcium waves are widely observed under the condition that free cytosolic calcium is heavily buffered. In addition, all physiological buffered excitable systems contain multiple buffers with different affinities. It is thus important to understand the properties of waves in excitable systems with the inclusion of buffers. There is an ongoing controversy about whether or not the addition of calcium buffers into the system always slows down the propagation of calcium waves. To solve this controversy, we incorporate the buffering effect into the generic excitable system, the FitzHugh-Nagumo model, to get the buffered FitzHugh-Nagumo model, and then study the effect of the added buffer with large diffusivity on traveling waves of such a model in one spatial dimension. We can find a critical dissociation constant K = K_a, characterized by the system excitability parameter a, such that calcium buffers can be classified into two types: weak buffers (K ∈ (K_a, ∞)) and strong buffers (K ∈ (0, K_a)). We analytically show that the addition of weak buffers, or of strong buffers with total concentration b_0^{(1)} below some critical total concentration b_{0,c}^{(1)}, can generate a traveling wave of the resulting system which propagates faster than that of the original system, provided that the diffusivity D_1 of the added buffers is sufficiently large. Further, the magnitude of the wave speed of traveling waves of the resulting system is proportional to √D_1 as D_1 → ∞. In contrast, the addition of strong buffers with total concentration b_0^{(1)} > b_{0,c}^{(1)} may not be able to support the formation of a biologically acceptable wave, provided that the diffusivity D_1 of the added buffers is sufficiently large.
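    A buffered FitzHugh-Nagumo system of the kind studied here can be written as follows (a sketch of the model class with notation assumed here, not copied from the paper): the excitable variable u binds to a diffusible buffer of total concentration b_0^{(1)}, with b the bound fraction,

        u_t = u_{xx} + u(u - a)(1 - u) - w + k_{-} b - k_{+} u\,(b_0^{(1)} - b),
        b_t = D_1 b_{xx} + k_{+} u\,(b_0^{(1)} - b) - k_{-} b,
        w_t = \varepsilon (u - \gamma w),

    with dissociation constant K = k_{-}/k_{+} and buffer diffusivity D_1. The weak/strong classification above then compares K with the excitability-dependent threshold K_a, and the √D_1 speed scaling reflects transport of the bound species by the fast-diffusing buffer field.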

  19. Slowing down the retreat of the Morteratsch glacier, Switzerland, by artificially produced summer snow

    NASA Astrophysics Data System (ADS)

    Oerlemans, Johannes; Keller, Felix; Haag, Martin

    2017-04-01

    Many large valley glaciers in the world are retreating at historically unprecedented rates. Also in the Alps, where warming over the past decades has been more than twice as large as the global mean, all major glaciers have retreated over distances of several kilometers over the past hundred years. The Morteratsch Glacier, Pontresina, Switzerland, is a major touristic attraction. Due to the strong retreat, the lowest part of the glacier is getting out of sight from the gravel road that provided direct access to the glacier front. The Community of Pontresina has commissioned a preparatory study to find out if it is possible to slow down the retreat of the Morteratsch Glacier in an environmentally friendly way. In this article we report on the outcome of such a study, based on a modelling approach. Our analysis is based on a 20-year weather station record from the lower part of the glacier, combined with calculations with an ice flow model. This model has been carefully calibrated against the historical glacier length record, to ensure an optimal initial state for projections into the future. We arrive at the conclusion that producing summer snow in the ablation zone over a larger area (typically 0.5 to 1 km²) is the best option, and may have a significant effect on the rate of retreat on a timescale of decades. We consider three scenarios of climate change: (i) no change, (ii) a rise of the Equilibrium Line Altitude (ELA) by 1 m/yr, and (iii) a rise of the ELA by 2 m/yr. Projections of glacier length are made until the year 2100. It takes about 10 years before snow deposition in the higher ablation zone starts to affect the position of the glacier snout. The difference in glacier length between the snow and no-snow experiments becomes 400 to 500 m within two decades.

  20. Does time ever fly or slow down? The difficult interpretation of psychophysical data on time perception

    PubMed Central

    García-Pérez, Miguel A.

    2014-01-01

    Time perception is studied with subjective or semi-objective psychophysical methods. With subjective methods, observers provide quantitative estimates of duration and data depict the psychophysical function relating subjective duration to objective duration. With semi-objective methods, observers provide categorical or comparative judgments of duration and data depict the psychometric function relating the probability of a certain judgment to objective duration. Both approaches are used to study whether subjective and objective time run at the same pace or whether time flies or slows down under certain conditions. We analyze theoretical aspects affecting the interpretation of data gathered with the most widely used semi-objective methods, including single-presentation and paired-comparison methods. For this purpose, a formal model of psychophysical performance is used in which subjective duration is represented via a psychophysical function and the scalar property. This provides the timing component of the model, which is invariant across methods. A decisional component that varies across methods reflects how observers use subjective durations to make judgments and give the responses requested under each method. Application of the model shows that psychometric functions in single-presentation methods are uninterpretable because the various influences on observed performance are inextricably confounded in the data. In contrast, data gathered with paired-comparison methods permit separating out those influences. Prevalent approaches to fitting psychometric functions to data are also discussed and shown to be inconsistent with widely accepted principles of time perception, implicitly assuming instead that subjective time equals objective time and that observed differences across conditions do not reflect differences in perceived duration but criterion shifts. These analyses prompt evidence-based recommendations for best methodological practice in studies on time perception.
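    To make the timing and decisional components concrete, a paired-comparison model of this type (a sketch under standard assumptions, not necessarily the paper's exact formulation) represents the subjective durations of the two intervals as independent Gaussians whose standard deviation grows in proportion to the mean (the scalar property, \sigma_i = \gamma\,\mu_i), so that the probability of judging the second interval longer is

        P(\text{second judged longer}) = \Phi\!\left(\frac{\mu_2 - \mu_1}{\gamma\sqrt{\mu_1^{2} + \mu_2^{2}}}\right),

    where \mu_i is the subjective duration of interval i (given by the psychophysical function) and \Phi is the standard normal distribution function. Distortions of perceived duration shift the \mu_i, whereas response biases add a decisional criterion to the argument; the point above is that only paired-comparison designs allow these two influences to be separated.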

  1. Nitric oxide acts as a slow-down and search signal in developing neurites.

    PubMed

    Trimm, Kevin R; Rehder, Vincent

    2004-02-01

    Nitric oxide (NO) has been demonstrated to act as a signaling molecule during neuronal development, but its precise function is unclear. Here we investigate whether NO might function at the neuronal growth cone to affect growth cone motility. We have previously demonstrated that growth cones of identified neurons from the snail Helisoma trivolvis show a rapid and transient increase in filopodial length in response to NO, which was regulated by soluble guanylyl cyclase (sGC) [S. Van Wagenen and V. Rehder (1999) J. Neurobiol., 39, 168-185]. Because in vivo studies have demonstrated that growth cones have longer filopodia and advance more slowly in regions where pathfinding decisions are being made, this study aimed to establish whether NO could function as a combined 'slow-down and search signal' for growth cones by decreasing neurite outgrowth. In the presence of the NO donor NOC-7, neurites of B5 neurons showed a concentration-dependent effect on neurite outgrowth, ranging from slowing at low, stopping at intermediate and collapsing at high concentrations. The effects of the NO donor were mimicked by directly activating sGC with YC-1, or by increasing its product with 8-bromo-cGMP. In addition, blocking sGC in the presence of NO with NS2028 blocked the effect of NO, suggesting that NO affected outgrowth via sGC. Ca2+ imaging of growth cones with Fura-2 indicated that [Ca2+]i increased transiently in the presence of NOC-7. These results support the hypothesis that NO can function as a potent slow/stop signal for developing neurites. When coupled with transient filopodia elongation, this phenomenon emulates growth cone searching behavior.

  2. Does time ever fly or slow down? The difficult interpretation of psychophysical data on time perception.

    PubMed

    García-Pérez, Miguel A

    2014-01-01

    Time perception is studied with subjective or semi-objective psychophysical methods. With subjective methods, observers provide quantitative estimates of duration and data depict the psychophysical function relating subjective duration to objective duration. With semi-objective methods, observers provide categorical or comparative judgments of duration and data depict the psychometric function relating the probability of a certain judgment to objective duration. Both approaches are used to study whether subjective and objective time run at the same pace or whether time flies or slows down under certain conditions. We analyze theoretical aspects affecting the interpretation of data gathered with the most widely used semi-objective methods, including single-presentation and paired-comparison methods. For this purpose, a formal model of psychophysical performance is used in which subjective duration is represented via a psychophysical function and the scalar property. This provides the timing component of the model, which is invariant across methods. A decisional component that varies across methods reflects how observers use subjective durations to make judgments and give the responses requested under each method. Application of the model shows that psychometric functions in single-presentation methods are uninterpretable because the various influences on observed performance are inextricably confounded in the data. In contrast, data gathered with paired-comparison methods permit separating out those influences. Prevalent approaches to fitting psychometric functions to data are also discussed and shown to be inconsistent with widely accepted principles of time perception, implicitly assuming instead that subjective time equals objective time and that observed differences across conditions do not reflect differences in perceived duration but criterion shifts. These analyses prompt evidence-based recommendations for best methodological practice in studies on time perception.

  3. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY12 Status Report

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Siciliano, Edward R.; Warren, Glen A.

    2012-09-28

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory methods. This document is a progress report for FY2012 PNNL analysis and algorithm development. Progress made by PNNL in FY2012 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel assemblies. PNNL further refined the semi-empirical model developed in FY2011 based on singular value decomposition (SVD) to numerically account for the effects of self-shielding. The average uncertainty in the Pu mass across the NGSI-64 fuel assemblies was shown to be less than 3% using only six calibration assemblies with a 2% uncertainty in the isotopic masses. When calibrated against the six NGSI-64 fuel assemblies, the algorithm was able to determine the total Pu mass within <2% uncertainty for the 27 diversion cases also developed under NGSI. Two purely empirical algorithms were developed that do not require the use of Pu isotopic fission chambers. The semi-empirical and purely empirical algorithms were successfully tested using MCNPX simulations as well as applied to experimental data measured by RPI using their LSDS. The algorithms were able to describe the 235U masses of the RPI measurements with an average uncertainty of 2.3%. Analyses were conducted that provided valuable insight with regard to design requirements (e

  4. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.

    2011-09-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space was demonstrated. Also, in FY2011, PNNL continued to develop an analytical model. Such efforts included adding six more non-fissile absorbers to the analytical shielding function and accounting for the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as the pure

  5. HF(v′ = 3) forward scattering in the F + H2 reaction: Shape resonance and slow-down mechanism

    PubMed Central

    Wang, Xingan; Dong, Wenrui; Qiu, Minghui; Ren, Zefeng; Che, Li; Dai, Dongxu; Wang, Xiuyan; Yang, Xueming; Sun, Zhigang; Fu, Bina; Lee, Soo-Y.; Xu, Xin; Zhang, Dong H.

    2008-01-01

    Crossed molecular beam experiments and accurate quantum dynamics calculations have been carried out to address the long standing and intriguing issue of the forward scattering observed in the F + H2 → HF(v′ = 3) + H reaction. Our study reveals that forward scattering in the reaction channel is not caused by Feshbach or dynamical resonances as in the F + H2 → HF(v′ = 2) + H reaction. It is caused predominantly by the slow-down mechanism over the centrifugal barrier in the exit channel, with some small contribution from the shape resonance mechanism in a very small collision energy regime slightly above the HF(v′ = 3) threshold. Our analysis also shows that forward scattering caused by dynamical resonances can very likely be accompanied by forward scattering in a different product vibrational state caused by a slow-down mechanism. PMID:18434547

  6. HF(v' = 3) forward scattering in the F + H2 reaction: shape resonance and slow-down mechanism.

    PubMed

    Wang, Xingan; Dong, Wenrui; Qiu, Minghui; Ren, Zefeng; Che, Li; Dai, Dongxu; Wang, Xiuyan; Yang, Xueming; Sun, Zhigang; Fu, Bina; Lee, Soo-Y; Xu, Xin; Zhang, Dong H

    2008-04-29

    Crossed molecular beam experiments and accurate quantum dynamics calculations have been carried out to address the long standing and intriguing issue of the forward scattering observed in the F + H(2) --> HF(v' = 3) + H reaction. Our study reveals that forward scattering in the reaction channel is not caused by Feshbach or dynamical resonances as in the F + H(2) --> HF(v' = 2) + H reaction. It is caused predominantly by the slow-down mechanism over the centrifugal barrier in the exit channel, with some small contribution from the shape resonance mechanism in a very small collision energy regime slightly above the HF(v' = 3) threshold. Our analysis also shows that forward scattering caused by dynamical resonances can very likely be accompanied by forward scattering in a different product vibrational state caused by a slow-down mechanism.

  7. Lack of Critical Slowing Down Suggests that Financial Meltdowns Are Not Critical Transitions, yet Rising Variability Could Signal Systemic Risk.

    PubMed

    Guttal, Vishwesha; Raghavendra, Srinivas; Goel, Nikunj; Hoarau, Quentin

    2016-01-01

    Complex-systems-inspired analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms.
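
    A minimal sketch of the two standard early-warning indicators at issue above, rolling variance and lag-1 autocorrelation, computed over a synthetic series whose perturbation strength rises gradually; the window length and the series itself are illustrative choices, not the paper's data:

      # Sketch: sliding-window early-warning indicators on a synthetic series.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 2000
      noise_strength = np.linspace(0.5, 2.0, n)           # rising perturbations
      x = np.cumsum(noise_strength * rng.normal(size=n))  # random-walk-like series
      returns = np.diff(x)

      def rolling_indicators(y, window=250):
          var, ac1 = [], []
          for i in range(len(y) - window):
              w = y[i:i + window]
              var.append(w.var())
              ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])  # lag-1 autocorrelation
          return np.array(var), np.array(ac1)

      var, ac1 = rolling_indicators(returns)
      print("variance trend :", np.polyfit(np.arange(len(var)), var, 1)[0])
      print("lag-1 AC trend :", np.polyfit(np.arange(len(ac1)), ac1, 1)[0])
      # Rising variance with flat lag-1 autocorrelation is the signature the
      # authors report: increased variability without critical slowing down.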

  8. Lack of Critical Slowing Down Suggests that Financial Meltdowns Are Not Critical Transitions, yet Rising Variability Could Signal Systemic Risk

    PubMed Central

    Hoarau, Quentin

    2016-01-01

    Complex-systems-inspired analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms. PMID:26761792

  9. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY12 Status Report

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, A.; Haight, R. C.; Harris, Jason; Imel, G. R.; Kulisek, Jonathan A.; O'Donnell, J. M.; Stewart, T.; Weltz, Adam

    2012-10-01

    Executive Summary The Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign is supporting a multi-institutional collaboration to study the feasibility of using Lead Slowing Down Spectroscopy (LSDS) to conduct direct, independent and accurate assay of fissile isotopes in used fuel assemblies. The collaboration consists of Pacific Northwest National Laboratory (PNNL), Los Alamos National Laboratory (LANL), Rensselaer Polytechnic Institute (RPI), and Idaho State University (ISU). There are three main challenges to implementing LSDS to assay used fuel assemblies. These challenges are the development of an algorithm for interpreting the data with an acceptable accuracy for the fissile masses, the development of suitable detectors for the technique, and the experimental benchmarking of the approach. This report is a summary of the progress in these areas made by the collaboration during FY2012. Significant progress was made on the project in FY2012. Extensive characterization of a “semi-empirical” algorithm was conducted. For example, we studied the impact on the accuracy of this algorithm of minimizing the calibration set, of uncertainties in the calibration masses, and of the choice of time window. Issues such as lead size, number of required neutrons, placement of the neutron source and the impact of cadmium around the detectors were also studied. In addition, new algorithms were developed that do not require the use of plutonium fission chambers. These algorithms were applied to measurement data taken by RPI and shown to determine the 235U mass within 4%. For detectors, a new concept for a fast neutron detector involving 4He recoil from neutron scattering was investigated. The detector has the potential to provide a couple of orders of magnitude more sensitivity than 238U fission chambers. Progress was also made on the more conventional approach of using 232Th fission chambers as fast neutron detectors. For

  10. Toroidal Alfvénic Eigenmodes Driven by Energetic Particles with Maxwell and Slowing-down Distributions

    NASA Astrophysics Data System (ADS)

    Hou, Yawei; Zhu, Ping; Zou, Zhihui; Kim, Charlson C.; Hu, Zhaoqing; Wang, Zhengxiong

    2016-10-01

    The energetic-particle (EP) driven toroidal Alfvén eigenmodes (TAEs) in a circular-shaped large aspect ratio tokamak are studied using the hybrid kinetic-MHD model in the NIMROD code, where the EPs are advanced using the δf particle-in-cell (PIC) method and their kinetic effects are coupled to the bulk plasma through moment closures. Two initial distributions of EPs, Maxwell and slowing-down, are considered. The influence of EP parameters, including density, temperature and density gradient, on the frequency and the growth rate of TAEs is obtained and benchmarked with theory and gyrokinetic simulations for the Maxwell distribution, with good agreement. When the density and temperature of EPs are above certain thresholds, the transition from TAE to energetic particle modes (EPM) occurs and the mode structure also changes. Comparisons between Maxwell and slowing-down distributions in terms of EP-driven TAEs and EPMs will also be presented and discussed. Supported by the National Magnetic Confinement Fusion Science Program of China Grant Nos. 2014GB124002 and 2015GB101004, and the Natural Science Foundation of China Grant No. 11205194.

  11. Slow-down of 13C spin diffusion in organic solids by fast MAS: a CODEX NMR Study.

    PubMed

    Reichert, D; Bonagamba, T J; Schmidt-Rohr, K

    2001-07-01

    One- and two-dimensional 13C exchange nuclear magnetic resonance experiments under magic-angle spinning (MAS) can provide detailed information on slow segmental reorientations and chemical exchange in organic solids, including polymers and proteins. However, observations of dynamics on the time scale of seconds or longer are hampered by the competing process of dipolar 13C spin exchange (spin diffusion). In this Communication, we show that fast MAS can significantly slow down the dipolar spin exchange effect for unprotonated carbon sites. The exchange is measured quantitatively using the centerband-only detection of exchange technique, which enables the detection of exchange at any spinning speed, even in the absence of changes of isotropic chemical shifts. For chemically equivalent unprotonated 13C sites, the dipolar spin exchange rate is found to decrease slightly less than proportionally with the sample-rotation frequency, between 8 and 28 kHz. In the same range, the dipolar spin exchange rate for a glassy polymer with an inhomogeneously broadened MAS line decreases by a factor of 10. For methylene groups, no or only a minor slow-down of the exchange rate is found.

  12. The Widom-Rowlinson mixture on a sphere: elimination of exponential slowing down at first-order phase transitions.

    PubMed

    Fischer, T; Vink, R L C

    2010-03-17

    Computer simulations of first-order phase transitions using 'standard' toroidal boundary conditions are generally hampered by exponential slowing down. This is partly due to interface formation, and partly due to shape transitions. The latter occur when droplets become large such that they self-interact through the periodic boundaries. On a spherical simulation topology, however, shape transitions are absent. We expect that by using an appropriate bias function, exponential slowing down can be largely eliminated. In this work, these ideas are applied to the two-dimensional Widom-Rowlinson mixture confined to the surface of a sphere. Indeed, on the sphere, we find that the number of Monte Carlo steps needed to sample a first-order phase transition does not increase exponentially with system size, but rather as a power law τ ∝ V^α, with α ≈ 2.5 and V the system area. This is remarkably close to a random walk, for which α_RW = 2. The benefit of this improved scaling behavior for biased sampling methods, such as the Wang-Landau algorithm, is investigated in detail.
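
    A minimal sketch of extracting the exponent in the reported scaling law τ ∝ V^α by a log-log linear fit; the (V, τ) pairs below are synthetic stand-ins for measured sampling times:

      # Sketch: estimating alpha in tau ~ V**alpha from (area, tau) pairs.
      import numpy as np

      V = np.array([1e2, 2e2, 4e2, 8e2, 1.6e3, 3.2e3])   # system areas
      tau = 5.0 * V**2.5 * np.exp(np.random.default_rng(2).normal(0, 0.05, V.size))

      alpha, _ = np.polyfit(np.log(V), np.log(tau), 1)   # slope of log-log fit
      print(f"alpha = {alpha:.2f}  (a random walk would give 2)")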

  13. MicroRNA-124 slows down the progression of Huntington's disease by promoting neurogenesis in the striatum.

    PubMed

    Liu, Tian; Im, Wooseok; Mook-Jung, Inhee; Kim, Manho

    2015-05-01

    MicroRNA-124 contributes to neurogenesis through regulating its targets, but its expression both in the brain of Huntington's disease mouse models and patients is decreased. However, the effects of microRNA-124 on the progression of Huntington's disease have not been reported. Results from this study showed that microRNA-124 increased the latency to fall for each R6/2 Huntington's disease transgenic mouse in the rotarod test. 5-Bromo-2'-deoxyuridine (BrdU) staining of the striatum shows an increase in neurogenesis. In addition, brain-derived neurotrophic factor and peroxisome proliferator-activated receptor gamma coactivator 1-alpha protein levels in the striatum were increased and SRY-related HMG box transcription factor 9 protein level was decreased. These findings suggest that microRNA-124 slows down the progression of Huntington's disease possibly through its important role in neuronal differentiation and survival.

  14. Ultrafast Measurement of Critical Slowing Down of Hole-Spin Relaxation in Ferromagnetic GaMnAs

    NASA Astrophysics Data System (ADS)

    Patz, Aaron; Li, Tianqi; Perakis, Ilias; Liu, Xinyu; Furdyna, Jacek; Wang, Jigang

    2011-03-01

    We have studied ultrafast photoinduced hole spin relaxation in GaMnAs via degenerate ultrafast magneto-optical Kerr spectroscopy. Near-infrared pump pulses strongly excite the sample, and probe pulses at the same photon energy reveal subpicosecond demagnetization accompanied by energy and spin relaxation of holes manifesting themselves as a fast (~ 200 fs) and a slow (ps) recovery of transient MOKE signals. By carefully analyzing the temporal profiles at different temperatures, we are able to isolate femtosecond hole spin relaxation processes, which are subject to a critical slowing down near the critical temperature of 77K. These results demonstrate a new spectroscopy tool to study the highly elusive hole spin relaxation processes in heavily-doped, correlated spin systems, and have important implications for future applications of these materials in spintronics and magnetic-photonic-electronic multifunctional devices.

  15. Causality-driven slow-down and speed-up of diffusion in non-Markovian temporal networks.

    PubMed

    Scholtes, Ingo; Wider, Nicolas; Pfitzner, René; Garas, Antonios; Tessone, Claudio J; Schweitzer, Frank

    2014-09-24

    Recent research has highlighted limitations of studying complex systems with time-varying topologies from the perspective of static, time-aggregated networks. Non-Markovian characteristics resulting from the ordering of interactions in temporal networks were identified as one important mechanism that alters causality and affects dynamical processes. So far, an analytical explanation for this phenomenon and for the significant variations observed across different systems is missing. Here we introduce a methodology that allows us to analytically predict causality-driven changes of diffusion speed in non-Markovian temporal networks. Validating our predictions in six data sets, we show that compared with the time-aggregated network, non-Markovian characteristics can lead to either a slow-down or a speed-up of diffusion, which can even outweigh the decelerating effect of community structures in the static topology. Thus, non-Markovian properties of temporal networks constitute an important additional dimension of complexity in time-varying complex systems.
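
    A minimal sketch of the comparison underlying such studies: a random walk driven by a time-ordered contact sequence versus the same contacts with their order destroyed (a proxy for the time-aggregated null model). The contact list here is synthetic and memoryless, so the two runs are statistically equivalent; on real data with non-Markovian ordering the cover times diverge:

      # Sketch: cover time of a walk on ordered vs. order-shuffled contacts.
      import random

      random.seed(3)
      nodes = list(range(20))
      contacts = [(random.choice(nodes), random.choice(nodes)) for _ in range(5000)]

      def cover_time(seq, start=0):
          # Walk along time-ordered contacts; move whenever the current node
          # participates in a contact. Return the step at which all nodes were seen.
          pos, seen = start, {start}
          for step, (u, v) in enumerate(seq, 1):
              if pos == u:
                  pos = v
              elif pos == v:
                  pos = u
              seen.add(pos)
              if len(seen) == len(nodes):
                  return step
          return None  # not covered within the sequence

      print("time-ordered  :", cover_time(contacts))
      print("order-shuffled:", cover_time(random.sample(contacts, len(contacts))))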

  16. Group-index independent coupling to band engineered SOI photonic crystal waveguide with large slow-down factor.

    PubMed

    Rahimi, Somayyeh; Hosseini, Amir; Xu, Xiaochuan; Subbaraman, Harish; Chen, Ray T

    2011-10-24

    Group-index independent coupling to a band-engineered silicon-on-insulator (SOI) photonic crystal waveguide (PCW) is presented. A single hole size is used for designing both the PCW coupler and the band-engineered PCW to improve fabrication yield. The efficiency of several types of PCW couplers is numerically investigated. An on-chip integrated Fourier transform spectral interferometry device is used to experimentally determine the group index while excluding the effect of the couplers. Low-loss, low-dispersion slow-light transmission over an 18 nm bandwidth under the silica light line with a group index of 26.5 is demonstrated, corresponding to the largest slow-down factor (0.31) ever demonstrated for a PCW with oxide bottom cladding.

  17. FOXO/DAF-16 Activation Slows Down Turnover of the Majority of Proteins in C. elegans

    SciTech Connect

    Dhondt, Ineke; Petyuk, Vladislav A.; Cai, Huaihan; Vandemeulebroucke, Lieselot; Vierstraete, Andy; Smith, Richard D.; Depuydt, Geert; Braeckman, Bart  P.

    2016-09-13

    Most aging hypotheses assume the accumulation of damage, resulting in gradual physiological decline and, ultimately, death. Avoiding protein damage accumulation by enhanced turnover should slow down the aging process and extend the lifespan. However, lowering translational efficiency extends rather than shortens the lifespan in C. elegans. We studied turnover of individual proteins in the long-lived daf-2 mutant by combining SILeNCe (stable isotope labeling by nitrogen in Caenorhabditis elegans) and mass spectrometry. Intriguingly, the majority of proteins displayed prolonged half-lives in daf-2, whereas others remained unchanged, signifying that longevity is not supported by high protein turnover. We found that this slowdown was most prominent for translation-related and mitochondrial proteins. Conversely, the high turnover of lysosomal hydrolases and very low turnover of cytoskeletal proteins remained largely unchanged. The slowdown of protein dynamics and decreased abundance of the translational machinery may point to the importance of anabolic attenuation in lifespan extension, as suggested by the hyperfunction theory.
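
    A minimal sketch of how a protein half-life is recovered from label-incorporation time courses in turnover studies of this kind, assuming first-order kinetics; the data points and rate constant are synthetic, not the study's measurements:

      # Sketch: half-life from first-order label incorporation,
      # labeled fraction f(t) = 1 - exp(-k*t), half-life = ln(2)/k.
      import numpy as np
      from scipy.optimize import curve_fit

      def labeled_fraction(t, k):
          return 1.0 - np.exp(-k * t)

      t_days = np.array([0.5, 1, 2, 4, 8])
      f_obs = np.array([0.10, 0.19, 0.33, 0.55, 0.80])  # synthetic measurements

      (k,), _ = curve_fit(labeled_fraction, t_days, f_obs, p0=[0.2])
      print(f"k = {k:.3f}/day, half-life = {np.log(2)/k:.2f} days")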

  18. Rural Growth Slows Down.

    ERIC Educational Resources Information Center

    Henry, Mark; And Others

    1987-01-01

    After a decade of growth, rural income, population, and overall economic activity have stalled and again lag behind urban trends. Causes include banking and transportation deregulation, international competition, and agricultural finance problems. Only nonmetropolitan counties dependent on retirement, government, and trade show continuing income growth…

  19. FOXO/DAF-16 Activation Slows Down Turnover of the Majority of Proteins in C. elegans

    DOE PAGES

    Dhondt, Ineke; Petyuk, Vladislav A.; Cai, Huaihan; ...

    2016-09-13

    Most aging hypotheses assume the accumulation of damage, resulting in gradual physiological decline and, ultimately, death. Avoiding protein damage accumulation by enhanced turnover should slow down the aging process and extend the lifespan. However, lowering translational efficiency extends rather than shortens the lifespan in C. elegans. We studied turnover of individual proteins in the long-lived daf-2 mutant by combining SILeNCe (stable isotope labeling by nitrogen in Caenorhabditis elegans) and mass spectrometry. Intriguingly, the majority of proteins displayed prolonged half-lives in daf-2, whereas others remained unchanged, signifying that longevity is not supported by high protein turnover. We found that this slowdown was most prominent for translation-related and mitochondrial proteins. Conversely, the high turnover of lysosomal hydrolases and very low turnover of cytoskeletal proteins remained largely unchanged. The slowdown of protein dynamics and decreased abundance of the translational machinery may point to the importance of anabolic attenuation in lifespan extension, as suggested by the hyperfunction theory.

  20. Forward scattering due to slow-down of the intermediate in the H + HD --> D + H2 reaction

    NASA Astrophysics Data System (ADS)

    Harich, Steven A.; Dai, Dongxu; Wang, Chia C.; Yang, Xueming; Chao, Sheng Der; Skodje, Rex T.

    2002-09-01

    Quantum dynamical processes near the energy barrier that separates reactants from products influence the detailed mechanism by which elementary chemical reactions occur. In fact, these processes can change the product scattering behaviour from that expected from simple collision considerations, as seen in the two classical reactions F + H2 --> HF + H and H + H2 --> H2 + H and their isotopic variants. In the case of the F + HD reaction, the role of a quantized trapped Feshbach resonance state had been directly determined, confirming previous conclusions that Feshbach resonances cause state-specific forward scattering of product molecules. Forward scattering has also been observed in the H + D2 --> HD + D reaction and attributed to a time-delayed mechanism. But despite extensive experimental and theoretical investigations, the details of the mechanism remain unclear. Here we present crossed-beam scattering experiments and quantum calculations on the H + HD --> H2 + D reaction. We find that the motion of the system along the reaction coordinate slows down as it approaches the top of the reaction barrier, thereby allowing vibrations perpendicular to the reaction coordinate and forward scattering. The reaction thus proceeds, as previously suggested, through a well-defined `quantized bottleneck state' different from the trapped Feshbach resonance states observed before.

  1. Antiglycation and antioxidation properties of Juglans regia and Calendula officinalis: possible role in reducing diabetic complications and slowing down ageing.

    PubMed

    Ahmad, Haroon; Khan, Ibrar; Wahid, Abdul

    2012-09-01

    Accumulation of advanced glycation end products (AGEs) in the body due to the non-enzymatic glycation of proteins and oxidation is associated with aging and diabetes mellitus. In this study we investigated the antiglycation and antioxidation potential of two medicinal plants: Juglans regia and Calendula officinalis. In-vitro investigation was carried out to assess the antiglycation and antioxidation potential of J. regia and C. officinalis. Using an Ultraviolet Double-beam Spectrophotometer, we evaluated the antiglycation property of the crude methanolic extracts of J. regia and C. officinalis by assessing their ability to inhibit the Maillard reaction. Employing the same instrument, we also measured the antioxidation potential of these plant extracts using the nitric oxide (NO) free radical-scavenging assay. J. regia had greater antiglycation ability, with a minimum inhibitory concentration (MIC50) of 28 microg/mL as compared with that of C. officinalis (270 microg/mL). C. officinalis had greater antioxidation potential (26.10, 22.07 and 16.06% at 0.5 mg, 0.25 mg and 0.125 mg, respectively, as compared with 18.15, 16.50 and 16.06% for J. regia). J. regia and C. officinalis inhibited the Maillard reaction and prevented oxidation in-vitro. Hence, the extracts of these plants could have therapeutic uses in curbing chronic diabetic complications and slowing down aging.

  2. Update on Establishing the Feasibility of Lead Slowing Down Spectroscopy for Direct Measurement of Plutonium in Used Fuel

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Warren, Glen A.; Gavron, Victor A.; Danon, Yaron; Weltz, Adam; Harris, Jason; Imel, G. R.; Stewart, T.

    2013-08-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) of next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to address the feasibility of Lead Slowing Down Spectroscopy (LSDS) as an active, nondestructive assay method. LSDS has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than today’s confirmatory assay methods, for which typical uncertainties are approximately 10%. LSDS techniques are sensitive to the fission resonances in the energy range of ~0.1-1000 eV, enabling their use to determine the mass content of the fissile isotopes in used fuel. This paper will present an update with regard to applying LSDS for used fuel assay and the development of algorithms to extract fissile isotopic masses from the used fuel.

  3. D-Factor: A Quantitative Model of Application Slow-Down in Multi-Resource Shared Systems

    SciTech Connect

    Lim, Seung-Hwan; Huh, Jae-Seok; Kim, Youngjae; Shipman, Galen M; Das, Chita

    2012-01-01

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this paper, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We also show that the model can be integrated with an existing on-line scheduler to minimize the makespan of workloads.
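
    A minimal sketch of a dilation factor expressed as a quadratic function of per-job loading vectors, in the spirit of the D-factor model; the interaction matrix, loading vectors and exact functional form are illustrative assumptions, not the paper's calibrated model:

      # Sketch: slow-down of a job as a quadratic function of loading vectors.
      import numpy as np

      # Loading vectors: fraction of CPU, disk, network demanded by each job.
      jobs = {
          "cpu_bound": np.array([0.9, 0.1, 0.1]),
          "io_bound":  np.array([0.2, 0.8, 0.3]),
      }

      M = np.array([[1.0, 0.3, 0.2],   # interaction weights between resources
                    [0.3, 1.2, 0.4],
                    [0.2, 0.4, 0.8]])

      def dilation(job, others):
          # Slow-down of `job` when co-scheduled with `others` (1.0 = none).
          contention = sum(job @ M @ other for other in others)
          return 1.0 + contention

      d = dilation(jobs["cpu_bound"], [jobs["io_bound"]])
      print(f"cpu_bound runs {d:.2f}x slower when co-scheduled with io_bound")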

  4. Measurements of (n,α) cross-section of small samples using a lead-slowing-down-spectrometer

    NASA Astrophysics Data System (ADS)

    Romano, Catherine; Danon, Yaron; Haight, Robert C.; Wender, Stephen A.; Vieira, David J.; Bond, Evelyn M.; Rundberg, Robert S.; Wilhelmy, Jerry B.; O'Donnell, John M.; Michaudon, Andre F.; Bredeweg, Todd A.; Rochman, Dimitri; Granier, Thierry; Ethvignot, Thierry

    2006-06-01

    At the Los Alamos Neutron Science Center (LANSCE), a compensated ionization chamber (CIC) was placed in a lead slowing down spectrometer (LSDS) to measure the 6Li(n,α)3H cross-section as a feasibility test for further work. The LSDS consists of a 1.2 m cube of lead with a tungsten target in the center, where spallation neutrons are produced when bombarded with pulses of 800 MeV protons. The resulting neutron flux is of the order of 10^14 n/cm^2/s, which allows the cross-section measurement of samples of the order of tens of nanograms. The initial experiment measured a 91 μg sample of natural lithium fluoride. Cross-section measurements were obtained in the 0.1 eV-2 keV energy range. A 62 μg sample was placed in the chamber with a higher neutron beam intensity, and data were obtained in the 0.1-300 eV range. Adjustments in chamber dimensions and electronic configuration will improve gamma flash compensation at high beam intensity, decrease the dead time, and increase the energy range where data can be obtained. The intense neutron flux will allow the use of a smaller sample.

  5. Leaf litter traits of invasive species slow down decomposition compared to Spanish natives: a broad phylogenetic comparison.

    PubMed

    Godoy, Oscar; Castro-Díez, Pilar; Van Logtestijn, Richard S P; Cornelissen, Johannes H C; Valladares, Fernando

    2010-03-01

    Leaf traits related to the performance of invasive alien species can influence nutrient cycling through litter decomposition. However, there is no consensus yet about whether there are consistent differences in functional leaf traits between invasive and native species that also manifest themselves through their "after life" effects on litter decomposition. When addressing this question it is important to avoid confounding effects of other plant traits related to early phylogenetic divergences and to understand the mechanism underlying the observed results to predict which invasive species will exert larger effects on nutrient cycling. We compared initial leaf litter traits, and their effect on decomposability as tested in standardized incubations, in 19 invasive-native pairs of co-familial species from Spain. They included 12 woody and seven herbaceous alien species representative of the Spanish invasive flora. The predictive power of leaf litter decomposition rates followed the order: growth form > family > status (invasive vs. native) > leaf type. Within species pairs litter decomposition tended to be slower and more dependent on N and P in invaders than in natives. This difference was likely driven by the higher lignin content of invader leaves. Although our study has the limitation of not representing the natural conditions from each invaded community, it suggests a potential slowing down of the nutrient cycle at ecosystem scale upon invasion.

  6. Measurement and Analysis Plan for Investigation of Spent-Fuel Assay Using Lead Slowing-Down Spectroscopy

    SciTech Connect

    Smith, Leon E.; Haas, Derek A.; Gavron, Victor A.; Imel, G. R.; Ressler, Jennifer J.; Bowyer, Sonya M.; Danon, Y.; Beller, D.

    2009-09-25

    Under funding from the Department of Energy Office of Nuclear Energy’s Materials, Protection, Accounting, and Control for Transmutation (MPACT) program (formerly the Advanced Fuel Cycle Initiative Safeguards Campaign), Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory (LANL) are collaborating to study the viability of lead slowing-down spectroscopy (LSDS) for spent-fuel assay. Based on the results of previous simulation studies conducted by PNNL and LANL to estimate potential LSDS performance, a more comprehensive study of LSDS viability has been defined. That study includes benchmarking measurements, development and testing of key enabling instrumentation, and continued study of time-spectra analysis methods. This report satisfies the requirements for a PNNL/LANL deliverable that describes the objectives, plans and contributing organizations for a comprehensive three-year study of LSDS for spent-fuel assay. This deliverable was generated largely during the LSDS workshop held on August 25-26, 2009 at Rensselaer Polytechnic Institute (RPI). The workshop itself was a prominent milestone in the FY09 MPACT project and is also described within this report.

  7. Slow Down and Concentrate: Time for a Paradigm Shift in Fall Prevention among People with Parkinson's Disease?

    PubMed

    Stack, Emma L; Roberts, Helen C

    2013-01-01

    Introduction. We know little about how environmental challenges beyond home exacerbate difficulty moving, leading to falls among people with Parkinson's (PwP). Aims. To survey falls beyond home, identifying challenges amenable to behaviour change. Methods. We distributed 380 questionnaires to PwP in Southern England, asking participants to count and describe falls beyond home in the previous 12 months. Results. Among 255 responses, 136 PwP (diagnosed a median 8 years) reported falling beyond home. They described 249 falls in detail, commonly falling forward after tripping in streets. Single fallers (one fall in 12 months) commonly missed their footing, walking, or changing position and recovered to standing alone or with unfamiliar help. Repeat fallers (median falls, two) commonly felt shaken or embarrassed and sought medical advice. Very frequent fallers (falling at least monthly; median falls beyond home, six) commonly fell backward, in shops and after collapse but often recovered to standing alone. Conclusion. Even independently active PwP who do not fall at home may fall beyond home, often after tripping. Falling beyond home may result in psychological and/or physical trauma (embarrassment if observed by strangers and/or injury if falling backwards onto a hard surface). Prevention requires vigilance and preparedness: slowing down and concentrating on a single task might effectively prevent falling.

  8. Epigenomic maintenance through dietary intervention can facilitate DNA repair process to slow down the progress of premature aging.

    PubMed

    Ghosh, Shampa; Sinha, Jitendra Kumar; Raghunath, Manchala

    2016-09-01

    DNA damage caused by various sources remains one of the most researched topics in the area of aging and neurodegeneration. Increased DNA damage causes premature aging. Aging is plastic and is characterised by the decline in the ability of a cell/organism to maintain genomic stability. Lifespan can be modulated by various interventions like calorie restriction, a balanced diet of macro and micronutrients or supplementation with nutrients/nutrient formulations such as Amalaki rasayana, docosahexaenoic acid, resveratrol, curcumin, etc. Increased levels of DNA damage in the form of double stranded and single stranded breaks are associated with decreased longevity in animal models like WNIN/Ob obese rats. Erroneous DNA repair can result in accumulation of DNA damage products, which in turn result in premature aging disorders such as Hutchinson-Gilford progeria syndrome. Epigenomic studies of the aging process have opened a completely new arena for research and development of drugs and therapeutic agents. We propose here that agents or interventions that can maintain epigenomic stability and facilitate the DNA repair process can slow down the progress of premature aging, if not completely prevent it. © 2016 IUBMB Life, 68(9):717-721, 2016.

  9. [Slowing down the rate of irreversible age-related atrophy of the thymus gland by atopic autotransplantation of its tissue, subjected to long-term cryoconservation].

    PubMed

    Kulikov, A V; Arkhipova, L V; Smirnova, G N; Novoselova, E G; Shpurova, N A; Shishova, N V; Sukhikh, G T

    2010-01-01

    An experimental procedure has been developed that makes it possible to slow down the rate of irreversible atrophy of the thymus gland. Atopic autotransplantation of thymus tissue subjected to prolonged cryoconservation enables one to inhibit the aging of the organism with respect to several biochemical and immunological indicators.

  10. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  11. Comparison of Monte Carlo calculated electron slowing-down spectra generated by 60Co γ-rays, electrons, protons and light ions

    NASA Astrophysics Data System (ADS)

    Tilly, N.; Fernández-Varea, J. M.; Grusell, E.; Brahme, A.

    2002-04-01

    When analysing the factors affecting the relative biological effectiveness (RBE) of different radiation qualities, it is essential to consider particularly the low-energy slowing-down electrons (around 100 eV to 1 keV) since they have the potential of inflicting severe damage to the DNA. We present a modified and extended version of the Monte Carlo code PENELOPE that enables scoring of slowing-down spectra, mean local energy imparted spectra and average intra-track nearest-neighbour energy deposition distances of the secondary electrons generated by different radiation qualities, such as electrons, photons, protons and light ions in general. The resulting spectra show that the low-linear energy transfer (LET) beams, 60Co γ-rays and electrons with initial energies of 0.1 MeV and higher, have as expected approximately the same electron slowing-down fluence per unit dose in the biologically important low-energy interval. Consistent with the general behaviour of the RBE of low-energy electrons, protons and light ions, the low-energy electron slowing-down fluence per unit dose is larger than for low-LET beams, and it increases with decreasing initial projectile energy.

  13. Therapeutic dosages of aspirin counteract the IL-6 induced pro-tumorigenic effects by slowing down the ribosome biogenesis rate

    PubMed Central

    Brighenti, Elisa; Giannone, Ferdinando Antonino; Fornari, Francesca; Onofrillo, Carmine; Govoni, Marzia; Montanaro, Lorenzo; Treré, Davide; Derenzini, Massimo

    2016-01-01

    Chronic inflammation is a risk factor for the onset of cancer and the regular use of aspirin reduces the risk of cancer development. Here we showed that therapeutic dosages of aspirin counteract the pro-tumorigenic effects of the inflammatory cytokine interleukin (IL)-6 in cancer and non-cancer cell lines, and in mouse liver in vivo. We found that therapeutic dosages of aspirin prevented IL-6 from inducing the down-regulation of p53 expression and the acquisition of the epithelial mesenchymal transition (EMT) phenotypic changes in the cell lines. This was the result of a reduction in c-Myc mRNA transcription, which was responsible for a down-regulation of ribosomal protein S6 expression which, in turn, slowed down the rRNA maturation process, thus reducing the ribosome biogenesis rate. The perturbation of ribosome biogenesis hindered the Mdm2-mediated proteasomal degradation of p53, through the ribosomal protein-Mdm2-p53 pathway. P53 stabilization hindered the IL-6 induction of the EMT changes. The same effects were observed in livers from mice stimulated with IL-6 and treated with aspirin. It is worth noting that aspirin also down-regulated ribosome biogenesis, stabilized p53 and up-regulated E-cadherin expression in unstimulated control cells. In conclusion, these data showed that therapeutic dosages of aspirin increase the p53-mediated tumor-suppressor activity of the cells, thereby reducing the risk of cancer onset, whether or not it is linked to chronic inflammatory processes. PMID:27557515

  14. A method for (n,alpha) and (n,p) cross section measurements using a lead slowing-down spectrometer

    NASA Astrophysics Data System (ADS)

    Thompson, Jason Tyler

    The need for nuclear data comes from several sources including astrophysics, stockpile stewardship, and reactor design. Photodisintegration, neutron capture, and charged particle out reactions on stable or short-lived radioisotopes play crucial roles during stellar evolution and in forming solar isotopic abundances, whereas these reactions can affect the safety of our national weapons stockpile or criticality and safety calculations for reactors. Although models can be used to predict some of these values, these predictions are only as good as the experimental data that constrain them. For neutron-induced emission of α particles and protons ((n,α) and (n,p) reactions) at energies below 1 MeV, the experimental data are at best scarce, and models must rely on extrapolations from unlike situations (i.e. different reactions, isotopes, and energies), providing ample room for uncertainty. In this work a new method of measuring energy-dependent (n,α) and (n,p) cross sections was developed for the energy range of 0.1 eV - ~100 keV using a lead slowing-down spectrometer (LSDS). The LSDS provides a ~10^4 increase in neutron flux over the more conventionally used time-of-flight (ToF) methods at equivalent beam conditions, allowing for the measurement of small cross sections (µb's to mb's) while using small sample masses (µg's to mg's). Several detector concepts were designed and tested, including specially constructed Canberra passivated, implanted, planar silicon (PIPS) detectors and gas-electron-multiplier (GEM) foils. All designs are compensated to minimize γ-flash problems. The GEM detector was found to function satisfactorily for (n,α) measurements, but the PIPS detectors were found to be better suited for (n,p) reaction measurements. A digital data acquisition (DAQ) system was programmed such that background can be measured simultaneously with the reaction cross section. Measurements of the 147Sm(n,α)144Nd and 149Sm(n,α)146Nd reaction cross sections were

  15. Critical slowing down of polar nano regions ensemble in Gd3+-substituted PbMg1/3Nb2/3O3 ceramics

    NASA Astrophysics Data System (ADS)

    Pandey, Adityanarayan H.; Gupta, S. M.; Lalla, N. P.; Nigam, A. K.

    2017-07-01

    Investigations of Gd-substituted lead magnesium niobate (Pb1-xGdxMg(1+x)/3Nb(2-x)/3O3; x = 0.01-0.1) ceramics have revealed critical slowing down of the polar nano regions (PNRs) ensemble into a "super-dipolar glass state" for Gd substitutions x ≥ 0.05. A low-temperature electric-field-induced polarization switching (P-E) study revealed a sharp decrease in the remanent polarization up to x = 0.03, which supports the critical slowing down of the polar nano-domain dynamics and suggests a reduction in the correlation between or within polar nano regions, leading to a reduction in their size. Bright-field imaging using a transmission electron microscope has also confirmed the reduction of the size of polar nano regions with increasing x. Selected-area electron diffraction along the ⟨110⟩ axis has revealed an enhancement in the intensity of the superlattice reflection spot at ½ ½ ½ along the ⟨111⟩ axis with increasing x, which is associated with the growth of chemically ordered regions and correlates well with the enhancement of the diffuseness parameter "δA" determined from fitting the temperature-dependent dielectric constant ɛ(T) above the dielectric maximum peak (ɛmax). The enhanced "δA" for x ≥ 0.05 is due to additional disorder created by Gd ions substituting at the Mg site, which is consistent with the phase and microstructural analysis. Fitting the frequency-dependent Tm (the temperature of ɛmax) to the power law of critical dynamics yields realistic pre-factor fitting parameters for x ≥ 0.05, suggesting critical slowing down of the polar nano-domain dynamics ensemble and a resulting super-dipolar glass state.

  16. "You can save time if…"-A qualitative study on internal factors slowing down clinical trials in Sub-Saharan Africa.

    PubMed

    Vischer, Nerina; Pfeiffer, Constanze; Limacher, Manuela; Burri, Christian

    2017-01-01

    The costs, complexity, legal requirements and number of amendments associated with clinical trials are rising constantly, which negatively affects the efficient conduct of trials. In Sub-Saharan Africa, this situation is exacerbated by capacity and funding limitations, which further increase the workload of clinical trialists. At the same time, trials are critically important for improving public health in these settings. The aim of this study was to identify the internal factors that slow down clinical trials in Sub-Saharan Africa. Here, factors are limited to those that exclusively relate to clinical trial teams and sponsors. These factors may be influenced independently of external conditions and may significantly increase trial efficiency if addressed by the respective teams. We conducted sixty key informant interviews with clinical trial staff working in different positions in two clinical research centres in Kenya, Ghana, Burkina Faso and Senegal. The study covered English- and French-speaking, and Eastern and Western parts of Sub-Saharan Africa. We performed thematic analysis of the interview transcripts. We found various internal factors associated with slowing down clinical trials; these were summarised into two broad themes, "planning" and "site organisation". These themes were consistently mentioned across positions and countries. "Planning" factors related to budget feasibility, clear project ideas, realistic deadlines, understanding of trial processes, adaptation to the local context and involvement of site staff in planning. "Site organisation" factors covered staff turnover, employment conditions, career paths, workload, delegation and management. We found that internal factors slowing down clinical trials are of high importance to trial staff. Our data suggest that adequate and coherent planning, careful assessment of the setting, clear task allocation and management capacity strengthening may help to overcome the identified internal factors and

  17. "You can save time if…"—A qualitative study on internal factors slowing down clinical trials in Sub-Saharan Africa

    PubMed Central

    Pfeiffer, Constanze; Limacher, Manuela; Burri, Christian

    2017-01-01

    Background The costs, complexity, legal requirements and number of amendments associated with clinical trials are rising constantly, which negatively affects the efficient conduct of trials. In Sub-Saharan Africa, this situation is exacerbated by capacity and funding limitations, which further increase the workload of clinical trialists. At the same time, trials are critically important for improving public health in these settings. The aim of this study was to identify the internal factors that slow down clinical trials in Sub-Saharan Africa. Here, factors are limited to those that exclusively relate to clinical trial teams and sponsors. These factors may be influenced independently of external conditions and may significantly increase trial efficiency if addressed by the respective teams. Methods We conducted sixty key informant interviews with clinical trial staff working in different positions in two clinical research centres in Kenya, Ghana, Burkina Faso and Senegal. The study covered English- and French-speaking, and Eastern and Western parts of Sub-Saharan Africa. We performed thematic analysis of the interview transcripts. Results We found various internal factors associated with slowing down clinical trials; these were summarised into two broad themes, “planning” and “site organisation”. These themes were consistently mentioned across positions and countries. “Planning” factors related to budget feasibility, clear project ideas, realistic deadlines, understanding of trial processes, adaptation to the local context and involvement of site staff in planning. “Site organisation” factors covered staff turnover, employment conditions, career paths, workload, delegation and management. Conclusions We found that internal factors slowing down clinical trials are of high importance to trial staff. Our data suggest that adequate and coherent planning, careful assessment of the setting, clear task allocation and management capacity strengthening may

  18. Effect of the size of experimental channels of the lead slowing-down spectrometer SVZ-100 (Institute for Nuclear Research, Moscow) on the moderation constant

    NASA Astrophysics Data System (ADS)

    Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M.; Ilić, R. D.

    2013-04-01

    Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10^3 to 10^4 times higher than that of time-of-flight (TOF) spectrometers. The high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross sections for samples with masses of only a few micrograms. These features define a niche for the application of LSD spectrometers in measuring neutron cross sections for elements hardly available in macroscopic amounts, in particular for actinides. A mathematical simulation of the parameters of the SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive in calculations to the size and shape of the detecting volumes and, hence, to the real size of the experimental channels of the LSD spectrometer.
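
    A minimal sketch of the slowing-down relation that underlies LSD spectrometry, E = K/(t + t0)^2, with K the moderation constant discussed above; the constants below are order-of-magnitude placeholders, not the SVZ-100's calibrated values:

      # Sketch: converting slowing-down time to neutron energy, E = K/(t + t0)**2.
      import numpy as np

      K = 165.0e3   # moderation constant, eV*us^2 (placeholder magnitude)
      t0 = 0.3      # time offset, us (placeholder)

      t_us = np.array([1, 3, 10, 30, 100, 300])   # slowing-down time, us
      E_eV = K / (t_us + t0) ** 2
      for t, E in zip(t_us, E_eV):
          print(f"t = {t:5.0f} us -> E ~ {E:10.1f} eV")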

  19. Effect of the size of experimental channels of the lead slowing-down spectrometer SVZ-100 (Institute for Nuclear Research, Moscow) on the moderation constant

    SciTech Connect

    Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M.; Ilic, R. D.

    2013-04-15

    Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10^3 to 10^4 times higher than that of time-of-flight (TOF) spectrometers. The high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross sections for samples with masses of only a few micrograms. These features define a niche for the application of LSD spectrometers in measuring neutron cross sections for elements hardly available in macroscopic amounts, in particular for actinides. A mathematical simulation of the parameters of the SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive in calculations to the size and shape of the detecting volumes and, hence, to the real size of the experimental channels of the LSD spectrometer.

  20. The range of penetration and the backscattering coefficient by using both the analytic and the stochastic theoretical ways of electron slowing down in solid targets: Comparative study

    NASA Astrophysics Data System (ADS)

    Bentabet, A.

    2011-04-01

    The aim of this paper is to determine which theoretical approach, stochastic or analytic, is better suited to studying the penetration range and the backscattering coefficient of electrons impinging on solid targets, by adopting the same input in the form of collision cross sections. For this purpose, differential elastic cross sections have been calculated using our semi-empirical model [A. Bentabet, Z. Chaoui, A. Aydin, A. Azbouche, Vacuum 85 (2010) 156], and inelastic cross sections have been calculated using Gryzinski's excitation function. Moreover, in the stochastic case, the quantities of interest are calculated using Monte Carlo schemes based both on the continuous slowing-down approximation (CSDA) and on individual electron scattering events.
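
    A minimal sketch of the CSDA range implied above, R = ∫ dE / S(E), integrated numerically; the stopping-power function is a crude illustrative power law, not the Gryzinski-based input of the paper:

      # Sketch: numerical CSDA range, R = integral of dE / S(E).
      import numpy as np

      def stopping_power(E_keV):
          # Illustrative electron stopping power, keV per nm (assumed form).
          return 0.02 * E_keV ** -0.7 * np.log(1.0 + E_keV)

      E = np.linspace(0.5, 30.0, 2000)            # keV, cutoff to initial energy
      R = np.trapz(1.0 / stopping_power(E), E)    # CSDA range in nm
      print(f"CSDA range for a 30 keV electron: ~{R:.0f} nm")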

  1. Critical slowing down and elastic anomaly of uniaxial ferroelectric Ca0.28Ba0.72Nb2O6 crystals with tungsten bronze structure

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Matsumoto, K.; Dec, J.; Łukasiewicz, T.; Kleemann, W.; Kojima, S.

    2014-08-01

    The ferroelectric phase transition of uniaxial Ca0.28Ba0.72Nb2O6 single crystals with a moderate effective charge disorder was investigated by Brillouin scattering to clarify the dynamic properties. In the tetragonal paraelectric phase, a remarkable softening of the sound velocity of the longitudinal acoustic mode and a significant increase in the sound attenuation were observed close to the Curie temperature TC = 527 K. The intermediate temperature T* ≈ 640 K and the Burns temperature TB ≈ 790 K were determined from the temperature variation of the sound attenuation. The intense broad central peak (CP) caused by polarization and strain fluctuations due to polar nanoregions was clearly observed in the vicinity of TC. The relaxation time determined from the CP width clearly shows critical slowing down towards TC, reflecting a weakly first-order phase transition under weak random fields.

  2. Slow-down or speed-up of inter- and intra-cluster diffusion of controversial knowledge in stubborn communities based on a small world network

    NASA Astrophysics Data System (ADS)

    Ausloos, Marcel

    2015-06-01

    Diffusion of knowledge is expected to be extensive when agents are open minded. This report concerns the more difficult case of diffusion within communities of stubborn agents. Examples of communities holding markedly different opinions are the Neocreationist and Intelligent Design Proponents (IDP), on the one hand, and the Darwinian Evolution Defenders (DED), on the other. The case of knowledge diffusion within such communities is studied here on a network based on an adjacency matrix built from time-ordered selected quotations of agents, covering both inter- and intra-community links. The network is intrinsically directed and not necessarily reciprocal. Thus, the adjacency matrices have complex eigenvalues and the eigenvectors have complex components. A quantification of the slow-down or speed-up of information diffusion in such temporal networks, with non-Markovian contact sequences, can be made by comparing the real time-dependent (directed) network to its counterpart, the time-aggregated (undirected) network, which has real eigenvalues. To do so, small world networks containing an odd number of nodes are studied and compared to similar networks with an even number of nodes. It is found that (i) the diffusion of knowledge is more difficult on the larger networks; (ii) the network size influences the slowing down or speeding up of the diffusion process. Interestingly, it is observed that (iii) the diffusion of knowledge is slower in IDP and faster in DED communities. It is suggested that this finding can be "rationalized" if some "scientific quality" and "publication habit" are attributed to the agents, as common sense would suggest. This finding opens a discussion on tying scientific knowledge to belief.

  3. Weighted Bergman kernels and virtual Bergman kernels

    NASA Astrophysics Data System (ADS)

    Roos, Guy

    2005-12-01

    We introduce the notion of "virtual Bergman kernel" and apply it to the computation of the Bergman kernel of "domains inflated by Hermitian balls", in particular when the base domain is a bounded symmetric domain.
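
    For orientation, a classical closed form of the kind this computation generalizes (a textbook example, not taken from this record): the Bergman kernel of the unit disk $\mathbb{D}\subset\mathbb{C}$ is

    $$ K(z, w) \;=\; \frac{1}{\pi\,\bigl(1 - z\bar{w}\bigr)^{2}}, $$

    and "inflating" a base domain by Hermitian balls changes the kernel in a way that the virtual Bergman kernel is introduced to compute.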

  4. Single and multiple resistance QTL delay symptom appearance and slow down root colonization by Aphanomyces euteiches in pea near isogenic lines.

    PubMed

    Lavaud, C; Baviere, M; Le Roy, G; Hervé, M R; Moussart, A; Delourme, R; Pilet-Nayel, M-L

    2016-07-27

    Understanding the effects of resistance QTL on pathogen development cycle is an important issue for the creation of QTL combination strategies to durably increase disease resistance in plants. The oomycete pathogen Aphanomyces euteiches, causing root rot disease, is one of the major factors limiting the pea crop in the main producing countries. No commercial resistant varieties are currently available in Europe. Resistance alleles at seven main QTL were recently identified and introgressed into pea agronomic lines, resulting in the creation of Near Isogenic Lines (NILs) at the QTL. This study aimed to determine the effect of main A. euteiches resistance QTL in NILs on different steps of the pathogen life cycle. NILs carrying resistance alleles at main QTL in susceptible genetic backgrounds were evaluated in a destructive test under controlled conditions. The development of root rot disease severity and pathogen DNA levels in the roots was measured during ten days after inoculation. Significant effects of several resistance alleles at the two major QTL Ae-Ps7.6 and Ae-Ps4.5 were observed on symptom appearance and root colonization by A. euteiches. Some resistance alleles at three other minor-effect QTL (Ae-Ps2.2, Ae-Ps3.1 and Ae-Ps5.1) significantly decreased root colonization. The combination of resistance alleles at two or three QTL including the major QTL Ae-Ps7.6 (Ae-Ps5.1/Ae-Ps7.6 or Ae-Ps2.2/Ae-Ps3.1/Ae-Ps7.6) had an increased effect on delaying symptom appearance and/or slowing down root colonization by A. euteiches and on plant resistance levels, compared to the effects of individual or no resistance alleles. This study demonstrated the effects of single or multiple resistance QTL on delaying symptom appearance and/or slowing down colonization by A. euteiches in pea roots, using original plant material and a precise pathogen quantification method. Our findings suggest that single resistance QTL can act on multiple or specific steps of the disease development

  5. Slow-downs and speed-ups of India-Eurasia convergence since ˜20Ma: Data-noise, uncertainties and dynamic implications

    NASA Astrophysics Data System (ADS)

    Iaffaldano, Giampiero; Bodin, Thomas; Sambridge, Malcolm

    2013-04-01

    India-Somalia and North America-Eurasia relative motions since Early Miocene (˜20Ma) have been recently reconstructed at unprecedented temporal resolution (<1Myr) from magnetic surveys of the Carlsberg and northern Mid-Atlantic Ridges. These new datasets revamped interest in the convergence of India relative to Eurasia, which is obtained from the India-Somalia-Nubia-North America-Eurasia plate circuit. Unless finite rotations are arbitrarily smoothed through time, however, the reconstructed kinematics (i.e. stage Euler vectors) appear to be surprisingly unusual over the past ˜20Myr. In fact, the Euler pole for the India-Eurasia rigid motion scattered erratically over a broad region, while the associated angular velocity underwent sudden increases and decreases. Consequently, convergence across the Himalayan front featured significant speed-ups as well as slow-downs with almost no consistent trend. Arguably, this pattern arises from the presence of data-noise, which biases kinematic reconstructions—particularly at high temporal resolution. The rapid and important India-Eurasia plate-motion changes reconstructed since Early Miocene are likely to be of apparent nature, because they cannot result even from the most optimistic estimates of torques associated, for instance, with the descent of the Indian slab into Earth's mantle. Our previous work aimed at reducing noise in finite-rotation datasets via an expanded Bayesian formulation, which offers several advantages over arbitrary smoothing methods. Here we build on this advance and revise the India-Eurasia kinematics since ˜20Ma, accounting also for three alternative histories of rifting in Africa. We find that India-Eurasia kinematics are simpler and, most importantly, geodynamically plausible upon noise reduction. Convergence across the Himalayan front overall decreased until ˜10Ma, but then systematically increased, albeit moderately, towards the present-day. We test with global dynamic models of the coupled
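
    The "stage Euler vector" bookkeeping at issue is compact enough to sketch. Given two total-reconstruction rotations bounding a stage, the stage rotation is their difference, and the angular velocity is its angle divided by the stage duration. The rotations below are hypothetical values, and composition-order conventions differ between reconstruction packages:

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation

    # Two hypothetical total-reconstruction rotations of a plate pair at
    # 20 Ma and 19 Ma (illustrative numbers, not reconstruction data).
    ax_20 = np.array([0.30, 0.50, 0.81]); ax_20 /= np.linalg.norm(ax_20)
    ax_19 = np.array([0.32, 0.50, 0.80]); ax_19 /= np.linalg.norm(ax_19)
    R_20 = Rotation.from_rotvec(np.deg2rad(14.0) * ax_20)
    R_19 = Rotation.from_rotvec(np.deg2rad(13.2) * ax_19)

    # Stage rotation for the 20-19 Ma interval (one common convention).
    stage = R_19.inv() * R_20

    rotvec = stage.as_rotvec()                     # axis * angle, radians
    angle_deg = np.rad2deg(np.linalg.norm(rotvec))
    print("stage pole (unit axis):", rotvec / np.linalg.norm(rotvec))
    print("angular velocity (deg/Myr):", round(angle_deg / 1.0, 3))  # 1 Myr stage
    ```

    Because a stage rotation is the difference of two nearly equal finite rotations, small noise in either pole is strongly amplified, which is precisely why unsmoothed high-resolution reconstructions scatter erratically and why the Bayesian noise reduction discussed here helps.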

  6. Slow-downs and speed-ups of India-Eurasia convergence since ~20 Ma: Data-noise, uncertainties and dynamic implications

    NASA Astrophysics Data System (ADS)

    Iaffaldano, G.; Bodin, T.; Sambridge, M.

    2012-12-01

    India-Somalia and North America-Eurasia relative motions since Early Miocene (~20 Ma) have been recently reconstructed at unprecedented temporal resolution from magnetic surveys of the Carlsberg and northern Mid-Atlantic Ridges. These new datasets revamped interest in the convergence of India relative to Eurasia, which is obtained from the India-Somalia-Nubia-North America-Eurasia plate circuit. Unless finite rotations are arbitrarily smoothed through time, however, the reconstructed kinematics (i.e. stage Euler vectors) appear to be surprisingly unusual over the past ~20 Myr. In fact, the Euler pole for the India-Eurasia rigid motion scattered erratically over a broad region, while the associated angular velocity underwent sudden increases and decreases. As a consequence, convergence across the Himalayan front featured significant speed-ups as well as slow-downs with almost no consistent trend. Arguably, this pattern arises from the presence of data-noise that biases kinematic reconstructions, particularly at high temporal resolution. The rapid and important India-Eurasia plate-motion changes reconstructed since Early Miocene are likely to be of apparent nature, because they cannot result even from the most optimistic estimates of torques associated, for instance, with the descent of the Indian slab into Earth's mantle. Our recent work aimed at reducing noise in finite-rotation datasets via an expanded Bayesian formulation, which offers several advantages over arbitrary smoothing methods. Here we build on this advance and revise the India-Eurasia kinematics since ~20 Ma, accounting also for three alternative histories of rifting in Africa. We find that India-Eurasia kinematics are simpler and, most importantly, geodynamically plausible upon noise reduction. Convergence across the Himalayan front decreased systematically until ~10 Ma, but then increased moderately until the present-day. We test with global dynamic models of the coupled mantle/lithosphere system how

  7. Information slows down hierarchy growth

    NASA Astrophysics Data System (ADS)

    Czaplicka, Agnieszka; Suchecki, Krzysztof; Miñano, Borja; Trias, Miquel; Hołyst, Janusz A.

    2014-06-01

    We consider models of growing multilevel systems wherein the growth process is driven by rules of tournament selection. A system can be conceived as an evolving tree with a new node being attached to a contestant node at the best hierarchy level (a level nearest the tree root). The proposed evolution reflects the limited information on system properties available to new nodes. It can also be expressed in terms of population dynamics. Two models are considered: a constant tournament (CT) model, wherein the number of tournament participants is constant throughout system evolution, and a proportional tournament (PT) model, where this number increases proportionally to the growing size of the system itself. The results of analytical calculations based on a rate equation agree well with numerical simulations for both models. In the CT model all hierarchy levels emerge, but the birth time of each consecutive hierarchy level increases exponentially or faster. The number of nodes at the first hierarchy level grows logarithmically in time, while the size of the last, "worst" hierarchy level oscillates quasi-log-periodically. In the PT model, the occupations of the first two hierarchy levels increase linearly, but worse hierarchy levels either do not emerge at all or appear only by chance in the early stage of system evolution and then stop growing altogether. The results allow us to conclude that information available to each new node in tournament dynamics restrains the emergence of new hierarchy levels, and that it is the absolute amount of information, not the relative amount, which governs this behavior.
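
    The CT model is simple enough to simulate directly; a minimal sketch (parameter values are illustrative, and the PT variant would let k grow in proportion to the current system size):

    ```python
    import random

    # Constant tournament (CT) growth model as described above: each new
    # node attaches to the contestant at the best hierarchy level
    # (smallest depth, nearest the root) among k randomly chosen nodes.

    def grow_ct_tree(n_nodes=10000, k=3, seed=1):
        random.seed(seed)
        depth = [0]                       # node 0 is the root, depth 0
        for _ in range(n_nodes - 1):
            contestants = random.choices(range(len(depth)), k=k)
            winner = min(contestants, key=lambda i: depth[i])
            depth.append(depth[winner] + 1)   # attach below the winner
        return depth

    depths = grow_ct_tree()
    print("hierarchy levels emerged:", max(depths) + 1)
    print("nodes at first level:", sum(1 for d in depths if d == 1))
    ```

    Tracking the occupation of each depth over time in such runs can be compared against the signatures described above, such as the slow logarithmic growth of the first level.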

  8. Information slows down hierarchy growth.

    PubMed

    Czaplicka, Agnieszka; Suchecki, Krzysztof; Miñano, Borja; Trias, Miquel; Hołyst, Janusz A

    2014-06-01

    We consider models of growing multilevel systems wherein the growth process is driven by rules of tournament selection. A system can be conceived as an evolving tree with a new node being attached to a contestant node at the best hierarchy level (a level nearest the tree root). The proposed evolution reflects the limited information on system properties available to new nodes. It can also be expressed in terms of population dynamics. Two models are considered: a constant tournament (CT) model, wherein the number of tournament participants is constant throughout system evolution, and a proportional tournament (PT) model, where this number increases proportionally to the growing size of the system itself. The results of analytical calculations based on a rate equation agree well with numerical simulations for both models. In the CT model all hierarchy levels emerge, but the birth time of each consecutive hierarchy level increases exponentially or faster. The number of nodes at the first hierarchy level grows logarithmically in time, while the size of the last, "worst" hierarchy level oscillates quasi-log-periodically. In the PT model, the occupations of the first two hierarchy levels increase linearly, but worse hierarchy levels either do not emerge at all or appear only by chance in the early stage of system evolution and then stop growing altogether. The results allow us to conclude that information available to each new node in tournament dynamics restrains the emergence of new hierarchy levels, and that it is the absolute amount of information, not the relative amount, which governs this behavior.

  9. Why does diversification slow down?

    PubMed

    Moen, Daniel; Morlon, Hélène

    2014-04-01

    Studies of phylogenetic diversification often show evidence for slowdowns in diversification rates over the history of clades. Recent studies seeking biological explanations for this pattern have emphasized the role of niche differentiation, as in hypotheses of adaptive radiation and ecological limits to diversity. Yet many other biological explanations might underlie diversification slowdowns. In this paper, we focus on the geographic context of diversification, environment-driven bursts of speciation, failure of clades to keep pace with a changing environment, and protracted speciation. We argue that, despite being currently underemphasized, these alternatives represent biologically plausible explanations that should be considered along with niche differentiation. Testing the importance of these alternative hypotheses might yield fundamentally different explanations for what influences species richness within clades through time.

  10. Some new results on electron transport in the atmosphere. [Monte Carlo calculation of penetration, diffusion, and slowing down of electron beams in air

    NASA Technical Reports Server (NTRS)

    Berger, M. J.; Seltzer, S. M.; Maeda, K.

    1972-01-01

    The penetration, diffusion and slowing down of electrons in a semi-infinite air medium have been studied by the Monte Carlo method. The results are applicable to the atmosphere at altitudes up to 300 km. Most of the results pertain to monoenergetic electron beams injected into the atmosphere at a height of 300 km, either vertically downwards or with a pitch-angle distribution isotropic over the downward hemisphere. Some results were also obtained for various initial pitch angles between 0 deg and 90 deg. Information has been generated concerning the following topics: (1) the backscattering of electrons from the atmosphere, expressed in terms of backscattering coefficients, angular distributions and energy spectra of reflected electrons, for incident energies T(o) between 2 keV and 2 MeV; (2) energy deposition by electrons as a function of altitude, down to 80 km, for T(o) between 2 keV and 2 MeV; (3) the corresponding energy deposition by electron-produced bremsstrahlung, down to 30 km; (4) the evolution of the electron flux spectrum as a function of atmospheric depth, for T(o) between 2 keV and 20 keV. Energy deposition results are given for incident electron beams with exponential and power-exponential spectra.

  11. Application of manure containing tetracyclines slowed down the dissipation of tet resistance genes and caused changes in the composition of soil bacteria.

    PubMed

    Xiong, Wenguang; Wang, Mei; Dai, Jinjun; Sun, Yongxue; Zeng, Zhenling

    2017-09-09

    Manure application contributes to the increased environmental burden of antibiotic resistance genes (ARGs). We investigated the response of tetracycline (tet) resistance genes and bacterial taxa to manure application amended with tetracyclines over two months. Representative tetracyclines (oxytetracycline, chlorotetracycline and doxycycline), tet resistance genes (tet(M), tet(O), tet(W), tet(S), tet(Q) and tet(X)) and bacterial taxa in the untreated soil, +manure, and +manure+tetracyclines groups were analyzed. The abundances of all tet resistance genes in the +manure group were significantly higher than those in the untreated soil group on day 1. The abundances of all tet resistance genes (except tet(Q) and tet(X)) were significantly lower in the +manure group than in the +manure+tetracyclines group on days 30 and 60. The dissipation rates were higher in the +manure group than in the +manure+tetracyclines group. A disturbance of soil bacterial community composition imposed by tetracyclines was also observed. The results indicated that tetracyclines slowed down the dissipation of tet resistance genes in arable soil after manure application. Application of manure amended with tetracyclines may provide a significant selective advantage for species affiliated with the taxonomic families Micromonosporaceae, Propionibacteriaceae, Streptomycetaceae, Nitrospiraceae and Clostridiaceae. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Asparagine slows down the breakdown of storage lipid and degradation of autophagic bodies in sugar-starved embryo axes of germinating lupin seeds.

    PubMed

    Borek, Sławomir; Paluch-Lubawa, Ewelina; Pukacka, Stanisława; Pietrowska-Borek, Małgorzata; Ratajczak, Lech

    2017-02-01

    The research was conducted on embryo axes of yellow lupin (Lupinus luteus L.), white lupin (Lupinus albus L.) and Andean lupin (Lupinus mutabilis Sweet), which were isolated from imbibed seeds and cultured for 96 h in vitro under different conditions of carbon and nitrogen nutrition. Isolated embryo axes were fed with 60 mM sucrose or were sugar-starved. The effect of 35 mM asparagine (a central amino acid in the metabolism of germinating lupin seeds) and 35 mM nitrate (used as an inorganic kind of nitrogen) on growth, storage lipid breakdown and autophagy was investigated. The sugar-starved isolated embryo axes contained more total lipid than axes fed with sucrose, and the content of this storage compound was even higher in sugar-starved isolated embryo axes fed with asparagine. Ultrastructural observations showed that asparagine significantly slowed down decomposition of autophagic bodies, and this allowed detailed analysis of their content. We found peroxisomes inside autophagic bodies in cells of sugar-starved Andean lupin embryo axes fed with asparagine, which led us to conclude that peroxisomes may be degraded during autophagy in sugar-starved isolated lupin embryo axes. One reason for the slower degradation of autophagic bodies was the markedly lower lipolytic activity in axes fed with asparagine. Copyright © 2016 The Author(s). Published by Elsevier GmbH. All rights reserved.

  13. Antihypertensive treatment with cerebral hemodynamics monitoring by ultrasonography in elderly hypertensives without a history of stroke may prevent or slow down cognitive decline. A pending issue.

    PubMed

    Hadjiev, Dimiter I; Mineva, Petya P

    2011-03-01

    The role of antihypertensive therapy in preventing cognitive disorders in elderly persons without a history of stroke is still a matter of debate. This article focuses on the pathogenesis of vascular cognitive disorders in hypertension and on the impact of antihypertensive treatment in their prevention. Cerebral white matter lesions, caused by small vessel disease and cerebral hypoperfusion, have been found in the majority of elderly hypertensives. They correlate with cognitive disorders, particularly impairments of attention and executive functions. Excessive blood pressure lowering below a certain critical level in elderly patients with long-standing hypertension may increase the risk of further cerebral hypoperfusion because of disrupted cerebral blood flow autoregulation. As a result, worsening of cognitive functions could occur, especially in cases with additional vascular risk factors. Five randomized, placebo-controlled trials have focused on the efficacy of antihypertensive treatments in preventing cognitive impairments in elderly patients without prior cerebrovascular disease. Four of them found no positive effects. We suggest that repeated neuropsychological assessments and ultrasonography for the evaluation of carotid atherosclerosis, as well as cerebral hemodynamics monitoring, could guide adjustment of antihypertensive therapy with the aim of decreasing the risk of cerebral hypoperfusion and preventing or slowing down cognitive decline in elderly hypertensives. Prospective studies are needed to confirm such a treatment strategy.

  14. Fission fragment mass and energy distributions as a function of incident neutron energy measured in a lead slowing-down spectrometer

    SciTech Connect

    Romano, C.; Danon, Y.; Block, R.; Thompson, J.; Blain, E.; Bond, E.

    2010-01-15

    A new method of measuring fission fragment mass and energy distributions as a function of incident neutron energy in the range from below 0.1 eV to 1 keV has been developed. The method involves placing a double-sided Frisch-gridded fission chamber in Rensselaer Polytechnic Institute's lead slowing-down spectrometer (LSDS). The high neutron flux of the LSDS allows for the measurement of the energy-dependent, neutron-induced fission cross sections simultaneously with the mass and kinetic energy of the fission fragments of various small samples. The samples may be isotopes that are not available in large quantities (submicrograms) or with small fission cross sections (microbarns). The fission chamber consists of two anodes shielded by Frisch grids on either side of a single cathode. The sample is located in the center of the cathode and is made by depositing small amounts of actinides on very thin films. The chamber was successfully tested and calibrated using 0.41 ± 0.04 ng of ²⁵²Cf and the resulting mass distributions were compared to those of previous work. As a proof of concept, the chamber was placed in the LSDS to measure the neutron-induced fission cross section and fragment mass and energy distributions of 25.3 ± 0.5 µg of ²³⁵U. Changes in the mass distributions as a function of incident neutron energy are evident and are examined using the multimodal fission mode model.
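
    The LSDS technique rests on the tight correlation between slowing-down time and mean neutron energy in a large lead pile. As a hedged reference relation (the constants are assembly-specific calibration quantities, not values reported in this record):

    $$ \bar{E}(t) \;\approx\; \frac{K}{(t + t_0)^{2}}, $$

    where t is the time elapsed since the pulse, t0 a small offset, and K a calibration constant, typically of order 0.17 MeV·µs² for lead assemblies; measuring fission events versus t therefore scans incident energy from below 0.1 eV up to the keV range, as exploited here.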

  15. Defects Slow Down Nonradiative Electron-Hole Recombination in TiS3 Nanoribbons: A Time-Domain Ab Initio Study.

    PubMed

    Wei, Yaqing; Zhou, Zhaohui; Long, Run

    2017-09-21

    Layered TiS3 materials hold appealing potential in photovoltaics and optoelectronics due to their excellent electronic and optical properties. Using time domain density functional theory combined with nonadiabatic (NA) molecular dynamics, we show that the electron-hole recombination in pristine TiS3 nanoribbons (NRs) occurs in tens of picoseconds and is over 10-fold faster than the experimental value. By performing an atomistic ab initio simulation with a sulfur vacancy, we demonstrate that a sulfur vacancy greatly reduces electron-hole recombination, achieving good agreement with experiment. Introduction of a sulfur vacancy increases the band gap slightly because the NR's highest occupied molecular orbital is lowered in energy. More importantly, the sulfur vacancy partially diminishes the electron and hole wave functions' overlap and reduces NA electron-phonon coupling, which competes successfully with the longer decoherence time, slowing down recombination. Our study suggests that a rational choice of defects can control nonradiative electron-hole recombination in TiS3 NRs and provides mechanistic principles for photovoltaic and optoelectronic device design.

  16. Decline of deep and bottom water ventilation and slowing down of anthropogenic carbon storage in the Weddell Sea, 1984-2011

    NASA Astrophysics Data System (ADS)

    Huhn, Oliver; Rhein, Monika; Hoppema, Mario; van Heuven, Steven

    2013-06-01

    We use a 27 year long time series of repeated transient tracer observations to investigate the evolution of the ventilation time scales and the related content of anthropogenic carbon (Cant) in deep and bottom water in the Weddell Sea. This time series consists of chlorofluorocarbon (CFC) observations from 1984 to 2008 together with first combined CFC and sulphur hexafluoride (SF6) measurements from 2010/2011 along the Prime Meridian in the Antarctic Ocean and across the Weddell Sea. Applying the Transit Time Distribution (TTD) method we find that all deep water masses in the Weddell Sea have been continually growing older and getting less ventilated during the last 27 years. The decline of the ventilation rate of Weddell Sea Bottom Water (WSBW) and Weddell Sea Deep Water (WSDW) along the Prime Meridian is in the order of 15-21%; the Warm Deep Water (WDW) ventilation rate declined much faster by 33%. About 88-94% of the age increase in WSBW near its source regions (1.8-2.4 years per year) is explained by the age increase of WDW (4.5 years per year). As a consequence of the aging, the Cant increase in the deep and bottom water formed in the Weddell Sea slowed down by 14-21% over the period of observations.
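
    For context, the TTD method named above commonly parameterizes the distribution of transit times with a one-dimensional inverse-Gaussian form (this is the standard parameterization, not this study's specific fits):

    $$ G(t) \;=\; \sqrt{\frac{\Gamma^{3}}{4\pi\,\Delta^{2}\,t^{3}}}\;\exp\!\left(-\,\frac{\Gamma\,(t-\Gamma)^{2}}{4\,\Delta^{2}\,t}\right), $$

    with mean age Γ and width Δ. An interior tracer concentration is the surface time history convolved with G(t), so the CFC and SF6 data constrain Γ, and "growing older and getting less ventilated" corresponds to Γ increasing between repeat surveys.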

  17. Expression of CD73 slows down migration of skin dendritic cells, affecting the sensitization phase of contact hypersensitivity reactions in mice.

    PubMed

    Neuberger, A; Ring, S; Silva-Vilches, C; Schrader, J; Enk, A; Mahnke, K

    2017-09-01

    Application of haptens to the skin induces release of immune-stimulatory ATP into the extracellular space. This "danger" signal can be converted to immunosuppressive adenosine (ADO) by the action of the ectonucleotidases CD39 and CD73, expressed by skin and immune cells. Thus, the expression and regulation of CD73 by skin-derived cells may have a crucial influence on the outcome of contact hypersensitivity (CHS) reactions. Our aim was to investigate the role of CD73 expression during 2,4,6-trinitrochlorobenzene (TNCB)-induced CHS reactions. Wild type (wt) and CD73-deficient mice were subjected to TNCB-induced CHS. In the different mouse strains the resulting ear swelling reaction was recorded along with a detailed phenotypic analysis of the skin-migrating subsets of dendritic cells (DC). In CD73-deficient animals the motility of DC was higher than in wt animals, and in particular after sensitization we found increased migration of Langerin(+) DC from skin to draining lymph nodes (LN). In the TNCB model this led to a stronger sensitization, as indicated by an increased frequency of interferon-γ-producing T cells in the LN and an increased ear thickness after challenge. CD73-derived ADO production slows down migration of Langerin(+) DC from skin to LN. This may be a crucial mechanism to avoid excessive immune reactions against haptens. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.

  18. Experimental assessment of the performance of a proposed lead slowing-down spectrometer at WNR/PSR (Weapons Neutron Research/Proton Storage Ring)

    SciTech Connect

    Moore, M.S.; Koehler, P.E.; Michaudon, A.; Schelberg, A.; Danon, Y.; Block, R.C.; Slovacek, R.E.; Hoff, R.W.; Lougheed, R.W.

    1990-01-01

    In November 1989, we carried out a measurement of the fission cross section of ²⁴⁷Cm, ²⁵⁰Cf, and ²⁵⁴Es on the Rensselaer Intense Neutron Source (RINS) at Rensselaer Polytechnic Institute (RPI). In July 1990, we carried out a second measurement, using the same fission chamber and electronics, in beam geometry at the Los Alamos Neutron Scattering Center (LANSCE) facility. Using the relative count rates observed in the two experiments, and the flux-enhancement factors determined by the RPI group for a lead slowing-down spectrometer compared to beam geometry, we can assess the performance of a spectrometer similar to RINS, driven by the Proton Storage Ring (PSR) at the Los Alamos National Laboratory. With such a spectrometer, we find that it is feasible to make measurements with samples of 1 ng for fission and 1 µg for capture, and of isotopes with half-lives of tens of minutes. It is important to note that, while a significant amount of information can be obtained from the low-resolution RINS measurement, a definitive determination of average properties, including the level density, requires that the resonance structure be resolved. 12 refs., 5 figs., 3 tabs.

  19. Significant change in the construction of a door to a room with slowed down neutron field by means of commonly used inexpensive protective materials.

    PubMed

    Konefał, Adam; Łaciak, Marcin; Dawidowska, Anna; Osewski, Wojciech

    2014-12-01

    A detailed analysis of the nuclear reactions occurring in the door materials is presented for a typical construction of an entrance door to a room with a slowed-down neutron field. Changes in the construction of the door were determined so as to effectively reduce the level of neutron and gamma radiation in the vicinity of the door in a room adjoining the neutron field room. Optimisation of the door construction was performed using Monte Carlo calculations (GEANT4). The construction proposed in this paper is based on commonly used, inexpensive protective materials such as borax (13.4 cm), lead (4 cm) and stainless steel (0.1 and 0.5 cm on the side of the neutron field room and of the adjoining room, respectively). The improved construction of the door, worked out in the presented studies, can be an effective protection against neutrons with energies up to 1 MeV. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
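
    Only the photon component of such a layered shield lends itself to a quick narrow-beam estimate; neutron transport through borax and steel genuinely requires Monte Carlo treatment, which is why the study uses GEANT4. A back-of-envelope sketch with assumed attenuation coefficients (order-of-magnitude values for roughly 1 MeV photons, not data from the paper):

    ```python
    import math

    # Narrow-beam gamma attenuation through the door's layered shield.
    # Linear attenuation coefficients (1/cm) below are assumed values;
    # neutron attenuation is NOT exponential in this simple sense.

    layers = [                 # (material, thickness cm, mu 1/cm), assumed
        ("borax", 13.4, 0.10),
        ("lead",   4.0, 0.80),
        ("steel",  0.6, 0.47),   # 0.1 cm + 0.5 cm plates combined
    ]

    transmission = 1.0
    for name, x, mu in layers:
        transmission *= math.exp(-mu * x)   # I/I0 = exp(-mu * x) per layer

    print(f"gamma transmission factor: {transmission:.2e}")
    ```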

  20. Measurement of Neutron-Induced Fission Cross Sections of ²²⁹Th and ²³¹Pa Using Linac-Driven Lead Slowing-Down Spectrometer

    SciTech Connect

    Kobayashi, Katsuhei; Yamamoto, Shuji; Lee, Samyol; Cho, Hyun-Je; Yamana, Hajimu; Moriyama, Hirotake; Fujita, Yoshiaki; Mitsugashira, Toshiaki

    2001-11-15

    Use is made of a back-to-back type of double fission chamber and an electron linear accelerator-driven lead slowing-down spectrometer to measure the neutron-induced fission cross sections of ²²⁹Th and ²³¹Pa below 10 keV relative to that of ²³⁵U. A measurement relative to the ¹⁰B(n,α) reaction is also made using a BF₃ counter at energies below 1 keV and normalized to the absolute value obtained by using the cross section of the ²³⁵U(n,f) reaction between 200 eV and 1 keV. The experimental data of the ²²⁹Th(n,f) reaction measured by Konakhovich et al. show higher cross-section values, especially at energies of 0.1 to 0.4 eV. The data by Gokhberg et al. seem to be lower than the current measurement above 6 keV. Although the evaluated data in JENDL-3.2 are in general agreement with the measurement, the evaluation is higher from 0.25 to 5 eV and lower above 10 eV. The ENDF/B-VI data evaluated above 10 eV are also lower. The current thermal neutron-induced fission cross section at 0.0253 eV is 32.4 ± 10.7 b, which is in good agreement with the results of Gindler et al., Mughabghab, and JENDL-3.2. The mean value of the ²³¹Pa(n,f) cross sections between 0.37 and 0.52 eV, as measured by Leonard and Odegaarden, is close to the current measurement. The evaluated data in ENDF/B-VI are lower below 0.15 eV and higher above ~30 eV. The ENDF/B-VI and JEF-2.2 evaluations are much higher above 1 keV. The JENDL-3.2 data are in general agreement with the measurement, although they are lower above ~100 eV.

  1. Semisupervised kernel matrix learning by kernel propagation.

    PubMed

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples, on which only a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves room for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML as a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the overall performance of SS-KML. The main idea of KP is to first learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) by solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.
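
    The three-stage flow of KP can be sketched compactly. Note the simplifications: the seed kernel below is built directly from labels instead of solving the paper's small-scale SDP, and the propagation step is a Nystrom-style extension through an assumed RBF base kernel, so this is a schematic of the pipeline rather than the authors' algorithm:

    ```python
    import numpy as np

    def rbf(A, B, gamma=0.5):
        """RBF base kernel between row-sample matrices."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))            # full sample set X
    lab = np.arange(10)                      # stage 1: supervised subset X(l)
    y = (X[lab, 0] > 0).astype(int)          # its class labels

    # Stage 2: seed kernel on X(l). An "ideal" label kernel stands in here
    # for the solution of the paper's small-scale SDP (an assumption).
    K_seed = (y[:, None] == y[None, :]).astype(float)

    # Stage 3: propagate the seed kernel to all of X through the base
    # kernel (Nystrom-style extension, a modeling choice of this sketch).
    K_ll = rbf(X[lab], X[lab])
    W = rbf(X, X[lab]) @ np.linalg.inv(K_ll + 1e-6 * np.eye(len(lab)))
    K_full = W @ K_seed @ W.T                # full-kernel matrix on X

    print("full kernel shape:", K_full.shape)
    print("symmetric:", np.allclose(K_full, K_full.T))
    ```

    The point of the construction survives the simplification: the expensive learning happens only on the small labeled block, and the full-sample kernel is obtained by propagation rather than by solving an SDP over all samples.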

  2. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning method (AKCL), which performs kernel competitive learning in a subspace obtained via sampling. We provide a solid theoretical analysis of why the proposed approximation works for kernel competitive learning and, furthermore, show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle to using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches.
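
    The sampling idea can be illustrated with a toy version of competitive learning in feature space, where cluster centers are confined to the span of a sampled dictionary; this is an illustration of the subspace trick, not the authors' AKCL/PAKCL algorithms:

    ```python
    import numpy as np

    def rbf(A, B, gamma=1.0):
        """RBF kernel matrix between row-sample matrices."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-2, 0.5, (200, 2)),
                   rng.normal(+2, 0.5, (200, 2))])

    m, k, eta = 40, 2, 0.1                        # dictionary size, clusters, step
    D = X[rng.choice(len(X), size=m, replace=False)]  # sampled dictionary
    K_dd = rbf(D, D)
    Alpha = rng.dirichlet(np.ones(m), size=k)     # center j = sum_i Alpha[j,i] phi(D[i])

    for x in X[rng.permutation(len(X))]:
        k_xd = rbf(x[None, :], D)[0]
        quad = np.einsum("ji,il,jl->j", Alpha, K_dd, Alpha)
        dist2 = 1.0 - 2.0 * Alpha @ k_xd + quad   # ||phi(x)-c_j||^2; k(x,x)=1
        w = int(np.argmin(dist2))                 # competition: winning center
        e = np.zeros(m)
        e[int(np.argmax(k_xd))] = 1.0             # phi(x) ~ nearest dictionary atom
        Alpha[w] = (1 - eta) * Alpha[w] + eta * e # move winner toward x

    quad = np.einsum("ji,il,jl->j", Alpha, K_dd, Alpha)
    labels = np.argmin(1.0 - 2.0 * rbf(X, D) @ Alpha.T + quad[None, :], axis=1)
    print("cluster sizes:", np.bincount(labels))
    ```

    Restricting centers to m sampled atoms replaces the n x n kernel matrix with an m x m block plus kernel evaluations against the dictionary, which is the source of the claimed complexity reduction.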

  3. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the kernel eigenvectors by importance measured in entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. They show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
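
    The KECA criterion that OKECA builds on is short enough to state in code: eigenpairs (λi, ei) of the kernel matrix are ranked by their entropy contribution λi(1ᵀei)² rather than by λi alone. A minimal sketch with an assumed RBF length-scale (OKECA's extra ICA-style rotation and the maximum-likelihood length-scale rule are not reproduced here):

    ```python
    import numpy as np

    def rbf_kernel(X, sigma):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

    K = rbf_kernel(X, sigma=2.0)            # sigma is an assumed choice
    lam, E = np.linalg.eigh(K)              # eigenvalues in ascending order

    # KECA criterion: entropy contribution of each eigenpair.
    ones = np.ones(len(X))
    contrib = lam * (E.T @ ones) ** 2       # lambda_i * (1^T e_i)^2

    top = np.argsort(contrib)[::-1][:2]     # keep the 2 most "entropic" pairs
    features = E[:, top] * np.sqrt(np.clip(lam[top], 0, None))
    print("kept eigenpairs (by entropy, not variance):", top)
    print("feature matrix shape:", features.shape)
    ```

    Note that the entropy ranking can select eigenpairs that plain kernel PCA, sorting by λi alone, would discard; that difference is the starting point of both KECA and OKECA.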

  4. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  5. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  6. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  7. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, collectively named KAPA here. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, which boosts performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computational complexity and performance. Several simulations illustrate its wide applicability.
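
    The baseline that KAPA extends, KLMS, fits in a few lines and shows the shared mechanics: the model is a growing kernel expansion, and each sample contributes one LMS-style coefficient. A minimal sketch with an assumed Gaussian kernel and step size:

    ```python
    import numpy as np

    # Kernel least-mean-square (KLMS): online nonlinear filtering where
    # the model is a growing expansion f(x) = sum_i a_i * k(c_i, x).

    def gauss(a, b, sigma=0.5):
        return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    eta = 0.2                              # assumed learning rate
    centers, coeffs = [], []

    def predict(x):
        return sum(a * gauss(c, x) for a, c in zip(coeffs, centers))

    # Learn y = sin(3x) from a noisy stream.
    for _ in range(500):
        x = rng.uniform(-1, 1, size=1)
        y = np.sin(3 * x[0]) + 0.05 * rng.normal()
        e = y - predict(x)                 # a-priori error
        centers.append(x.copy())           # store the input as a new center
        coeffs.append(eta * e)             # LMS-style coefficient update

    print("prediction:", predict(np.array([0.3])), "target:", np.sin(0.9))
    ```

    Roughly speaking, KAPA replaces the one-sample update with an affine-projection step over a window of recent centers, updating several coefficients jointly, which is what averages out the gradient noise mentioned above.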

  8. Multiple collaborative kernel tracking.

    PubMed

    Fan, Zhimin; Yang, Ming; Wu, Ying

    2007-07-01

    Those motion parameters that cannot be recovered from image measurements are unobservable in the visual dynamic system. This paper studies this important issue of singularity in the context of kernel-based tracking and presents a novel approach that is based on a motion field representation which employs redundant but sparsely correlated local motion parameters instead of compact but uncorrelated global ones. This approach makes it easy to design fully observable kernel-based motion estimators. This paper shows that these high-dimensional motion fields can be estimated efficiently by the collaboration among a set of simpler local kernel-based motion estimators, which makes the new approach very practical.

  9. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  10. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  11. The two isomers of HDTIC compounds from Astragali Radix slow down telomere shortening rate via attenuating oxidative stress and increasing DNA repair ability in human fetal lung diploid fibroblast cells.

    PubMed

    Wang, Peichang; Zhang, Zongyu; Sun, Ying; Liu, Xinwen; Tong, Tanjun

    2010-01-01

    4-Hydroxy-5-hydroxymethyl-[1,3]dioxolan-2,6'-spirane-5',6',7',8'-tetrahydro-indolizine-3'-carbaldehyde (HDTIC)-1 and HDTIC-2 are two isomers extracted from Astragalus membranaceus (Fisch) Bunge Var. mongholicus (Bge) Hsiao. Our previous study demonstrated that they can extend the lifespan of human fetal lung diploid fibroblasts (2BS). To investigate the mechanisms of the HDTIC-induced delay of replicative senescence, in this study we assessed the effects of these two compounds on the telomere shortening rate and DNA repair ability of 2BS cells. The telomere shortening rates of cells cultured with HDTIC-1 or HDTIC-2 were 31.5 and 41.1 bp per division, respectively, much less than that of the control cells (71.1 bp/PD). We also found that 2BS cells pretreated with HDTIC-1 or HDTIC-2 had a significant reduction in DNA damage after exposure to 200 µM H₂O₂ for 5 min. Moreover, the DNA damage induced by 100 µM H₂O₂ was significantly repaired after the damaged cells were further cultured with HDTIC for 1 h. These results suggest that HDTIC compounds slow down the telomere shortening rate of 2BS cells, mainly through the reduction of DNA damage and the improvement of DNA repair ability. These effects may in turn be responsible for the HDTIC-induced delay of replicative senescence.

  12. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the other half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that exploit kernel information. In addition, algorithms, logic, and code for accessing kernel information are discussed. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information are also discussed.

  13. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in the examination of process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.
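
    The technique described here is platform-specific: look up symbol addresses in the kernel image, then read structures out of kernel memory (/dev/kmem or an equivalent interface). As a portable modern analogue rather than the paper's method, the same kind of process table data can be pulled from Linux's /proc text interface; a minimal sketch following the proc(5) stat field layout:

    ```python
    import os

    # Minimal top-like listing from /proc (Linux). This replaces the
    # kernel-memory spelunking described above with the kernel's own
    # text interface; it is an analogue, not the paper's technique.

    def list_processes():
        rows = []
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/stat") as f:
                    fields = f.read().split()
                # Naive parse: a comm containing spaces would mis-split.
                comm = fields[1].strip("()")
                utime, stime = int(fields[13]), int(fields[14])  # clock ticks
                rss_pages = int(fields[23])
                rows.append((int(pid), comm, utime + stime, rss_pages))
            except (FileNotFoundError, ProcessLookupError):
                continue        # process exited while we were reading
        return sorted(rows, key=lambda r: r[2], reverse=True)

    for pid, comm, cpu, rss in list_processes()[:10]:
        print(f"{pid:>7} {comm:<20} cpu_ticks={cpu:<10} rss_pages={rss}")
    ```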

  14. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  15. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  16. Discrete beta dose kernel matrices for nuclides applied in targeted radionuclide therapy (TRT) calculated with MCNP5

    SciTech Connect

    Reiner, Dora; Blaickner, Matthias; Rattay, Frank

    2009-11-15

    radionuclides applied in TRT. In contrast to analytical dose point kernels, the discrete kernels avoid the problem of overestimation near the source and take into account energy depositions that occur beyond the range of the continuous-slowing-down approximation (CSDA range). Recalculation of the 1 x 1 x 1 mm³ kernels to other dose kernels with varying voxel dimensions, cubic or noncubic, is shown to be easily manageable and thereby provides a resolution-independent system of dose calculation.
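
    Kernels of this kind are applied by convolving them with a cumulated-activity map to obtain absorbed dose. A minimal sketch of that step (synthetic activity distribution and a toy isotropic kernel standing in for the MCNP5-derived matrices; units are indicative only):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Voxel-kernel dosimetry: absorbed dose = cumulated activity map
    # convolved with the dose kernel. Kernel values here are a synthetic
    # placeholder for the MCNP5-calculated beta dose kernel matrices.

    rng = np.random.default_rng(0)
    activity = np.zeros((40, 40, 40))                       # MBq s per voxel
    activity[18:22, 18:22, 18:22] = rng.uniform(1.0, 2.0, (4, 4, 4))

    # Toy 7x7x7 kernel: dose per voxel per unit cumulated activity,
    # falling off with distance from the source voxel.
    g = np.indices((7, 7, 7)) - 3
    r2 = (g ** 2).sum(axis=0).astype(float)
    kernel = 1e-4 * np.exp(-r2 / 4.0)                        # Gy / (MBq s)

    dose = fftconvolve(activity, kernel, mode="same")        # Gy
    print("peak dose (Gy):", dose.max())
    print("dose 10 voxels from the source:", dose[30, 20, 20])
    ```

    Rescaling such a kernel to other voxel sizes, as the record describes, amounts to re-binning the deposited energy onto the new grid before the convolution step.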

  17. [Demography: can growth be slowed down?].

    PubMed

    1990-01-01

    The UN Fund for Population Activities report on the status of world population in 1990 is particularly unsettling because it indicates that fertility is not declining as rapidly as had been predicted. The world population of some 5.3 billion is growing by 90-100 million per year. Six years ago the growth rate appeared to be declining everywhere except in Africa and some regions of South Asia. Hopes that the world population would stabilize at around 10.2 billion by the end of the 21st century now appear unrealistic. Some countries, such as the Philippines, India, and Morocco, which had some success in slowing growth in the 1960s and 70s, have seen a significant deceleration in the decline. Growth rates in several African countries are already 2.7% per year and increasing. It is projected that Africa's population will reach 1.581 billion by 2025. Already there are severe shortages of arable land in some overwhelmingly agricultural countries like Rwanda and Burundi, and malnutrition is widespread on the continent. Between 1979-81 and 1986-87, cereal production declined in 25 African countries out of 43 for which the Food and Agriculture Organization has data. The urban population of developing countries is increasing at 3.6%/year. It grew from 285 million in 1950 to 1.384 billion today and is projected at 4.050 billion in 2050. Provision of water, electricity, and sanitary services will be very difficult. From 1970-88 the number of urban households without potable water increased from 138 million to 215 million. It is not merely the quality of life that is menaced by constant population growth, but also the very future of the earth as a habitat, because of the degradation of soils and forests and resulting global warming. 6-7 million hectares of agricultural land are believed to be lost to erosion each year. Deforestation is a principal cause of soil erosion. Each year more than 11 million hectares of tropical forest and forested zones are stripped, in addition to some 4.4 million hectares selectively harvested for lumber. Deforestation contributes to global warming and to deterioration of the ozone layer. Consequences of global warming by the middle of the next century may include desertification of entire countries, raising of the level of the oceans, and submersion of certain countries. To avert demographic and ecologic disaster, the geographic and financial access of women in developing countries to contraception should be improved, and some neglected groups such as adolescents should be brought into family planning programs. The condition of women must be improved so that they have access to a source of status other than motherhood.

  18. Critical slowing down with bistable higher harmonics

    NASA Astrophysics Data System (ADS)

    Sharaby, Yasser A.; Hassan, S. S.; Joshi, A.

    2013-01-01

    The switching response in an optical bistable model of two-level atoms in a ring cavity is investigated outside the rotating wave approximation (RWA) in the high- and low-Q cavity cases. Analytical and numerical investigations of the non-autonomous model Bloch equations, up to the first Fourier harmonic, show that the switching time in response to a linear perturbation of the incident field at the critical points of the bistable curves is significantly affected by the atomic and cavity detuning parameters. The faster oscillatory behavior outside the RWA manifests itself in the additional ultra-low output (first harmonic) field component, which has a reversed bistable feature in both the high-Q and low-Q cases. Irregular oscillations with increased atomic detuning are shown only in the lower bistable branch of the first harmonic field. The irregularity of the oscillations is due to the interference of the oscillations of the higher frequency terms with the atomic dispersive polarization in the high-Q case, and with the Rabi oscillations in the low-Q case.

  19. Polygamy slows down population divergence in shorebirds

    USGS Publications Warehouse

    Jackson, Josephine D'Urban; dos Remedios, Natalie; Maher, Kathryn; Zefania, Sama; Haig, Susan M.; Oyler-McCance, Sara J.; Blomqvist, Donald; Burke, Terry; Bruford, Michael W.; Székely, Tamás; Küpper, Clemens

    2017-01-01

    Sexual selection may act as a promotor of speciation since divergent mate choice and competition for mates can rapidly lead to reproductive isolation. Alternatively, sexual selection may also retard speciation since polygamous individuals can access additional mates by increased breeding dispersal. High breeding dispersal should hence increase gene flow and reduce diversification in polygamous species. Here, we test how polygamy predicts diversification in shorebirds using genetic differentiation and subspecies richness as proxies for population divergence. Examining microsatellite data from 79 populations in 10 plover species (Genus: Charadrius) we found that polygamous species display significantly less genetic structure and weaker isolation-by-distance effects than monogamous species. Consistent with this result, a comparative analysis including 136 shorebird species showed significantly fewer subspecies for polygamous than for monogamous species. By contrast, migratory behavior neither predicted genetic differentiation nor subspecies richness. Taken together, our results suggest that dispersal associated with polygamy may facilitate gene flow and limit population divergence. Therefore, intense sexual selection, as occurs in polygamous species, may act as a brake rather than an engine of speciation in shorebirds. We discuss alternative explanations for these results and call for further studies to understand the relationships between sexual selection, dispersal, and diversification.

  20. Polygamy slows down population divergence in shorebirds.

    PubMed

    D'Urban Jackson, Josephine; Dos Remedios, Natalie; Maher, Kathryn H; Zefania, Sama; Haig, Susan; Oyler-McCance, Sara; Blomqvist, Donald; Burke, Terry; Bruford, Michael W; Székely, Tamás; Küpper, Clemens

    2017-02-24

    Sexual selection may act as a promotor of speciation since divergent mate choice and competition for mates can rapidly lead to reproductive isolation. Alternatively, sexual selection may also retard speciation since polygamous individuals can access additional mates by increased breeding dispersal. High breeding dispersal should hence increase gene flow and reduce diversification in polygamous species. Here we test how polygamy predicts diversification in shorebirds using genetic differentiation and subspecies richness as proxies for population divergence. Examining microsatellite data from 79 populations in ten plover species (Genus: Charadrius) we found that polygamous species display significantly less genetic structure and weaker isolation-by-distance effects than monogamous species. Consistent with this result, a comparative analysis including 136 shorebird species showed significantly fewer subspecies for polygamous than for monogamous species. By contrast, migratory behaviour neither predicted genetic differentiation nor subspecies richness. Taken together, our results suggest that dispersal associated with polygamy may facilitate gene flow and limit population divergence. Therefore, intense sexual selection, as occurs in polygamous species, may act as a brake rather than an engine of speciation in shorebirds. We discuss alternative explanations for these results and call for further studies to understand the relationships between sexual selection, dispersal and diversification. This article is protected by copyright. All rights reserved.

  1. Motivating motorists to voluntarily slow down.

    PubMed

    Spiegel, Rainer; Kalla, Roger; Spiegel, Frank; Brandt, Thomas; Strupp, Michael

    2010-01-01

    It is estimated that by 2020 road accidents will rise from ninth to third place in the worldwide ranking of the burden of disease. Traffic calming can reduce road accidents; however, many motorists do not adhere to speed limits. We report on an intervention that can influence many motorists at dangerous sites, where accidents are likely to occur (e.g., near playgrounds, schools). The intervention is a speed-displaying device mounted next to the road (visible to both motorists and the public). Our findings indicate that the device is associated with a significant speed reduction relative to the control condition.

  2. Not slowing down | Center for Cancer Research

    Cancer.gov

    Nine-and-a-half-year-old Travis Carpenter gets a lot of speeding tickets. (He stresses that "and-a-half" part, too). These speeding tickets don't come from a law enforcement officer but from Jesse, one of his nurses at the NIH Clinical Center. Travis uses a power chair that he's adorned with racing stickers, and his speeding tickets come from him zooming down the Clinical Center's hallways, dodging the steady traffic of doctors, nurses, patients and families. He loves all things racing, NASCAR and pit crews. Neurofibromatosis type 1 isn't slowing him down.

  3. Time for bacteria to slow down.

    PubMed

    Armitage, Judith P; Berry, Richard M

    2010-04-02

    The speed of the bacterial flagellar motor is thought to be regulated by structural changes in the motor. Two new studies, Boehm et al. (2010) in this issue and Paul et al. (2010) in Molecular Cell, now show that cyclic di-GMP also regulates flagellar motor speed through interactions between the cyclic di-GMP binding protein YcgR and the motor proteins.

  4. Words can slow down category learning.

    PubMed

    Brojde, Chandra L; Porter, Chelsea; Colunga, Eliana

    2011-08-01

    Words have been shown to influence many cognitive tasks, including category learning. Most demonstrations of these effects have focused on instances in which words facilitate performance. One possibility is that words augment representations, predicting an across-the-board benefit of words during category learning. We propose instead that words shift attention to dimensions that have been historically predictive in similar contexts. Under this account, there should be cases in which words are detrimental to performance. The results from two experiments show that words impair learning of object categories under some conditions. Experiment 1 shows that words hurt performance when learning to categorize by texture. Experiment 2 shows that words also hurt when learning to categorize by brightness, by leading participants to selectively attend to shape when both shape and hue could be used to correctly categorize stimuli. We suggest that both the positive and negative effects of words have developmental origins in the history of word usage while learning categories.

  5. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2015-12-22

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels. This makes the choice of kernels less crucial. After parameter learning, the kernel saliencies of the irrelevant kernels go to zero. Thus, the choice of kernels is less crucial and it is easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  6. Kernel Optimization in Discriminant Analysis

    PubMed Central

    You, Di; Hamsici, Onur C.; Martinez, Aleix M.

    2011-01-01

    Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results using a large number of databases and classifiers demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072

  7. Sudden slowing down of charge carrier dynamics at the Mott metal-insulator transition in kappa-(D8-BEDT-TTF)2Cu[N(CN)2]Br.

    SciTech Connect

    Brandenburg, J.; Muller, J.; Schlueter, J. A.

    2012-02-01

    We investigate the dynamics of correlated charge carriers in the vicinity of the Mott metal-insulator (MI) transition in the quasi-two-dimensional organic charge-transfer salt kappa-(D8-BEDT-TTF)2Cu[N(CN)2]Br by means of fluctuation (noise) spectroscopy. The observed 1/f-type fluctuations are quantitatively very well described by a phenomenological model based on the concept of non-exponential kinetics. The main result is a correlation-induced enhancement of the fluctuations accompanied by a substantial shift of spectral weight to low frequencies in the vicinity of the Mott critical endpoint. This sudden slowing down of the electron dynamics, observed here in a pure Mott system, may be a universal feature of MI transitions. Our findings are compatible with an electronic phase separation in the critical region of the phase diagram and offer an explanation for the not yet understood absence of effective mass enhancement when crossing the Mott transition.

  8. Replacement of leucine-93 by alanine or threonine slows down the decay of the N and O intermediates in the photocycle of bacteriorhodopsin: Implications for proton uptake and 13-cis-retinal → all-trans-retinal reisomerization

    SciTech Connect

    Subramaniam, S.; Greenhalgh, D.A.; Rath, P.; Rothschild, K.J.; Khorana, H.G.

    1991-08-01

    The authors report that the replacement of Leu-93 in bacteriorhodopsin by Ala (L93A) or Thr (L93T) slows down the photocycle by approximately 100-fold relative to wild-type bacteriorhodopsin. Time-resolved visible absorption spectroscopy and resonance Raman experiments, respectively, show the presence of long-lived O-like and N-like intermediates in the photocycles of these mutants, implying an equilibrium between the N and O intermediates. The L93A and L93T mutants exhibit normal proton pumping under continuous illumination, suggesting that the decay of the N and/or O intermediate, and consequently proton translocation, can be accelerated by the absorption of a second photon. Since the 13-cis → all-trans reisomerization of retinal is completed during the decay of the N and O intermediates, the authors conclude that the interaction of Leu-93 with retinal is important in this phase of the photocycle. This conclusion is supported by a recent structural model of bacteriorhodopsin that places Leu-93 near the C-13 methyl group of retinal.

  9. Analytical continuous slowing down model for nuclear reaction cross-section measurements by exploitation of stopping for projectile energy scanning and results for 13C(3He,α)12C and 13C(3He,p)15N

    NASA Astrophysics Data System (ADS)

    Möller, S.

    2017-03-01

    Ion beam analysis is a set of precise, calibration-free, and non-destructive methods for determining surface-near concentrations of potentially all elements and isotopes in a single measurement. To determine concentrations, the reaction cross-section of the projectile with the target has to be known, in general at the primary beam energy and all energies below it. To reduce the experimental effort of cross-section measurements, a new method is presented here. The method is based on the projectile energy reduction when passing through the matter of thick targets. The continuous slowing down approximation is used to determine cross-sections from a thick target at projectile energies below the primary energy by backward calculation of the measured product spectra. Results for 12C(3He,p)14N below 4.5 MeV are in rough agreement with literature data and reproduce the measured spectra. New data for reactions of 3He with 13C are acquired using the new technique. The applied approximations and further applications are discussed.
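    The backward-calculation step rests on the continuous slowing down approximation: integrating the stopping power gives the projectile energy at each depth in the thick target, which ties each measured product energy to a reaction energy. A minimal sketch, assuming a user-supplied (hypothetical) stopping_power(E) function in MeV/um:

      import numpy as np

      def energy_vs_depth(e0_mev, stopping_power, dx_um=0.01, e_min_mev=0.05):
          """Step the projectile energy through the target by integrating dE/dx.
          stopping_power(E) -> MeV/um is a user-supplied, hypothetical function
          for the projectile/target pair. Returns depth (um) and energy (MeV)."""
          depths, energies = [0.0], [e0_mev]
          e, x = e0_mev, 0.0
          while e > e_min_mev:
              e -= stopping_power(e) * dx_um       # energy lost in one slab
              x += dx_um
              depths.append(x)
              energies.append(max(e, 0.0))
          return np.array(depths), np.array(energies)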

  10. Kernel machine SNP-set testing under multiple candidate kernels.

    PubMed

    Wu, Michael C; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M; Harmon, Quaker E; Lin, Xinyi; Engel, Stephanie M; Molldrem, Jeffrey J; Armistead, Paul M

    2013-04-01

    Joint testing for the cumulative effect of multiple single-nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM)-testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori because this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest P-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power vs. using the best candidate kernel.
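    The composite-kernel strategy can be illustrated with a small sketch: given Gram matrices for several candidate kernels, even an unweighted average is again a valid kernel for the KM test. The weights below are illustrative assumptions, not the paper's estimated quantities, and the perturbation-based significance procedure is omitted:

      import numpy as np

      def composite_kernel(gram_matrices, weights=None):
          """Average candidate n x n Gram matrices into one composite kernel.
          Equal weights are an assumption here, not the paper's estimates."""
          Ks = [np.asarray(K, dtype=float) for K in gram_matrices]
          if weights is None:
              weights = np.full(len(Ks), 1.0 / len(Ks))
          return sum(w * K for w, K in zip(weights, Ks))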

  11. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  12. Kernel phase and kernel amplitude in Fizeau imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-12-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.

  13. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including pieces and particles, regardless of whether edible or inedible, contained in any lot of almonds...

  14. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  15. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  16. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is, on one hand, a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces. PMID:20552013

  17. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle...

  18. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  19. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  1. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All of the medical device's digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, its license for educational use, and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluating the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize events, resulting in an electrocardiograph running on a single CPU under the RTOS.

  2. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  3. Travel-Time and Amplitude Sensitivity Kernels

    DTIC Science & Technology

    2011-09-01

    ...amplitude sensitivity kernels shown in the lower panels concentrate about the corresponding eigenrays. Each 3D kernel exhibits a broad negative... kernels in 2 and 3 dimensions have similar shapes to the corresponding travel-time sensitivity kernels (TSKs), centered about the respective eigenrays.

  4. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  5. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  7. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  8. Polar lipids from oat kernels

    USDA-ARS?s Scientific Manuscript database

    Oat (Avena sativa L.) kernels appear to contain much higher polar lipid concentrations than other plant tissues. We have extracted, identified, and quantified polar lipids from 18 oat genotypes grown in replicated plots in three environments in order to determine genotypic or environmental variation...

  9. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for the restoration of electro-optical image data using an adaptive Wiener filter. The procedure begins by constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
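    For orientation, a minimal frequency-domain sketch of the classical (non-adaptive) Wiener restoration step, assuming the optical transfer function is already sampled on the image grid and the noise-to-signal ratio is a scalar; the patent's adaptive kernel construction is not reproduced here:

      import numpy as np

      def wiener_restore(image, otf, nsr=0.01):
          """Classical Wiener deconvolution in the frequency domain.
          otf: optical transfer function sampled on the image grid;
          nsr: assumed scalar noise-to-signal power ratio."""
          G = np.fft.fft2(image)
          W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)  # Wiener restoration filter
          return np.real(np.fft.ifft2(W * G))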

  10. Diffusion Kernels on Statistical Manifolds

    DTIC Science & Technology

    2004-01-16

    ...construction of information diffusion kernels, since these concepts are not widely used in machine learning. We refer to Spivak (1979) for details and further...

  11. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain over several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database, and the Banana dataset validate the effectiveness of the proposed method.

  12. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced-dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick, in contrast to the kernel trick. With this technique, the applicability of kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
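    The construction is concrete enough to sketch directly: eigendecompose the Gram matrix and keep the components above numerical tolerance. The helper below is an illustrative reading of that recipe:

      import numpy as np

      def nonlinear_projection(K, tol=1e-10):
          """Explicitly map training data into a reduced kernel space.
          K: n x n positive semi-definite Gram matrix. Returns Y (n x r)
          with Y @ Y.T equal to K up to numerical error."""
          w, V = np.linalg.eigh(K)
          keep = w > tol                 # effective dimensionality r <= n
          return V[:, keep] * np.sqrt(w[keep])

    A new point x with kernel vector k = [k(x, x_1), ..., k(x, x_n)] against the training set then maps to k @ V[:, keep] / np.sqrt(w[keep]).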

  13. QTL mapping of 1000-kernel weight, kernel length, and kernel width in bread wheat (Triticum aestivum L.).

    PubMed

    Ramya, P; Chaubal, A; Kulkarni, K; Gupta, L; Kadoo, N; Dhaliwal, H S; Chhuneja, P; Lagu, M; Gupta, V

    2010-01-01

    Kernel size and morphology influence the market value and milling yield of bread wheat (Triticum aestivum L.). The objective of this study was to identify quantitative trait loci (QTLs) controlling kernel traits in hexaploid wheat. We recorded 1000-kernel weight, kernel length, and kernel width for 185 recombinant inbred lines from the cross Rye Selection 111 × Chinese Spring grown in 2 agro-climatic regions in India for many years. Composite interval mapping (CIM) was employed for QTL detection using a linkage map with 169 simple sequence repeat (SSR) markers. For 1000-kernel weight, 10 QTLs were identified on wheat chromosomes 1A, 1D, 2B, 2D, 4B, 5B, and 6B, whereas 6 QTLs for kernel length were detected on 1A, 2B, 2D, 5A, 5B and 5D. Chromosomes 1D, 2B, 2D, 4B, 5B and 5D had 9 QTLs for kernel width. Chromosomal regions with QTLs detected consistently for multiple year-location combinations were identified for each trait. Pleiotropic QTLs were found on chromosomes 2B, 2D, 4B, and 5B. The identified genomic regions controlling wheat kernel size and shape can be targeted during further studies for their genetic dissection.

  14. Filters, reproducing kernel, and adaptive meshfree method

    NASA Astrophysics Data System (ADS)

    You, Y.; Chen, J.-S.; Lu, H.

    Reproducing kernel, with its intrinsic feature of moving averaging, can be utilized as a low-pass filter with scale decomposition capability. The discrete convolution of two nth order reproducing kernels with arbitrary support size in each kernel results in a filtered reproducing kernel function that has the same reproducing order. This property is utilized to separate the numerical solution into an unfiltered lower order portion and a filtered higher order portion. As such, the corresponding high-pass filter of this reproducing kernel filter can be used to identify the locations of high gradient, and consequently serves as an operator for error indication in meshfree analysis. In conjunction with the naturally conforming property of the reproducing kernel approximation, a meshfree adaptivity method is also proposed.

  15. Image texture analysis of crushed wheat kernels

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels from 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize the texture, or spatial distribution of gray levels, of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed depending on the class, hardness, and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.

  16. Several new kernel estimators for population abundance

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2017-04-01

    The parameter f(0) is crucial in line transect sampling, which is regularly used for computing population abundance in wildlife studies. The usual kernel estimator of f(0) has a high negative bias. Our study proposes several new estimators which are shown to be more efficient than the usual kernel estimator. A simulation technique is adopted to compare the performance of the proposed estimators with the classical kernel estimator. An application of the new estimators to a real data set is discussed.

  17. Diffusion Map Kernel Analysis for Target Classification

    DTIC Science & Technology

    2010-06-01

    Gaussian and polynomial kernels are most familiar from support vector machines; the Laplacian and Rayleigh kernels were introduced previously in [7]. The data sets include the Cleveland Heart Disease Data Set (Clev. Heart), the Wisconsin Breast Cancer Original set (Wisc. BC), and Sonar2 from the Shallow Water Acoustic Toolset [9]. ...the Rayleigh kernel captures the embedding with an average PC of 77.3% and a slightly higher PFA than the Gaussian kernel. For the Wisc. BC...

  18. Kernel earth mover's distance for EEG classification.

    PubMed

    Daliri, Mohammad Reza

    2013-07-01

    Here, we propose a new kernel approach based on the earth mover's distance (EMD) for electroencephalography (EEG) signal classification. The EEG time series are first transformed into histograms in this approach. The distance between these histograms is then computed using the EMD in a pair-wise manner. We bring the distances into a kernel form called kernel EMD. The support vector classifier can then be used for the classification of EEG signals. The experimental results on the real EEG data show that the new kernel method is very effective, and can classify the data with higher accuracy than traditional methods.
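    A small sketch of the construction, using SciPy's one-dimensional Wasserstein distance as the EMD between histograms. The exponential form and the bandwidth gamma are illustrative assumptions, and distance-substitution kernels of this kind are not guaranteed to be positive definite in general:

      import numpy as np
      from scipy.stats import wasserstein_distance

      def kernel_emd(histograms, bin_centers, gamma=1.0):
          """Gram matrix K[i, j] = exp(-gamma * EMD(h_i, h_j)) for 1D histograms."""
          n = len(histograms)
          K = np.zeros((n, n))
          for i in range(n):
              for j in range(i, n):
                  d = wasserstein_distance(bin_centers, bin_centers,
                                           histograms[i], histograms[j])
                  K[i, j] = K[j, i] = np.exp(-gamma * d)
          return K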

  19. Modeling an Operating System Kernel

    NASA Astrophysics Data System (ADS)

    Börger, Egon; Craig, Iain

    We define a high-level model of an operating system (OS) kernel which can be refined to concrete systems in various ways, reflecting alternative design decisions. We aim at an exposition practitioners and lecturers can use effectively to communicate (document and teach) design ideas for operating system functionality at a conceptual level. The operational and rigorous nature of our definition provides a basis for the practitioner to validate and verify precisely stated system properties of interest, thus helping to make OS code reliable. As a by-product we introduce a novel combination of parallel and interruptable sequential Abstract State Machine steps.

  20. Molecular Hydrodynamics from Memory Kernels

    NASA Astrophysics Data System (ADS)

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^(-3/2). We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius.

  1. Comparison between an event-by-event Monte Carlo code, NOREC, and ETRAN for electron scaled point kernels between 20 keV and 1 MeV.

    PubMed

    Cho, Sang Hyun; Vassiliev, Oleg N; Horton, John L

    2007-03-01

    An event-by-event Monte Carlo code called NOREC, a substantially improved version of the Oak Ridge electron transport code (OREC), was released in 2003 after a number of modifications to OREC. In spite of some earlier work, the characteristics of the code have not been clearly shown so far, especially over a wide range of electron energies. Therefore, NOREC was used in this study to generate one of the popular dosimetric quantities, the scaled point kernel, for a number of electron energies between 0.02 and 1.0 MeV. Calculated kernels were compared with the best-known published kernels, based on a condensed-history Monte Carlo code, ETRAN, to show not only the general agreement between the codes over the electron energy range considered but also possible differences between an event-by-event code and a condensed-history code. There was general agreement between the kernels within about 5% up to 0.7 r/r0 for 100 keV and 1 MeV electrons. Note that r/r0 denotes the scaled distance, where r is the radial distance from the source to the dose point and r0 is the continuous slowing down approximation (CSDA) range of a mono-energetic electron. For the same range of scaled distances, the discrepancies for 20 and 500 keV electrons were up to 6 and 12%, respectively; the disagreement was more pronounced for 500 keV electrons than for 20 keV electrons. The degree of disagreement for 500 keV electrons decreased when NOREC results were compared with published EGS4/PRESTA results, producing agreement similar to that at the other electron energies.

  2. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  3. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  4. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  5. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  6. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  7. Kernel current source density method.

    PubMed

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  8. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

    The detection of high-contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure-phase-like information can be extracted from any direct image, even one taken with a redundant aperture. These new phase-noise-immune observable quantities, called kernel phases, are determined a priori from knowledge of the geometry of the pupil only. Re-analysis of archival data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method, as it clearly detects and locates, with milliarcsecond precision, a known companion to a star at an angular separation less than the diffraction limit.
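    In the linear (high-Strehl) regime, pupil-plane phase errors enter the measured Fourier phases through a transfer matrix A determined by the pupil model, and kernel phases are projections onto the left null space of A. A minimal sketch, with A assumed to be given:

      import numpy as np
      from scipy.linalg import null_space

      def kernel_phases(A, measured_phases):
          """A: phase transfer matrix from the pupil model. Rows of K span
          the left null space of A, so K @ A = 0 and pupil-plane phase
          errors cancel in the projected observables."""
          K = null_space(A.T).T          # left null space of A
          return K @ measured_phases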

  9. Bergman Kernel from Path Integral

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Klevtsov, Semyon

    2010-01-01

    We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory, and generalize it to supersymmetric quantum mechanics. One physics interpretation of this result is as an expansion of the projector of wave functions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is relevant for the quantum Hall effect in curved space, and for its higher dimensional generalizations. Other applications include the theory of coherent states, the study of balanced metrics, noncommutative field theory, and a conjecture on metrics in black hole backgrounds discussed in [24]. We give a short overview of these various topics. From a conceptual point of view, this expansion is noteworthy as it is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey et al short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry.

  10. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
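    Silverman's rule of thumb, which the proposed method builds on, sets the bandwidth for a Gaussian kernel from the sample size and spread; a minimal sketch:

      import numpy as np

      def silverman_bandwidth(x):
          """Silverman's rule of thumb for a Gaussian kernel:
          h = 0.9 * min(std, IQR / 1.34) * n**(-1/5)."""
          x = np.asarray(x, dtype=float)
          iqr = np.subtract(*np.percentile(x, [75, 25]))
          scale = min(x.std(ddof=1), iqr / 1.34)
          return 0.9 * scale * x.size ** (-0.2)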

  11. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256
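    One of the two approximations explored, random Fourier features, is easy to sketch for the Gaussian (RBF) kernel; a linear ranker can then be trained on the explicit features. The feature count and bandwidth below are illustrative:

      import numpy as np

      def random_fourier_features(X, n_features=256, sigma=1.0, seed=0):
          """Map X (n x d) so that z(x) . z(y) approximates the RBF kernel
          exp(-||x - y||**2 / (2 * sigma**2))."""
          rng = np.random.default_rng(seed)
          W = rng.normal(0.0, 1.0 / sigma, size=(X.shape[1], n_features))
          b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
          return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)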

  12. Kernel method for corrections to scaling.

    PubMed

    Harada, Kenji

    2015-07-01

    Scaling analysis, in which one infers scaling exponents and a scaling function in a scaling law from given data, is a powerful tool for determining universal properties of critical phenomena in many fields of science. However, there are corrections to scaling in many cases, and the inference problem then becomes ill-posed due to an uncontrollable irrelevant scaling variable. We propose a new kernel method based on Gaussian process regression to fix this problem in general. We test the performance of the new kernel method on some example cases. In all cases, as the precision of the example data increases, the inference results of the new kernel method correctly converge. Because the new kernel method places no restriction on the scaling function, even in the presence of corrections to scaling, unlike the conventional method, it can be widely applied to real data in critical phenomena.
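    As a generic illustration of the underlying machinery (not the paper's specific estimator for scaling functions), Gaussian process regression with an RBF-plus-noise kernel looks like this in scikit-learn:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Toy data standing in for noisy measurements of an unknown function.
      rng = np.random.default_rng(0)
      X = rng.uniform(-2.0, 2.0, size=(40, 1))
      y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=40)

      gp = GaussianProcessRegressor(kernel=1.0 * RBF() + WhiteKernel())
      gp.fit(X, y)
      y_mean, y_std = gp.predict(np.linspace(-2, 2, 100)[:, None], return_std=True)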

  13. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  14. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.

  15. Multivariate Kernel Partition Process Mixtures

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  16. Dose point kernel for boron-11 decay and the cellular S values in boron neutron capture therapy.

    PubMed

    Ma, Yunzhi; Geng, JinPeng; Gao, Song; Bao, Shanglian

    2006-12-01

    The study of the radiobiology of boron neutron capture therapy is based on cellular-level dosimetry of boron-10's thermal neutron capture reaction 10B(n,α)7Li, in which one 1.47 MeV helium-4 ion and one 0.84 MeV lithium-7 ion are emitted. Because of the chemical preference of boron-10 carrier molecules, the dose is heterogeneously distributed in cells. In the present work, the (scaled) dose point kernel of boron-11 decay, called 11B-DPK, was calculated with the GEANT4 Monte Carlo simulation code. The DPK curve drops suddenly at a radius of 4.26 μm, the continuous slowing down approximation (CSDA) range of a lithium-7 ion. Then, after a slight ascent, the curve decreases to near zero as the radius goes beyond 8.20 μm, the CSDA range of a 1.47 MeV helium-4 ion. With the DPK data, S values for nuclei and cells with boron-10 on the cell surface are calculated for different combinations of cell and nucleus sizes. The S value for a cell radius of 10 μm and a nucleus radius of 5 μm is slightly larger than the value published by Tung et al. [Appl. Radiat. Isot. 61, 739-743 (2004)]. This result is potentially more accurate than the published value since it includes the contribution of the lithium-7 ion as well as the alpha particle.

  17. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real-world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitalization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal Polynomials of Several Variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that, under some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide a vehicle for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  18. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive-definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allow AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernels described here outperform tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  1. Approximating W projection as a separable kernel

    NASA Astrophysics Data System (ADS)

    Merry, Bruce

    2016-02-01

    W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.
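    The quality of a separable approximation can be checked with a singular value decomposition, which also yields the best rank-1 (separable) factorization; a generic sketch, not the paper's specific construction:

      import numpy as np

      def best_separable(kernel2d):
          """Best rank-1 (separable) approximation u v^T of a 2D kernel,
          by the Eckart-Young theorem; works for complex kernels too."""
          U, s, Vh = np.linalg.svd(kernel2d)
          u = U[:, 0] * np.sqrt(s[0])
          v = Vh[0] * np.sqrt(s[0])
          rel_err = np.sqrt((s[1:] ** 2).sum() / (s ** 2).sum())
          return u, v, rel_err

    np.outer(u, v) reconstructs the approximation, and convolving with u and v one axis at a time replaces a full 2D convolution with two 1D passes.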

  2. Invariance kernel of biological regulatory networks.

    PubMed

    Ahmad, Jamil; Roux, Olivier

    2010-01-01

    The analysis of a Biological Regulatory Network (BRN) leads to computing the set of possible behaviours of the biological components. These behaviours are seen as trajectories, and we are specifically interested in cyclic trajectories since they stand for stability. The set of cycles is given by the so-called invariance kernel of a BRN. This paper presents a method for deriving symbolic formulae for the length, volume, and diameter of a cylindrical invariance kernel. These formulae are expressed in terms of delay-parameter expressions and give the existence of an invariance kernel and a hint of the number of cyclic trajectories.

  3. The Kernel Energy Method: Construction of 3 & 4 tuple Kernels from a List of Double Kernel Interactions

    PubMed Central

    Huang, Lulu; Massa, Lou

    2010-01-01

    The Kernel Energy Method (KEM) provides a way to calculate the ab-initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by use of a list of double kernel interactions, a significant additional reduction of computational effort may be achieved while still retaining ab-initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher-order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy and a further significant reduction in computational effort result. A KEM molecular energy calculation based upon the HF/STO3G chemical model is applied to the protein insulin as an illustration. PMID:21243065

  4. Kernel map compression for speeding the execution of kernel-based methods.

    PubMed

    Arif, Omar; Vela, Patricio A

    2011-06-01

    The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss.

  5. Relationship between cyanogenic compounds in kernels, leaves, and roots of sweet and bitter kernelled almonds.

    PubMed

    Dicenta, F; Martínez-Gómez, P; Grané, N; Martín, M L; León, A; Cánovas, J A; Berenguer, V

    2002-03-27

    The relationship between the levels of cyanogenic compounds (amygdalin and prunasin) in kernels, leaves, and roots of 5 sweet-, 5 slightly bitter-, and 5 bitter-kernelled almond trees was determined. Variability was observed among the genotypes for these compounds. Prunasin was found only in the vegetative part (roots and leaves) for all genotypes tested. Amygdalin was detected only in the kernels, mainly in bitter genotypes. In general, bitter-kernelled genotypes had higher levels of prunasin in their roots than nonbitter ones, but the correlation between cyanogenic compounds in the different parts of plants was not high. While prunasin seems to be present in most almond roots (with a variable concentration) only bitter-kernelled genotypes are able to transform it into amygdalin in the kernel. Breeding for prunasin-based resistance to the buprestid beetle Capnodis tenebrionis L. is discussed.

  6. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...

  7. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  8. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a lot...

  9. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  10. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  11. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a lot...

  12. Slowing Down Fast Mapping: Redefining the Dynamics of Word Learning.

    PubMed

    Kucker, Sarah C; McMurray, Bob; Samuelson, Larissa K

    2015-06-01

    In this article, we review literature on word learning and propose a theoretical account of how lexical knowledge and word use emerge and develop over time. We contend that the developing lexical system is built on processes that support children's in-the-moment word usage interacting with processes that create long-term learning. We argue for a new characterization of word learning in which simple mechanisms like association and competition, and the interaction between the two, guide children's selection of referents and word use in the moment. This in turn strengthens and refines the network of relationships in the lexicon, improving referent selection and use in future encounters with words. By integrating in-the-moment word use with long-term learning through simple domain-general mechanisms, this account highlights the dynamic nature of word learning and creates a broader framework for understanding language and cognitive development more generally.

  13. Marine conservation: The race to fish slows down

    NASA Astrophysics Data System (ADS)

    Rosenberg, Andrew A.

    2017-04-01

    A fishery can allow participants to fish as hard as they can until its quota is reached, or allocate quota shares that can be caught at any time. A comparison of the systems in action reveals that shares slow the race to fish. See Letter p.223

  14. Does time really slow down during a frightening event?

    PubMed

    Stetson, Chess; Fiesta, Matthew P; Eagleman, David M

    2007-12-12

    Observers commonly report that time seems to have moved in slow motion during a life-threatening event. It is unknown whether this is a function of increased time resolution during the event, or instead an illusion of remembering an emotionally salient event. Using a hand-held device to measure speed of visual perception, participants experienced free fall for 31 m before landing safely in a net. We found no evidence of increased temporal resolution, in apparent conflict with the fact that participants retrospectively estimated their own fall to last 36% longer than others' falls. The duration dilation during a frightening event, and the lack of concomitant increase in temporal resolution, indicate that subjective time is not a single entity that speeds or slows, but instead is composed of separable subcomponents. Our findings suggest that time-slowing is a function of recollection, not perception: a richer encoding of memory may cause a salient event to appear, retrospectively, as though it lasted longer.

  15. Vitamin E slows down the progression of osteoarthritis

    PubMed Central

    Li, Xi; Dong, Zhongli; Zhang, Fuhou; Dong, Junjie; Zhang, Yuan

    2016-01-01

    Osteoarthritis is a chronic degenerative joint disorder with the characteristics of articular cartilage destruction, subchondral bone alterations and synovitis. Clinical signs and symptoms of osteoarthritis include pain, stiffness, restricted motion and crepitus. It is the major cause of joint dysfunction in developed nations and has enormous social and economic consequences. Current treatments focus on symptomatic relief, however, they lack efficacy in controlling the progression of this disease, which is a leading cause of disability. Vitamin E is safe to use and may delay the progression of osteoarthritis by acting on several aspects of the disease. In this review, how vitamin E may promote the maintenance of skeletal muscle and the regulation of nucleic acid metabolism to delay osteoarthritis progression is explored. In addition, how vitamin E may maintain the function of sex organs and the stability of mast cells, thus conferring a greater resistance to the underlying disease process is also discussed. Finally, the protective effect of vitamin E on the subchondral vascular system, which decreases the reactive remodeling in osteoarthritis, is reviewed. PMID:27347011

  16. Intermittent Flow In Yield Stress Fluids Slows Down Chaotic Mixing

    NASA Astrophysics Data System (ADS)

    Boujlel, Jalila; Wendell, Dawn; Gouillart, Emmanuelle; Pigeonneau, Franck; Jop, Pierre; Laboratoire Surface du Verre et Interfaces Team

    2013-11-01

    Many mixing situations involve fluids with non-Newtonian properties: mixing of building materials such as concrete or mortar is based on fluids that have shear-thinning rheological properties. Lack of correct mixing can waste time and money, or lead to products with defects. When fluids are stirred and mixed together at low Reynolds number, the fluid particles should undergo chaotic trajectories to be well mixed by the so-called chaotic advection resulting from the flow. Previous work to characterize chaotic mixing in many different geometries has primarily focused on Newtonian fluids. Early studies of non-Newtonian chaotic advection often utilize idealized mixing geometries such as cavity flows or journal bearing flows for numerical studies. Here, we present experimental results on the chaotic mixing of yield-stress (non-Newtonian) fluids using a rod-stirring protocol with a rotating vessel. We describe the various steps of the mixing and determine their dependence on the fluid rheology and the speeds of rotation of the rods and the vessel. We show how the mixing of yield-stress fluids by chaotic advection is reduced compared to the mixing of Newtonian fluids and explain our results, bringing to light the relevant mechanisms: the presence of fluid that only flows intermittently, a phenomenon enhanced by the yield stress, and the importance of the peripheral region. This result is confirmed via numerical simulations.

  17. Misplaced helix slows down ultrafast pressure-jump protein folding

    PubMed Central

    Prigozhin, Maxim B.; Liu, Yanxin; Wirth, Anna Jean; Kapoor, Shobhna; Winter, Roland; Schulten, Klaus; Gruebele, Martin

    2013-01-01

    Using a newly developed microsecond pressure-jump apparatus, we monitor the refolding kinetics of the helix-stabilized five-helix bundle protein λ*YA, the Y22W/Q33Y/G46,48A mutant of λ-repressor fragment 6–85, from 3 μs to 5 ms after a 1,200-bar P-drop. In addition to a microsecond phase, we observe a slower 1.4-ms phase during refolding to the native state. Unlike temperature denaturation, pressure denaturation produces a highly reversible helix-coil-rich state. This difference highlights the importance of the denatured initial condition in folding experiments and leads us to assign a compact nonnative helical trap as the reason for slower P-jump–induced refolding. To complement the experiments, we performed over 50 μs of all-atom molecular dynamics P-drop refolding simulations with four different force fields. Two of the force fields yield compact nonnative states with misplaced α-helix content within a few microseconds of the P-drop. Our overall conclusion from experiment and simulation is that the pressure-denatured state of λ*YA contains mainly residual helix and little β-sheet; following a fast P-drop, at least some λ*YA forms misplaced helical structure within microseconds. We hypothesize that nonnative helix at helix-turn interfaces traps the protein in compact nonnative conformations. These traps delay the folding of at least some of the population for 1.4 ms en route to the native state. Based on molecular dynamics, we predict specific mutations at the helix-turn interfaces that should speed up refolding from the pressure-denatured state, if this hypothesis is correct. PMID:23620522

  18. Sacrificial tamper slows down sample explosion in FLASH diffraction experiments.

    PubMed

    Hau-Riege, Stefan P; Boutet, Sébastien; Barty, Anton; Bajt, Sasa; Bogan, Michael J; Frank, Matthias; Andreasson, Jakob; Iwan, Bianca; Seibert, M Marvin; Hajdu, Janos; Sakdinawat, Anne; Schulz, Joachim; Treusch, Rolf; Chapman, Henry N

    2010-02-12

    Intense and ultrashort x-ray pulses from free-electron lasers open up the possibility for near-atomic resolution imaging without the need for crystallization. Such experiments require high photon fluences and pulses shorter than the time to destroy the sample. We describe results with a new femtosecond pump-probe diffraction technique employing coherent 0.1 keV x rays from the FLASH soft x-ray free-electron laser. We show that the lifetime of a nanostructured sample can be extended to several picoseconds by a tamper layer to dampen and quench the sample explosion, making <1 nm resolution imaging feasible.

  19. Slowing Down: Age-Related Neurobiological Predictors of Processing Speed

    PubMed Central

    Eckert, Mark A.

    2011-01-01

    Processing speed, or the rate at which tasks can be performed, is a robust predictor of age-related cognitive decline and an indicator of independence among older adults. This review examines evidence for neurobiological predictors of age-related changes in processing speed, which is guided in part by our source-based morphometry findings that unique patterns of frontal and cerebellar gray matter predict age-related variation in processing speed. These results, together with the extant literature on morphological predictors of age-related changes in processing speed, suggest that specific neural systems undergo declines and as a result slow processing speed. Future studies of processing-speed-dependent neural systems will be important for identifying the etiologies for processing speed change and the development of interventions that mitigate gradual age-related declines in cognitive functioning and enhance healthy cognitive aging. PMID:21441995

  20. Does Time Really Slow Down during a Frightening Event?

    PubMed Central

    Stetson, Chess; Fiesta, Matthew P.; Eagleman, David M.

    2007-01-01

    Observers commonly report that time seems to have moved in slow motion during a life-threatening event. It is unknown whether this is a function of increased time resolution during the event, or instead an illusion of remembering an emotionally salient event. Using a hand-held device to measure speed of visual perception, participants experienced free fall for 31 m before landing safely in a net. We found no evidence of increased temporal resolution, in apparent conflict with the fact that participants retrospectively estimated their own fall to last 36% longer than others' falls. The duration dilation during a frightening event, and the lack of concomitant increase in temporal resolution, indicate that subjective time is not a single entity that speeds or slows, but instead is composed of separable subcomponents. Our findings suggest that time-slowing is a function of recollection, not perception: a richer encoding of memory may cause a salient event to appear, retrospectively, as though it lasted longer. PMID:18074019

  1. Slowing Down Surface Plasmons on a Moiré Surface

    NASA Astrophysics Data System (ADS)

    Kocabas, Askin; Senlik, S. Seckin; Aydinli, Atilla

    2009-02-01

    We have demonstrated slow propagation of surface plasmons on metallic Moiré surfaces. The phase shift at the node of the Moiré surface localizes the propagating surface plasmons and adjacent nodes form weakly coupled plasmonic cavities. Group velocities around vg=0.44c at the center of the coupled cavity band and almost a zero group velocity at the band edges are observed. A tight binding model is used to understand the coupling behavior. Furthermore, the sinusoidally modified amplitude about the node suppresses the radiation losses and reveals a relatively high quality factor (Q=103).

  2. Hydrodynamic interactions slow down crystallization of soft colloids.

    PubMed

    Roehm, Dominic; Kesselheim, Stefan; Arnold, Axel

    2014-08-14

    Colloidal suspensions are often argued to be an ideal model for studying phase transitions such as crystallization, as they have the advantage of tunable interactions and experimentally tractable time and length scales. Because crystallization is assumed to be unaffected by details of particle transport other than the bulk diffusion coefficient, findings are frequently argued to be transferable to pure melts without solvent. In this article, we present molecular dynamics simulations of crystallization in a suspension of colloids with Yukawa interactions which challenge this assumption. In order to investigate the role of hydrodynamic interactions mediated by the solvent, we model the solvent both implicitly and explicitly, using Langevin dynamics and the fluctuating lattice Boltzmann method, respectively. Our simulations show a significant reduction of the crystal growth velocity due to hydrodynamic interactions even at moderate hydrodynamic coupling. This slowdown is accompanied by a reduction of the width of the layering region in front of the growing crystal. Thus the dynamics of a colloidal suspension differ strongly from that of a melt, making it less useful as a model for solvent-free melts than previously thought.

  3. Slow down of actin depolymerization by cross-linking molecules.

    PubMed

    Schmoller, Kurt M; Semmrich, Christine; Bausch, Andreas R

    2011-02-01

    The ability to control the assembly and disassembly dynamics of actin filaments is an essential property of the cellular cytoskeleton. While many different proteins are known which accelerate the polymerization of monomers into filaments or promote their disintegration, much less is known about mechanisms that guarantee the kinetic stability of the cytoskeletal filaments. Previous studies indicate that cross-linking molecules might fulfill these stabilizing tasks, in addition to their ability to regulate the organization of cytoskeletal structures in vivo. The effect of depolymerization factors on such structures, and the mechanism that finally leads to their disintegration, remains unknown. Here, we use multiple depolymerization methods to directly demonstrate that cross-linking and bundling proteins effectively suppress actin depolymerization in a concentration-dependent manner. Even the actin depolymerizing factor cofilin is not sufficient to facilitate a fast disintegration of highly cross-linked actin networks unless molecular motors are used simultaneously. The drastic modification of actin kinetics by cross-linking molecules can be expected to have wide-ranging implications for our understanding of the cytoskeleton, where cross-linking molecules are omnipresent and essential.

  4. [Tripeptides slow down aging process in renal cell culture].

    PubMed

    Khavinson, V Kh; Tarnovskaia, S I; Lin'kova, N S; Poliakova, V O; Durnova, A O; Nichik, T E; Kvetnoĭ, I M; D'iakonov, M M; Iakutseni, P P

    2014-01-01

    The mechanism of the geroprotective effect of the peptides AED and EDL was studied in an ageing renal cell culture. Peptides AED and EDL increased cell proliferation, decreased expression of the aging markers p16, p21, and p53, and increased expression of SIRT-6 in both young and aged renal cell cultures. Reduced SIRT-6 synthesis is one of the causes of cellular senescence. On the basis of the experimental data, models of the interaction of the peptides with various DNA sites were constructed. Both peptides form their most energetically favorable complexes with d(ATATATATAT)2 sequences in the minor groove of DNA. These findings suggest that the interaction of the peptides AED and EDL with DNA underlies the altered expression of genes encoding ageing markers in renal cells.

  5. Gastropods slow down succession and maintain diversity in cryptogam communities.

    PubMed

    Boch, Steffen; Prati, Daniel; Fischer, Markus

    2016-09-01

    Herbivore effects on diversity and succession were often studied in plants, but not in cryptogams. Besides direct herbivore effects on cryptogams, we expected indirect effects by changes in competitive interactions among cryptogams. Therefore, we conducted a long-term gastropod exclusion experiment testing for grazing effects on epiphytic cryptogam communities. We estimated the grazing damage, cover and diversity of cryptogams before gastropods were excluded and three and six years thereafter. Gastropod herbivory pronouncedly affected cryptogams, except for bryophytes, strongly depending on host tree species and duration of gastropod exclusion. On control trees, gastropod grazing regulated the growth of algae and non-lichenized fungi and thereby maintained a high lichen diversity and cover. On European beech, the release from gastropod grazing temporarily increased lichen vitality, cover, and species richness, but later caused rapid succession where algae and fungi overgrew lichens and thereby reduced their cover and diversity compared with the control. On Norway spruce, without gastropods lichen richness decreased and lichen cover increased compared with the control. Our findings highlight the importance of long-term exclusion experiments to disentangle short-term, direct effects from longer-term, indirect effects via changes in competitive relationships between taxa. We further demonstrated that gastropod feeding maintains the diversity of cryptogam communities.

  6. Monitoring accelerations with GPS in football: time to slow down?

    PubMed

    Buchheit, Martin; Al Haddad, Hani; Simpson, Ben M; Palazzi, Dino; Bourdon, Pitre C; Di Salvo, Valter; Mendez-Villanueva, Alberto

    2014-05-01

    The aims of the current study were to examine the magnitude of differences between GPS models in commonly reported running-based measures in football, examine between-unit variability, and assess the effect of software updates on these measures. Fifty identical-brand GPS units (15 SPI-proX and 35 SPIproX2, 15 Hz, GPSports, Canberra, Australia) were attached to a custom-made plastic sled towed by a player performing simulated match running activities. GPS data collected during training sessions over 4 wk from 4 professional football players (N = 53 files) were also analyzed before and after 2 manufacturer-supplied software updates. There were substantial differences between the different models (eg, standardized difference for the number of accelerations >4 m/s² = 2.1; 90% confidence limits [1.4, 2.7], with 100% chance of a true difference). Between-unit variations ranged from 1% (maximal speed) to 56% (number of decelerations >4 m/s²). Some GPS units measured 2-6 times more acceleration/deceleration occurrences than others. Software updates did not substantially affect the distance covered at different speeds or peak speed reached, but 1 of the updates led to large and small decreases in the occurrence of accelerations (-1.24; -1.32, -1.15) and decelerations (-0.45; -0.48, -0.41), respectively. Practitioners are advised to apply care when comparing data collected with different models or units or when updating their software. The metrics of accelerations and decelerations show the most variability in GPS monitoring and must be interpreted cautiously.

  7. Can Lionel Messi's brain slow down time passing?

    PubMed

    Jafari, Sajad; Smith, Leslie Samuel

    2016-01-01

    It seems that seeing others in slow motion is not restricted to movie heroes. When Lionel Messi plays football, you can hardly see him do anything that other players cannot. Why, then, is he so hard to stop? The answer may be that opponents do not have enough time to do what they want, because in Messi's neural system time passes more slowly. In differential equations that model a single neuron, such a speed-up can be generated by multiplying all equations by a common factor; alternatively, interactions between neurons and the structure of neural networks may play this role.

  8. Using Paramagnetism to Slow Down Nuclear Relaxation in Protein NMR.

    PubMed

    Orton, Henry W; Kuprov, Ilya; Loh, Choy-Theng; Otting, Gottfried

    2016-12-01

    Paramagnetic metal ions accelerate nuclear spin relaxation; this effect is widely used for distance measurement and called paramagnetic relaxation enhancement (PRE). Theoretical predictions established that, under special circumstances, it is also possible to achieve a reduction in nuclear relaxation rates (negative PRE). This situation would occur if the mechanism of nuclear relaxation in the diamagnetic state is counterbalanced by a paramagnetic relaxation mechanism caused by the metal ion. Here we report the first experimental evidence for such a cross-correlation effect. Using a uniformly ¹⁵N-labeled mutant of calbindin D9k loaded with either Tm³⁺ or Tb³⁺, reduced R1 and R2 relaxation rates of backbone ¹⁵N spins were observed compared with the diamagnetic reference (the same protein loaded with Y³⁺). The effect arises from the compensation of the chemical shift anisotropy tensor by the anisotropic dipolar shielding generated by the unpaired electron spin.

  9. Slow Down to Brake: Effects of Tapering Epinephrine on Potassium.

    PubMed

    Veerbhadran, Sivaprasad; Nayagam, Asher Ennis; Ramraj, Sandeep; Raghavan, Jaganathan

    2016-07-01

    Hyperkalemia is not an uncommon complication of cardiac surgical procedures. Intractable hyperkalemia is a difficult situation that can even lead to death. We report on a postoperative case in a patient in whom a sudden decrease of epinephrine led to intractable hyperkalemia and cardiac arrest. We wish to draw the reader's attention to the issue that sudden discontinuation of epinephrine can lead to dangerous hyperkalemia.

  10. Cholesterol homeostasis: a key to prevent or slow down neurodegeneration.

    PubMed

    Anchisi, Laura; Dessì, Sandra; Pani, Alessandra; Mandas, Antonella

    2012-01-01

    Neurodegeneration, a common feature of many brain disorders, has severe consequences for the mental and physical health of an individual. Typically, human neurodegenerative diseases are devastating illnesses that predominantly affect elderly people, progress slowly, and lead to disability and premature death; however, they may occur at all ages. Despite extensive research and investments, current therapeutic interventions against these disorders treat solely the symptoms. Since the underlying mechanisms of damage to neurons are similar despite heterogeneous etiologies and backgrounds, it would therefore be of interest to identify possible trigger points of neurodegeneration, enabling the development of drugs and/or prevention strategies that target many disorders simultaneously. Among the factors identified so far as causes of neurodegeneration, failures in cholesterol homeostasis are indubitably the best investigated. The aim of this review is to critically discuss some of the main results reported in recent years in this field, focusing mainly on mechanisms that, by correcting perturbations of cholesterol homeostasis in neuronal cells, may ameliorate clinically relevant features of different neurodegenerative disorders; current and potential therapeutic interventions are also discussed.

  11. Discriminant Kernel Assignment for Image Coding.

    PubMed

    Deng, Yue; Zhao, Yanyu; Ren, Zhiquan; Kong, Youyong; Bao, Feng; Dai, Qionghai

    2017-06-01

    This paper proposes discriminant kernel assignment (DKA) in the bag-of-features framework for image representation. DKA slightly modifies existing kernel assignment to learn width-variant Gaussian kernel functions to perform discriminant local feature assignment. When directly applying gradient-descent method to solve DKA, the optimization may contain multiple time-consuming reassignment implementations in iterations. Accordingly, we introduce a more practical way to locally linearize the DKA objective and the difficult task is cast as a sequence of easier ones. Since DKA only focuses on the feature assignment part, it seamlessly collaborates with other discriminative learning approaches, e.g., discriminant dictionary learning or multiple kernel learning, for even better performances. Experimental evaluations on multiple benchmark datasets verify that DKA outperforms other image assignment approaches and exhibits significant efficiency in feature coding.

  12. Kernel-Based Equiprobabilistic Topographic Map Formation.

    PubMed

    Van Hulle, M M

    1998-09-15

    We introduce a new unsupervised competitive learning rule, the kernel-based maximum entropy learning rule (kMER), which performs equiprobabilistic topographic map formation in regular, fixed-topology lattices, for use with nonparametric density estimation as well as nonparametric regression analysis. The receptive fields of the formal neurons are overlapping radially symmetric kernels, compatible with radial basis functions (RBFs); but unlike other learning schemes, the radii of these kernels do not have to be chosen in an ad hoc manner: the radii are adapted to the local input density, together with the weight vectors that define the kernel centers, so as to produce maps of which the neurons have an equal probability to be active (equiprobabilistic maps). Both an "online" and a "batch" version of the learning rule are introduced, which are applied to nonparametric density estimation and regression, respectively. The application envisaged is blind source separation (BSS) from nonlinear, noisy mixtures.

  13. Bergman kernel from the lowest Landau level

    NASA Astrophysics Data System (ADS)

    Klevtsov, S.

    2009-07-01

    We use path integral representation for the density matrix, projected on the lowest Landau level, to generalize the expansion of the Bergman kernel on Kähler manifold to the case of arbitrary magnetic field.

  14. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
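
    For orientation, the kernel-summation bookkeeping behind KEM can be written compactly. Assuming the standard double-kernel form (our notation, sketched from the description above rather than quoted from the paper), a molecule partitioned into n kernels with single-kernel energies E_a and joined double-kernel energies E_ab has approximate total energy

        \[
          E_{\mathrm{total}} \;\approx\; \sum_{a=1}^{n-1}\sum_{b=a+1}^{n} E_{ab} \;-\; (n-2)\sum_{a=1}^{n} E_{a} .
        \]

    Each kernel appears in n - 1 of the pair terms, so subtracting n - 2 copies of the single-kernel energies leaves every kernel counted once while retaining all pairwise interactions.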

  15. KITTEN Lightweight Kernel 0.1 Beta

    SciTech Connect

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne; VanDyke, John; Hudson, Trammell

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) applications based on MPI, PGAS, and OpenMP. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  16. TICK: Transparent Incremental Checkpointing at Kernel Level

    SciTech Connect

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in Linux version 2.6.5.

  17. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, involving an almost-analytic extension φ(x,y) of φ(x) = φ(x,x), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain an analogous asymptotic expansion for the Berezin transform and give applications to Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  18. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
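
    For reference, the exact cylindrical thin-wire kernel that such formulations must handle is commonly written (our notation; the paper's specific change of variable is not reproduced here) as

        \[
          K(z) \;=\; \frac{1}{2\pi}\int_{0}^{2\pi} \frac{e^{-jkR}}{4\pi R}\, d\phi' ,
          \qquad
          R \;=\; \sqrt{z^{2} + 4a^{2}\sin^{2}(\phi'/2)} ,
        \]

    where a is the wire radius, z the axial separation of source and observation points, and k the wavenumber. The integrand becomes singular as z → 0 and φ' → 0, which is the singularity a smoothing change of variable is designed to cancel.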

  19. RKF-PCA: robust kernel fuzzy PCA.

    PubMed

    Heo, Gyeongyong; Gader, Paul; Frigui, Hichem

    2009-01-01

    Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of the data while retaining most of the variation in the data. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA and the limitation to linear components originates from the fact that PCA uses an affine transform defined by eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends the kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in the RKF-PCA satisfies Mercer's condition, which means that the derivation of the K-PCA is also valid for the RKF-PCA. Formal analyses and experimental results suggest that the RKF-PCA is an efficient non-linear dimension reduction method and is more noise-robust than the original kernel PCA.
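
    For background, a minimal sketch of plain kernel PCA, the baseline that RF-PCA and RKF-PCA robustify (the fuzzy-membership machinery of the paper is not reproduced, and parameter names are ours):

        import numpy as np

        def kernel_pca(X, n_components=2, gamma=1.0):
            """Plain (non-robust) kernel PCA with a Gaussian kernel."""
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-gamma * d2)
            n = len(X)
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                      # double-centering in feature space
            vals, vecs = np.linalg.eigh(Kc)
            idx = np.argsort(vals)[::-1][:n_components]
            vals, vecs = vals[idx], vecs[:, idx]
            # Project training points onto the leading kernel components.
            return Kc @ (vecs / np.sqrt(np.maximum(vals, 1e-12)))

        X = np.random.default_rng(0).standard_normal((100, 3))
        print(kernel_pca(X).shape)              # (100, 2)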

  20. Einstein Critical-Slowing-Down is Siegel CyberWar Denial-of-Access Queuing/Pinning/ Jamming/Aikido Via Siegel DIGIT-Physics BEC ``Intersection''-BECOME-UNION Barabasi Network/GRAPH-Physics BEC: Strutt/Rayleigh-Siegel Percolation GLOBALITY-to-LOCALITY Phase-Transition Critical-Phenomenon

    NASA Astrophysics Data System (ADS)

    Buick, Otto; Falcon, Pat; Alexander, G.; Siegel, Edward Carl-Ludwig

    2013-03-01

    Einstein[Dover(03)] critical-slowing-down(CSD)[Pais, Subtle in The Lord; Life & Sci. of Albert Einstein(81)] is Siegel CyberWar denial-of-access(DOA) operations-research queuing theory/pinning/jamming/.../Read [Aikido, Aikibojitsu & Natural-Law(90)]/Aikido(!!!) phase-transition critical-phenomenon via Siegel DIGIT-Physics (Newcomb[Am.J.Math. 4,39(1881)]-{Planck[(1901)]-Einstein[(1905)])-Poincare[Calcul Probabilités(12)-p.313]-Weyl [Goett.Nachr.(14); Math.Ann.77,313 (16)]-{Bose[(24)-Einstein[(25)]-Fermi[(27)]-Dirac[(1927)]}-``Benford''[Proc.Am.Phil.Soc. 78,4,551 (38)]-Kac[Maths.Stat.-Reasoning(55)]-Raimi[Sci.Am. 221,109 (69)...]-Jech[preprint, PSU(95)]-Hill[Proc.AMS 123,3,887(95)]-Browne[NYT(8/98)]-Antonoff-Smith-Siegel[AMS Joint-Mtg.,S.-D.(02)] algebraic-inversion to yield ONLY BOSE-EINSTEIN QUANTUM-statistics (BEQS) with ZERO-digit Bose-Einstein CONDENSATION(BEC) ``INTERSECTION''-BECOME-UNION to Barabasi[PRL 876,5632(01); Rev.Mod.Phys.74,47(02)...] Network /Net/GRAPH(!!!)-physics BEC: Strutt/Rayleigh(1881)-Polya(21)-``Anderson''(58)-Siegel[J.Non-crystalline-Sol.40,453(80)

  1. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
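
    To make the regression view concrete, here is a minimal sketch (ours, not the authors' code) of kernel-based reconstruction on a toy graph, using the diffusion kernel exp(-βL) as one admissible graph kernel and the representer-theorem form of the estimator:

        import numpy as np
        from scipy.linalg import expm

        # Ring graph on n vertices; L = D - A is the combinatorial Laplacian.
        n = 20
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
        L = np.diag(A.sum(axis=1)) - A

        K = expm(-1.0 * L)                       # diffusion kernel, beta = 1

        # Noisy samples of a smooth signal on a random vertex subset S.
        rng = np.random.default_rng(0)
        f_true = np.cos(2 * np.pi * np.arange(n) / n)
        S = rng.choice(n, size=8, replace=False)
        y = f_true[S] + 0.05 * rng.standard_normal(len(S))

        # Kernel ridge estimate on all vertices:
        # f_hat = K[:, S] (K[S, S] + mu I)^{-1} y.
        mu = 1e-2
        alpha = np.linalg.solve(K[np.ix_(S, S)] + mu * np.eye(len(S)), y)
        f_hat = K[:, S] @ alpha
        print("RMSE:", np.sqrt(np.mean((f_hat - f_true) ** 2)))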

  2. Oecophylla longinoda (Hymenoptera: Formicidae) Lead to Increased Cashew Kernel Size and Kernel Quality.

    PubMed

    Anato, F M; Sinzogan, A A C; Offenberg, J; Adandonon, A; Wargui, R B; Deguenon, J M; Ayelo, P M; Vayssières, J-F; Kossou, D K

    2017-03-03

    Weaver ants, Oecophylla spp., are known to positively affect cashew, Anacardium occidentale L., raw nut yield, but their effects on the kernels have not been reported. We compared nut size and the proportion of marketable kernels between raw nuts collected from trees with and without ants. Raw nuts collected from trees with weaver ants were 2.9% larger than nuts from control trees (i.e., without weaver ants), leading to a 14% higher proportion of marketable kernels. On trees with ants, the kernel-to-raw-nut ratio of nuts damaged by formic acid was 4.8% lower compared with nondamaged nuts from the same trees. Weaver ants provided three benefits to cashew production: increasing yields, yielding larger nuts, and producing greater proportions of marketable kernel mass.

  3. A new Mercer sigmoid kernel for clinical data classification.

    PubMed

    Carrington, André M; Fieguth, Paul W; Chen, Helen H

    2014-01-01

    In classification with Support Vector Machines, only Mercer kernels, i.e. valid kernels, such as the Gaussian RBF kernel, are widely accepted and thus suitable for clinical data. Practitioners would also like to use the sigmoid kernel, a non-Mercer kernel, but its range of validity is difficult to determine, and even within range its validity is in dispute. Despite these shortcomings the sigmoid kernel is used by some, and two kernels in the literature attempt to emulate and improve upon it. We propose the first Mercer sigmoid kernel, that is therefore trustworthy for the classification of clinical data. We show the similarity between the Mercer sigmoid kernel and the sigmoid kernel and, in the process, identify a normalization technique that improves the classification accuracy of the latter. The Mercer sigmoid kernel achieves the best mean accuracy on three clinical data sets, detecting melanoma in skin lesions better than the most popular kernels; while with non-clinical data sets it has no significant difference in median accuracy as compared with the Gaussian RBF kernel. It consistently classifies some points correctly that the Gaussian RBF kernel does not and vice versa.
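
    The validity problem addressed here is easy to exhibit numerically: a Mercer kernel must produce a positive semi-definite Gram matrix on every data set, and the classical sigmoid kernel often fails this. A hedged sketch (parameters a and r are ours; the paper's Mercer sigmoid kernel is not reproduced):

        import numpy as np

        def sigmoid_kernel(X, Y, a=0.5, r=-1.0):
            """Classical (non-Mercer) sigmoid kernel tanh(a <x, y> + r)."""
            return np.tanh(a * (X @ Y.T) + r)

        rng = np.random.default_rng(1)
        X = rng.standard_normal((30, 5))
        G = sigmoid_kernel(X, X)
        eigvals = np.linalg.eigvalsh((G + G.T) / 2)
        print("min Gram eigenvalue:", eigvals.min())  # negative => not PSD here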

  4. Online Sequential Extreme Learning Machine With Kernels.

    PubMed

    Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio

    2015-09-01

    The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets.
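
    For readers unfamiliar with the KAF side, a hedged sketch of the kernel recursive least-squares building block that KOS-ELM extends (plain KRLS with a growing dictionary and no sparsification criterion; all names and defaults are ours):

        import numpy as np

        def gauss(x, y, gamma=1.0):
            return np.exp(-gamma * np.sum((x - y) ** 2))

        class SimpleKRLS:
            """Kernel RLS with a growing dictionary and ridge term lam."""

            def __init__(self, lam=1e-2, gamma=1.0):
                self.lam, self.gamma = lam, gamma
                self.X, self.Kinv, self.alpha = [], None, None

            def predict(self, x):
                if not self.X:
                    return 0.0
                k = np.array([gauss(xi, x, self.gamma) for xi in self.X])
                return float(k @ self.alpha)

            def update(self, x, y):
                kxx = gauss(x, x, self.gamma) + self.lam
                if not self.X:
                    self.Kinv = np.array([[1.0 / kxx]])
                    self.alpha = np.array([y / kxx])
                else:
                    k = np.array([gauss(xi, x, self.gamma) for xi in self.X])
                    z = self.Kinv @ k
                    r = kxx - k @ z               # Schur complement
                    e = y - k @ self.alpha        # a-priori prediction error
                    # Block-matrix update of (K + lam*I)^{-1}.
                    top = self.Kinv * r + np.outer(z, z)
                    self.Kinv = np.block([[top, -z[:, None]],
                                          [-z[None, :], np.ones((1, 1))]]) / r
                    self.alpha = np.append(self.alpha - z * (e / r), e / r)
                self.X.append(np.asarray(x, dtype=float))

        # Online learning of y = sin(x) from a stream of samples.
        rng = np.random.default_rng(0)
        model = SimpleKRLS(gamma=2.0)
        for _ in range(200):
            x = rng.uniform(-3, 3, size=1)
            model.update(x, np.sin(x[0]))
        print(model.predict(np.array([1.0])), np.sin(1.0))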

  5. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
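
    A minimal sketch of the central idea, replacing the single nearest analog with a kernel-weighted ensemble, on a toy chaotic system (our simplification; the delay-coordinate and vector-field-dependent kernels of the paper are not reproduced):

        import numpy as np

        def analog_forecast(history, x0, lead, eps=0.5, n_analogs=20):
            """Average the lead-step evolution of the nearest analogs,
            weighted by a Gaussian similarity kernel of width eps."""
            past = history[:-lead]                   # candidate analog states
            future = history[lead:]                  # their observed evolution
            d2 = np.sum((past - x0) ** 2, axis=1)    # squared distances to x0
            idx = np.argsort(d2)[:n_analogs]
            w = np.exp(-(d2[idx] - d2[idx].min()) / eps ** 2)  # kernel weights
            return (w[:, None] * future[idx]).sum(axis=0) / w.sum()

        def lorenz_traj(n, dt=0.01, sigma=10.0, rho=28.0, beta=8 / 3):
            """Lorenz-63 trajectory via forward Euler (demo accuracy only)."""
            x = np.array([1.0, 1.0, 1.0])
            out = np.empty((n, 3))
            for i in range(n):
                dx = np.array([sigma * (x[1] - x[0]),
                               x[0] * (rho - x[2]) - x[1],
                               x[0] * x[1] - beta * x[2]])
                x = x + dt * dx
                out[i] = x
            return out

        traj = lorenz_traj(20000)
        hist, test = traj[:15000], traj[16000]
        print(analog_forecast(hist, test, lead=50))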

  6. Kernel bandwidth optimization in spike rate estimation.

    PubMed

    Shimazaki, Hideaki; Shinomoto, Shigeru

    2010-08-01

    Kernel smoother and a time-histogram are classical tools for estimating an instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time-histogram, based on the principle of minimizing the mean integrated square error (MISE) between the estimated rate and unknown underlying rate. Here we apply the same optimization principle to the kernel density estimation in selecting the width or "bandwidth" of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with data. The variable kernel has the potential to accurately grasp non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. In order to avoid possible overfitting that may take place due to excessive freedom, we introduced a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. It is revealed that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for a given set of spike data, according to the optimization methods presented here.
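
    A hedged sketch of fixed-bandwidth selection in this spirit (our reading of a MISE-style cost for a Gaussian-kernel rate estimate; the authors' exact formulation and their adaptive-bandwidth extension are not reproduced):

        import numpy as np

        def gauss(x, w):
            return np.exp(-x ** 2 / (2 * w ** 2)) / (np.sqrt(2 * np.pi) * w)

        def kernel_rate_cost(spikes, w):
            """Integral of the squared rate estimate minus twice the
            leave-one-out cross term (diagonal pairs removed)."""
            d = spikes[:, None] - spikes[None, :]
            integral_sq = gauss(d, np.sqrt(2) * w).sum()
            cross = gauss(d, w).sum() - gauss(0.0, w) * len(spikes)
            return integral_sq - 2 * cross

        # Grid-search the bandwidth on synthetic spikes from a bimodal rate.
        rng = np.random.default_rng(0)
        spikes = np.sort(np.concatenate([rng.normal(1.0, 0.1, 50),
                                         rng.normal(2.0, 0.3, 50)]))
        widths = np.logspace(-2, 0, 40)
        costs = [kernel_rate_cost(spikes, w) for w in widths]
        w_opt = widths[int(np.argmin(costs))]
        print("selected bandwidth:", w_opt)

        # Rate estimate with the selected bandwidth.
        t = np.linspace(0, 3, 300)
        rate = gauss(t[:, None] - spikes[None, :], w_opt).sum(axis=1)
        print("peak rate:", rate.max())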

  7. The connection between regularization operators and support vector kernels.

    PubMed

    Smola, Alex J.; Schölkopf, Bernhard; Müller, Klaus Robert

    1998-06-01

    In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels from the viewpoint of regularization theory and of the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodical domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.

  8. Fusion and kernel type selection in adaptive image retrieval

    NASA Astrophysics Data System (ADS)

    Doloc-Mihu, Anca; Raghavan, Vijay V.

    2007-04-01

    In this work we investigate the relationships between features representing images, fusion schemes for these features and kernel types used in an Web-based Adaptive Image Retrieval System. Using the Kernel Rocchio learning method, several kernels having polynomial and Gaussian forms are applied to general images represented by annotations and by color histograms in RGB and HSV color spaces. We propose different fusion schemes, which incorporate kernel selector component(s). We perform experiments to study the relationships between a concatenated vector and several kernel types. Experimental results show that an appropriate kernel could significantly improve the performance of the retrieval system.

  9. Robust C-Loss Kernel Classifiers.

    PubMed

    Xu, Guibiao; Hu, Bao-Gang; Principe, Jose C

    2016-12-29

    The correntropy-induced loss (C-loss) function has the nice property of being robust to outliers. In this paper, we study the C-loss kernel classifier with a Tikhonov regularization term, which is used to avoid overfitting. After applying the half-quadratic optimization algorithm, which converges much faster than gradient-based optimization, we find that the resulting C-loss kernel classifier is equivalent to an iteratively weighted least-squares support vector machine (LS-SVM). This relationship helps explain the robustness of the iteratively weighted LS-SVM from the correntropy and density-estimation perspectives. For large-scale data sets with low-rank Gram matrices, we suggest using incomplete Cholesky decomposition to speed up the training process. Moreover, we use the representer theorem to improve the sparseness of the resulting C-loss kernel classifier. Experimental results confirm that our methods are more robust to outliers than existing common classifiers.
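
    A hedged sketch of the half-quadratic idea: each iteration solves a weighted least-squares problem whose correntropy weights down-weight outliers. Weighted kernel ridge regression stands in for the full LS-SVM subproblem here; names and defaults are ours.

        import numpy as np

        def rbf_gram(X, gamma=0.5):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def closs_kernel_classifier(X, y, sigma=1.0, lam=1e-2, iters=10):
            """Half-quadratic training: re-solve a weighted kernel ridge
            problem with weights w_i = exp(-e_i^2 / (2 sigma^2))."""
            K = rbf_gram(X)
            n = len(y)
            alpha = np.zeros(n)
            for _ in range(iters):
                e = y - K @ alpha                       # residuals
                W = np.diag(np.exp(-e ** 2 / (2 * sigma ** 2)))
                alpha = np.linalg.solve(W @ K + lam * np.eye(n), W @ y)
            return alpha

        rng = np.random.default_rng(0)
        X = rng.standard_normal((80, 2))
        y = np.sign(X[:, 0] + X[:, 1])
        y[:5] *= -1                                     # inject label-noise outliers
        alpha = closs_kernel_classifier(X, y)
        print("train accuracy:", (np.sign(rbf_gram(X) @ alpha) == y).mean())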

  10. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy, which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
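
    The quadratic entropy mentioned here has a closed-form plug-in estimator under a Gaussian kernel density estimate, because the convolution of two Gaussians of width sigma is a Gaussian of width sigma*sqrt(2), so the integral of the squared density reduces to a double sum over sample pairs. A sketch (bandwidth choice is ours):

        import numpy as np

        def quadratic_renyi_entropy(x, sigma=0.3):
            """Plug-in estimate of H2 = -log integral p(x)^2 dx for a
            Gaussian KDE; the double sum is the 'information potential'."""
            d = x[:, None] - x[None, :]
            s2 = 2 * sigma ** 2                  # variance of the convolved kernel
            ip = np.exp(-d ** 2 / (2 * s2)).sum() / (np.sqrt(2 * np.pi * s2) * x.size ** 2)
            return -np.log(ip)

        rng = np.random.default_rng(0)
        print(quadratic_renyi_entropy(rng.standard_normal(500)))
        print(quadratic_renyi_entropy(rng.uniform(-1, 1, 500)))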

  11. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
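
    For concreteness, a naive quadratic-time sampler for the model in question (ours; the paper's contribution is precisely the sub-quadratic algorithm, which is not reproduced here):

        import numpy as np

        def sample_kernel_graph(n, kappa, rng):
            """Inhomogeneous random (kernel) graph: each vertex gets a type
            x_i, and edge {i, j} appears independently with probability
            min(kappa(x_i, x_j) / n, 1)."""
            x = rng.uniform(1e-3, 1.0, n)        # bounded away from 0 for kappa below
            edges = []
            for i in range(n):
                for j in range(i + 1, n):
                    if rng.random() < min(kappa(x[i], x[j]) / n, 1.0):
                        edges.append((i, j))
            return edges

        # Example kernel giving a power-law-like degree spread.
        kappa = lambda u, v: 0.2 / np.sqrt(u * v)
        rng = np.random.default_rng(0)
        print(len(sample_kernel_graph(1000, kappa, rng)), "edges")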

  12. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  13. Kernel bandwidth estimation for nonparametric modeling.

    PubMed

    Bors, Adrian G; Nasios, Nikolaos

    2009-12-01

    Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.

  14. Phenolic constituents of shea (Vitellaria paradoxa) kernels.

    PubMed

    Maranz, Steven; Wiesman, Zeev; Garti, Nissim

    2003-10-08

    Analysis of the phenolic constituents of shea (Vitellaria paradoxa) kernels by LC-MS revealed eight catechin compounds (gallic acid, catechin, epicatechin, epicatechin gallate, gallocatechin, epigallocatechin, gallocatechin gallate, and epigallocatechin gallate) as well as quercetin and trans-cinnamic acid. The mean kernel content of the eight catechin compounds was 4000 ppm (0.4% of kernel dry weight), with a 2100-9500 ppm range. Comparison of the profiles of the six major catechins from 40 Vitellaria provenances from 10 African countries showed that the relative proportions of these compounds varied from region to region. Gallic acid was the major phenolic compound, comprising an average of 27% of the measured total phenols and exceeding 70% in some populations. Colorimetric analysis (101 samples) of total polyphenols extracted from shea butter into hexane gave an average of 97 ppm, with the values for different provenances varying between 62 and 135 ppm of total polyphenols.

  15. Fractal Weyl law for Linux Kernel architecture

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65 that corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with the fractal dimension d < 2.
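
    For orientation, the fractal Weyl law invoked here relates the number N_λ of long-lived eigenmodes to the matrix size N through the fractal dimension d (our paraphrase of the scaling used in such studies):

        \[
          N_{\lambda} \;\propto\; N^{\nu}, \qquad \nu \approx \frac{d}{2} ,
        \]

    which is consistent with the quoted values, since ν ≈ 0.65 gives d ≈ 1.3.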

  16. Tile-Compressed FITS Kernel for IRAF

    NASA Astrophysics Data System (ADS)

    Seaman, R.

    2011-07-01

    The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.
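
    Outside IRAF, the same tile-compression convention can be exercised from Python with astropy (an illustrative sketch, not part of the IRAF kernel; the file name is ours):

        import numpy as np
        from astropy.io import fits

        data = np.random.default_rng(0).random((512, 512)).astype("float32")
        # RICE_1 is the codec fpack uses by default.
        fits.CompImageHDU(data, compression_type="RICE_1").writeto(
            "example_tiled.fits", overwrite=True)

        with fits.open("example_tiled.fits") as hdul:
            print(hdul[1].data.shape)   # decompressed transparently on access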

  17. A kernel-based approach for biomedical named entity recognition.

    PubMed

    Patra, Rakesh; Saha, Sujan Kumar

    2013-01-01

    Support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks, including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, string kernels, graph kernels, tree kernels, and so on. So far very few efforts have been devoted to the development of an NER-task-specific kernel. In the literature we found that the tree kernel has been used in the NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we propose a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we applied the kernel function to the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.
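
    The proposed kernel itself is not reproduced here, but the mechanics of plugging a task-specific kernel into an SVM are easy to sketch. As a toy stand-in we use a histogram-intersection kernel over character n-gram counts (data, labels, and names are ours):

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import SVC

        tokens = ["interleukin", "IL-2", "kinase", "the",
                  "and", "protein", "of", "receptor"]
        labels = [1, 1, 1, 0, 0, 1, 0, 1]       # 1 = entity-like token (toy)

        X = CountVectorizer(analyzer="char",
                            ngram_range=(2, 3)).fit_transform(tokens).toarray()

        def overlap_kernel(A, B):
            # Shared character n-gram counts between token pairs
            # (histogram intersection, a valid Mercer kernel).
            return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2).astype(float)

        clf = SVC(kernel=overlap_kernel).fit(X, labels)
        print(clf.predict(X))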

  18. A dynamic kernel modifier for linux

    SciTech Connect

    Minnich, R. G.

    2002-09-03

    Dynamic Kernel Modifier, or DKM, is a kernel module for Linux that allows user-mode programs to modify the execution of functions in the kernel without recompiling or modifying the kernel source in any way. Functions may be traced, either function entry only or function entry and exit; nullified; or replaced with some other function. For the tracing case, function execution results in the activation of a watchpoint. When the watchpoint is activated, the address of the function is logged in a FIFO buffer that is readable by external applications. The watchpoints are time-stamped with the resolution of the processor high-resolution timers, which on most modern processors are accurate to a single processor tick. DKM is very similar to earlier systems such as the SunOS trace device or Linux LTT. Unlike these two systems, and other similar systems, DKM requires no kernel modifications. DKM allows users to do initial probing of the kernel to look for performance problems, or even to resolve potential problems by turning functions off or replacing them. DKM watchpoints are not without cost: it takes about 200 nanoseconds to make a log entry on an 800 MHz Pentium III. The overhead numbers are actually competitive with other hardware-based trace systems, although DKM has less accuracy than an in-circuit emulator such as the American Arium. Once the user has zeroed in on a problem, other mechanisms with a higher degree of accuracy can be used.

  19. Experimental study of turbulent flame kernel propagation

    SciTech Connect

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field, and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the variation of spark energy from shot to shot. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 μs and 2 ms. The data show that the flame kernel structure starts with a spherical shape and changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric flame and at the slower jet velocity. The growth rate of the average flame kernel radius divides into two linear regimes; the first, during the first 100 μs, is almost three times faster than that at the later stage between 100 and 2000 μs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area, and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions. (author)

  20. Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates

    SciTech Connect

    Hanft, J.M.; Jones, R.J.

    1986-06-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35 °C were transferred to (¹⁴C)sucrose media 10 days after pollination. Kernels cultured at 35 °C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on (¹⁴C)sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35 °C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35 °C compared to kernels cultured at 30 °C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35 °C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30 °C (89%). Kernels cultured at 35 °C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.

  1. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory; for unit material perturbations these are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned); some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be carried out before inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver
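
    As a schematic illustration (generic notation, not necessarily the authors' exact convention), a Born-type waveform sensitivity kernel relates a model perturbation δm to the wavefield perturbation recorded at receiver x_r:

        \delta u(\mathbf{x}_r,\omega) \;\approx\; \int_V K(\mathbf{x};\omega)\,\delta m(\mathbf{x})\,\mathrm{d}V,
        \qquad
        K(\mathbf{x};\omega) \;\sim\; \boldsymbol{\varepsilon}_r(\mathbf{x};\omega) : \frac{\partial \mathbf{C}}{\partial m} : \boldsymbol{\varepsilon}_s(\mathbf{x};\omega),

    where ε_s is the strain of the forward field from the source and ε_r the strain of the Green function excited by a single force at the receiver, matching the ingredients listed in the abstract.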

  2. Volatile compound formation during argan kernel roasting.

    PubMed

    El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe

    2013-01-01

    Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.

  3. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts Gauss elimination to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
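
    A minimal sketch of an empirical kernel mapping followed by a reduction step is given below; the pivoted-QR column selection stands in for the paper's Gauss-elimination step and the RBF kernel is an arbitrary choice, so this illustrates the idea rather than the authors' algorithm:

        # Empirical kernel mapping (EKM) plus a reduction step. Pivoted QR
        # stands in for the paper's Gauss elimination (an assumption).
        import numpy as np
        from scipy.linalg import qr

        def rbf(X, Y, gamma=0.5):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        X = np.random.randn(200, 5)
        K = rbf(X, X)

        # EKM: map x -> Lambda^{-1/2} U^T k(X, x), applied here to X itself.
        w, U = np.linalg.eigh(K)
        keep = w > 1e-10
        Phi = (U[:, keep] * w[keep] ** -0.5).T @ K   # (r, n) empirical features

        # Reduction: pick a numerically independent subset of feature vectors.
        _, R, piv = qr(Phi, pivoting=True)
        rank = (np.abs(np.diag(R)) > 1e-8 * abs(R[0, 0])).sum()
        basis = Phi[:, piv[:rank]]                   # spans ~ the same subspace
        print(Phi.shape, basis.shape)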

  4. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing
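
    A toy version of a regularized delta-function kernel solution is sketched below: each basis vector is the reference image shifted by one kernel offset, and λ penalizes the kernel's Laplacian. The specific regularizer and the dense linear algebra are simplifying assumptions; a real implementation would restrict the fit to substamps and may use a different penalty:

        # Regularized delta-function PSF-matching kernel (toy version).
        # Basis: shifted copies of the reference image; penalty: Laplacian.
        import numpy as np

        def solve_kernel(ref, target, half=3, lam=0.3):
            n = 2 * half + 1
            cols = []
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
                    cols.append(shifted.ravel())
            A = np.stack(cols, axis=1)                 # (npix, n*n) design matrix
            # finite-difference Laplacian acting on kernel pixels
            L = -4.0 * np.eye(n * n)
            for i in range(n * n):
                y, x = divmod(i, n)
                for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= yy < n and 0 <= xx < n:
                        L[i, yy * n + xx] = 1.0
            lhs = A.T @ A + lam * (L.T @ L)            # normal equations + Tikhonov
            k = np.linalg.solve(lhs, A.T @ target.ravel())
            return k.reshape(n, n)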

  5. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function, and results are shown for simple wire structures.

  6. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    PubMed

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months, based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.

  7. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  8. Kernel maximum autocorrelation factor and minimum noise fraction transformations.

    PubMed

    Nielsen, Allan Aasbjerg

    2011-03-01

    This paper introduces kernel versions of maximum autocorrelation factor (MAF) analysis and minimum noise fraction (MNF) analysis. The kernel versions are based upon a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version, the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF, and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. Three examples show the very successful application of kernel MAF/MNF analysis to: 1) change detection in DLR 3K camera data recorded 0.7 s apart over a busy motorway, 2) change detection in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt to even abruptly varying multi and hypervariate backgrounds and focus on extreme observations.
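
    The dual (Q-mode) formulation that the paper builds on can be made concrete with a few lines of kernel PCA, where every quantity is computed from the centred Gram matrix and the nonlinear mapping is never formed explicitly (the Gaussian kernel is one arbitrary choice):

        # Minimal kernel-PCA sketch: all computations go through the
        # centred Gram matrix, never the explicit nonlinear mapping.
        import numpy as np

        def kernel_pca(X, gamma=0.1, n_comp=2):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-gamma * d2)                # Gaussian kernel function
            n = len(X)
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                         # double-centre the Gram matrix
            w, V = np.linalg.eigh(Kc)
            idx = np.argsort(w)[::-1][:n_comp]     # leading eigenpairs
            alphas = V[:, idx] / np.sqrt(w[idx])   # normalise expansion coefficients
            return Kc @ alphas                     # projections of the training data

        X = np.random.randn(100, 3)
        print(kernel_pca(X).shape)                 # (100, 2)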

  9. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    SciTech Connect

    Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber

    2010-10-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  10. End-use quality of soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  11. Reduction of complex signaling networks to a representative kernel.

    PubMed

    Kim, Jeong-Rae; Kim, Junil; Kwon, Yung-Keun; Lee, Hwang-Yeol; Heslop-Harrison, Pat; Cho, Kwang-Hyun

    2011-05-31

    The network of biomolecular interactions that occurs within cells is large and complex. When such a network is analyzed, it can be helpful to reduce the complexity of the network to a "kernel" that maintains the essential regulatory functions for the output under consideration. We developed an algorithm to identify such a kernel and showed that the resultant kernel preserves the network dynamics. Using an integrated network of all of the human signaling pathways retrieved from the KEGG (Kyoto Encyclopedia of Genes and Genomes) database, we identified this network's kernel and compared the properties of the kernel to those of the original network. We found that about 10% of the genes encoding nodes outside of the kernel were essential, whereas ~32% of the genes encoding nodes within the kernel were essential. In addition, we found that 95% of the kernel nodes corresponded to Mendelian disease genes and that 93% of synthetic lethal pairs associated with the network were contained in the kernel. Genes corresponding to nodes in the kernel had low evolutionary rates, were ubiquitously expressed in various tissues, and were well conserved between species. Furthermore, kernel genes included many drug targets, suggesting that other kernel nodes may be potential drug targets. Owing to the simplification of the entire network, the efficient modeling of a large-scale signaling network and an understanding of the core structure within a complex framework become possible.
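
    The published algorithm is not reproduced in this abstract, so the sketch below illustrates the flavour of such a reduction with a deliberately simple stand-in rule (not the authors' method): repeatedly collapse single-input, single-output mediator nodes while protecting the inputs and the output of interest:

        # Hypothetical network-reduction rule: collapse chain mediators.
        # A stand-in for illustration, not the published kernel algorithm.
        import networkx as nx

        def reduce_to_kernel(G, protected=()):
            G = G.copy()
            changed = True
            while changed:
                changed = False
                for v in list(G.nodes):
                    if v in protected:
                        continue
                    ins = list(G.predecessors(v))
                    outs = list(G.successors(v))
                    if len(ins) == 1 and len(outs) == 1 and ins[0] != outs[0]:
                        G.add_edge(ins[0], outs[0])  # preserve the mediated path
                        G.remove_node(v)
                        changed = True
            return G

        G = nx.DiGraph([("R", "a"), ("a", "b"), ("b", "OUT")])
        print(reduce_to_kernel(G, protected={"R", "OUT"}).edges)  # [('R', 'OUT')]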

  12. NIRS method for precise identification of Fusarium damaged wheat kernels

    USDA-ARS?s Scientific Manuscript database

    Development of scab resistant wheat varieties may be enhanced by non-destructive evaluation of kernels for Fusarium damaged kernels (FDKs) and deoxynivalenol (DON) levels. Fusarium infection generally affects kernel appearance, but insect damage and other fungi can cause similar symptoms. Also, some...

  13. Thermomechanical property of rice kernels studied by DMA

    USDA-ARS?s Scientific Manuscript database

    The thermomechanical property of the rice kernels was investigated using a dynamic mechanical analyzer (DMA). The length change of rice kernel with a loaded constant force along the major axis direction was detected during temperature scanning. The thermomechanical transition occurred in rice kernel...

  14. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  15. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  16. Multiple spectral kernel learning and a gaussian complexity computation.

    PubMed

    Reyhani, Nima

    2013-07-01

    Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combinations of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix has low effective rank, but this low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning, which efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices built from a few eigenvectors of all given kernel matrices, called a spectral kernel set. We provide a new bound for the Gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds show otherwise.
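
    A sketch of constructing such a spectral kernel set: take a few leading eigenvectors of each base Gram matrix and turn each into a rank-one Gram matrix; an MKL solver would then learn a combination over this set (the uniform weights below are placeholders):

        # Build a spectral kernel set from leading eigenvectors of each
        # base Gram matrix; the uniform weights stand in for an MKL solver.
        import numpy as np

        def spectral_kernel_set(kernels, n_eig=3):
            S = []
            for K in kernels:
                w, V = np.linalg.eigh(K)
                for i in np.argsort(w)[::-1][:n_eig]:
                    v = V[:, i:i + 1]
                    S.append(v @ v.T)              # rank-one Gram matrix
            return S

        rng = np.random.default_rng(0)
        X = rng.standard_normal((50, 4))
        K1 = X @ X.T                               # linear kernel
        K2 = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
        S = spectral_kernel_set([K1, K2])
        mu = np.full(len(S), 1.0 / len(S))         # an MKL step would learn these
        K_combined = sum(m_ * G_ for G_, m_ in zip(S, mu))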

  17. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  18. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a..., packaging, transporting, or holding food, subject to the provisions of this section. (a) Tamarind seed...

  19. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  20. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  1. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  2. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains of up to two orders of magnitude are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
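
    The authors' reference implementation is the pypher package linked above; the toy version below shows the core Fourier-space step under the simplifying assumptions of same-size, centred PSF arrays and a scalar regularisation parameter:

        # Wiener-regularised PSF-matching kernel in Fourier space:
        # deconvolve the source PSF, reconvolve with the target PSF.
        import numpy as np

        def homogenization_kernel(psf_source, psf_target, reg=1e-4):
            A = np.fft.fft2(np.fft.ifftshift(psf_source))
            B = np.fft.fft2(np.fft.ifftshift(psf_target))
            wiener = np.conj(A) / (np.abs(A) ** 2 + reg)  # regularised inverse
            k = np.real(np.fft.fftshift(np.fft.ifft2(wiener * B)))
            return k / k.sum()   # convolving psf_source with k ~ psf_target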

  3. Arbitrary-resolution global sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Fournier, A.; Dahlen, F.

    2007-12-01

    Extracting observables out of any part of a seismogram (e.g. including diffracted phases such as Pdiff) necessitates the knowledge of 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. While known for a while, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploring symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels, and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivities for a given spherically symmetric background model, thereby allowing tomographic inversions with arbitrary frequencies, observables, and phases.

  4. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... washing: Provided, That the presence of web or frass shall not be considered serious damage for the...

  5. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... washing: Provided, That the presence of web or frass shall not be considered serious damage for the...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... washing: Provided, That the presence of web or frass shall not be considered serious damage for the...

  7. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
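
    A minimal kernel TD(0) sketch in the spirit of KTD is given below: the value function is a growing kernel expansion over visited states, updated by the TD error. The Gaussian kernel, step size, and unbounded dictionary are illustrative simplifications, not the paper's exact algorithm:

        # Kernel TD(0) sketch: value function as a kernel expansion over
        # visited states; hyperparameters are illustrative only.
        import numpy as np

        class KernelTD:
            def __init__(self, gamma=0.9, eta=0.1, bw=1.0):
                self.g, self.eta, self.bw = gamma, eta, bw
                self.centers, self.alphas = [], []

            def _k(self, x, y):
                return np.exp(-np.sum((x - y) ** 2) / (2 * self.bw ** 2))

            def value(self, x):
                return sum(a * self._k(c, x)
                           for c, a in zip(self.centers, self.alphas))

            def update(self, x, r, x_next):
                delta = r + self.g * self.value(x_next) - self.value(x)  # TD error
                self.centers.append(np.asarray(x, float))
                self.alphas.append(self.eta * delta)     # grow the expansion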

  9. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  10. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  11. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  12. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  13. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  14. Symbol recognition with kernel density matching.

    PubMed

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.

  15. Spark Ignited Turbulent Flame Kernel Growth

    SciTech Connect

    Santavicca, D.A.

    1995-06-01

    An experimental study of the effects of spark power and of incomplete fuel-air mixing on spark-ignited flame kernel growth was conducted in turbulent propane-air mixtures at 1 atm and 300 K. The results showed that increased spark power resulted in an increased growth rate, where the effect of short-duration breakdown sparks was found to persist for times of the order of milliseconds. The effectiveness of increased spark power was found to be less at high-turbulence and high-dilution conditions. Increased spark power had a greater effect on the 0-5 mm burn time than on the 5-13 mm burn time, in part because of the effect of breakdown energy on the initial size of the flame kernel. Finally, when spark power was increased by shortening the spark duration while keeping the effective energy the same, there was a significant increase in the misfire rate; however, when the spark power was further increased by increasing the breakdown energy, the misfire rate dropped to zero. The results also showed that fluctuations in local mixture strength due to incomplete fuel-air mixing cause the flame kernel surface to become wrinkled and distorted, and that the amount of wrinkling increases as the degree of incomplete fuel-air mixing increases. Incomplete fuel-air mixing was also found to result in a significant increase in cyclic variations in flame kernel growth. The average flame kernel growth rates for the premixed and the incompletely mixed cases were found to be within the experimental uncertainty, except for the 33%-RMS-fluctuation case, where the growth rate was significantly lower. The premixed and 6%-RMS-fluctuation cases had a 0% misfire rate. The misfire rates were 1% and 2% for the 13%- and 24%-RMS-fluctuation cases, respectively; however, the rate drastically increased to 23% in the 33%-RMS-fluctuation case.

  16. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error-diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. WSNR was used as an objective measure of quality. The multidimensional optimization problem was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% over the best of the commonly used kernels, that introduced by Floyd and Steinberg. Other kernels obtained allow a significant reduction in the computational complexity of the halftoning process without reducing its quality.
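
    For reference, the Floyd-Steinberg kernel that the study benchmarks against diffuses the quantization error of each pixel to four neighbours with weights 7/16, 3/16, 5/16 and 1/16:

        # Reference Floyd-Steinberg error diffusion (the baseline kernel).
        import numpy as np

        def floyd_steinberg(img):
            """img: 2-D float array in [0, 1]; returns a binary halftone."""
            f = img.astype(float).copy()
            h, w = f.shape
            for y in range(h):
                for x in range(w):
                    old = f[y, x]
                    new = 1.0 if old >= 0.5 else 0.0
                    f[y, x] = new
                    err = old - new          # diffuse the quantization error
                    if x + 1 < w:
                        f[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            f[y + 1, x - 1] += err * 3 / 16
                        f[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            f[y + 1, x + 1] += err * 1 / 16
            return f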

  17. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  18. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations on kernel PCA and kernel clustering on the USPS data set show much improved performance.

  19. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.

  20. Learning bounds for kernel regression using effective data dimensionality.

    PubMed

    Zhang, Tong

    2005-09-01

    Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization ability. This observation is often wrongly interpreted and has been used to argue that kernel learning can magically avoid the "curse-of-dimensionality" phenomenon encountered in statistical estimation problems. This letter shows that although a kernel representation can embed data into an infinite-dimensional feature space, the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergence rates are optimal under various circumstances.
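
    The idea can be made concrete with the commonly used definition of the effective dimension, d(λ) = Σᵢ μᵢ/(μᵢ + λ), over the normalised Gram eigenvalues μᵢ; the paper's scale-sensitive variant may differ in detail, so this is only an illustration of why the quantity stays small when the spectrum decays quickly:

        # Effective dimension of a kernel from its Gram spectrum: small
        # whenever the eigenvalues decay fast, even in infinite dimensions.
        import numpy as np

        def effective_dimension(K, lam):
            mu = np.linalg.eigvalsh(K) / len(K)    # normalised Gram eigenvalues
            return float(np.sum(mu / (mu + lam)))

        X = np.random.randn(300, 2)
        K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))  # Gaussian Gram matrix
        print(effective_dimension(K, 1e-3))        # far below n = 300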

  1. Efficient χ² Kernel Linearization via Random Feature Maps.

    PubMed

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping can pose computational challenges in high-dimensional settings, as it expands the original features to a higher dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability with respect to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of χ² kernel SVMs at almost no cost in testing accuracy.
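
    A pipeline in the spirit of this abstract can be assembled from standard scikit-learn components: a sampled additive-χ² feature map, a sparse random projection to shrink the expanded features, and a linear SVM. The paper's own map and projection are likely to differ in detail:

        # Explicit chi2 feature map + sparse random projection + linear SVM,
        # a sketch in the spirit of the paper (not the authors' code).
        import numpy as np
        from sklearn.kernel_approximation import AdditiveChi2Sampler
        from sklearn.random_projection import SparseRandomProjection
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline

        X = np.abs(np.random.randn(500, 300))  # chi2 map needs nonnegative inputs
        y = (X[:, 0] > X[:, 1]).astype(int)

        clf = make_pipeline(
            AdditiveChi2Sampler(sample_steps=2),       # explicit chi2 feature map
            SparseRandomProjection(n_components=256),  # shrink expanded features
            LinearSVC(),                               # linear SVM on mapped data
        )
        clf.fit(X, y)
        print(clf.score(X, y))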

  2. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    The data from real world usually have nonlinear geometric structure, which are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for the learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in the machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most of the geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension for out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, in which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. Especially, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  3. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    PubMed

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as the standard support vector machine and support vector regression trainings take O(N³) time and O(N²) space in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naïve method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked to kernel density estimation (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m³), where m is the number of data points sampled from the training set. Experiments on different benchmark data sets demonstrate that the proposed method has performance comparable with the state-of-the-art method and is effective for a wide range of kernel methods in achieving fast learning on large data sets.

  4. Inverse of the string theory KLT kernel

    NASA Astrophysics Data System (ADS)

    Mizera, Sebastian

    2017-06-01

    The field theory Kawai-Lewellen-Tye (KLT) kernel, which relates scattering amplitudes of gravitons and gluons, turns out to be the inverse of a matrix whose components are bi-adjoint scalar partial amplitudes. In this note we propose an analogous construction for the string theory KLT kernel. We present simple diagrammatic rules for the computation of the α'-corrected bi-adjoint scalar amplitudes that are exact in α'. We find compact expressions in terms of graphs, where the standard Feynman propagators 1/p² are replaced by either 1/sin(πα'p²/2) or 1/tan(πα'p²/2), as determined by a recursive procedure. We demonstrate how the same object can be used to conveniently expand open string partial amplitudes in a BCJ basis.
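
    As a quick consistency check (not part of the abstract), expanding the trigonometric propagators for small α' recovers the standard Feynman propagator up to an overall α'-dependent normalisation:

        \frac{1}{\sin\!\left(\pi\alpha' p^{2}/2\right)}
        \;\xrightarrow[\;\alpha'\to 0\;]{}\;
        \frac{2}{\pi\alpha'}\,\frac{1}{p^{2}},
        \qquad
        \frac{1}{\tan\!\left(\pi\alpha' p^{2}/2\right)}
        \;\xrightarrow[\;\alpha'\to 0\;]{}\;
        \frac{2}{\pi\alpha'}\,\frac{1}{p^{2}},

    since both sin(x) and tan(x) behave as x for small argument.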

  5. Motion Blur Kernel Estimation via Deep Learning.

    PubMed

    Xu, Xiangyu; Pan, Jinshan; Zhang, Yu-Jin; Yang, Ming-Hsuan

    2017-09-18

    The success of the state-of-the-art deblurring methods mainly depends on restoration of sharp edges in a coarse-to-fine kernel estimation process. In this paper, we propose to learn a deep convolutional neural network for extracting sharp edges from blurred images. Motivated by the success of the existing filtering based deblurring methods, the proposed model consists of two stages: suppressing extraneous details and enhancing sharp edges. We show that the two-stage model simplifies the learning process and effectively restores sharp edges. Facilitated by the learned sharp edges, the proposed deblurring algorithm does not require any coarse-to-fine strategy or edge selection, thereby significantly simplifying kernel estimation and reducing computation load. Extensive experimental results on challenging blurry images demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of visual quality and run-time.

  6. Wilson Dslash Kernel From Lattice QCD Optimization

    SciTech Connect

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on the regular Xeon architecture as well.

  7. Bergman kernel and complex singularity exponent

    NASA Astrophysics Data System (ADS)

    Chen, Boyong; Lee, Hanjin

    2009-12-01

    We give a precise estimate of the Bergman kernel for the model domain defined by $\Omega_F=\{(z,w)\in \mathbb{C}^{n+1}:\operatorname{Im} w-|F(z)|^2>0\}$, where $F=(f_1,\dots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  8. Control Transfer in Operating System Kernels

    DTIC Science & Technology

    1994-05-13

    Increased modularity in operating systems only increases the importance of control transfer. My thesis is that a programming language abstraction … continuations provide allows the kernel designer, when necessary, to choose implementation performance over convenience, without affecting the design of

  9. The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz

    2016-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.

  10. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are generally correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, coded as a sequence of flower number per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n = 1460) obtained from a 98 F1 hybrid mapping population. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL that was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.

  11. Balancing continuous covariates based on Kernel densities.

    PubMed

    Ma, Zhenjun; Hu, Feifang

    2013-03-01

    The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme. But breaking continuous covariates into subcategories often changes the nature of the covariates and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on kernel density estimation, which preserves the continuity of the covariates. Simulation studies show that the proposed Kernel-Minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group sizes well balanced. It is also shown that Kernel-Minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can generally be avoided through the implementation of the new method. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Kernel Non-Rigid Structure from Motion

    PubMed Central

    Gotardo, Paulo F. U.; Martinez, Aleix M.

    2013-01-01

    Non-rigid structure from motion (NRSFM) is a difficult, underconstrained problem in computer vision. The standard approach in NRSFM constrains 3D shape deformation using a linear combination of K basis shapes; the solution is then obtained as the low-rank factorization of an input observation matrix. An important but overlooked problem with this approach is that non-linear deformations are often observed; these deformations lead to a weakened low-rank constraint due to the need to use additional basis shapes to linearly model points that move along curves. Here, we demonstrate how the kernel trick can be applied in standard NRSFM. As a result, we model complex, deformable 3D shapes as the outputs of a non-linear mapping whose inputs are points within a low-dimensional shape space. This approach is flexible and can use different kernels to build different non-linear models. Using the kernel trick, our model complements the low-rank constraint by capturing non-linear relationships in the shape coefficients of the linear model. The net effect can be seen as using non-linear dimensionality reduction to further compress the (shape) space of possible solutions. PMID:24002226

  13. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big data sets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications, in the context of binary classification, multi-class problems and regression, are then reported to show that RKELM can perform at a level of generalization performance competitive with SVM/LS-SVM at only a fraction of the computational effort.
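
    The core of the method admits a compact sketch: randomly pick m of the n training points as mapping samples and solve a ridge system for the output weights. The Gaussian kernel and hyperparameters below are illustrative, not the paper's:

        # Reduced kernel ELM sketch: random mapping samples + ridge solve.
        import numpy as np

        def rkelm_fit(X, Y, m=50, gamma=0.5, C=100.0, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=m, replace=False)]
            Knm = np.exp(-gamma * ((X[:, None] - centers[None]) ** 2).sum(-1))
            beta = np.linalg.solve(Knm.T @ Knm + np.eye(m) / C, Knm.T @ Y)
            return centers, beta

        def rkelm_predict(X, centers, beta, gamma=0.5):
            K = np.exp(-gamma * ((X[:, None] - centers[None]) ** 2).sum(-1))
            return K @ beta

        X = np.random.randn(500, 4)
        Y = np.sin(X[:, :1]).ravel()
        c, b = rkelm_fit(X, Y, m=50)
        print(np.mean((rkelm_predict(X, c, b) - Y) ** 2))  # training MSE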

  14. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  15. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
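    A simplified sketch of the Gaussian-kernel continuization step (this omits kernel equating's shrinkage constant, which keeps the mean and variance of the discrete distribution, so it is only an approximation of the published method):

```python
import numpy as np
from scipy.stats import norm

def continuize(scores, probs, h=0.6):
    """Simplified Gaussian-kernel continuization of a discrete score
    distribution: F(x) = sum_j p_j * Phi((x - x_j) / h)."""
    def cdf(x):
        x = np.atleast_1d(x)[:, None]
        return (probs[None, :] * norm.cdf((x - scores[None, :]) / h)).sum(1)
    return cdf

# Equipercentile-like equating then maps a score x on form X to
# y = G^{-1}(F(x)) on form Y, with F and G both continuized this way.
```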

  16. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  18. Convolution kernel design and efficient algorithm for sampling density correction.

    PubMed

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One common technique for determining weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation with the aim of minimizing the error in the fully reconstructed image. The resulting weights obtained using this new kernel are compared with those of various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.

  19. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on three bibliometric measures that have previously been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity', or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
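    One well-known kernel family of this type is the von Neumann kernel on a citation graph; the sketch below (parameter names assumed) shows how a single parameter interpolates between co-citation relatedness and global, HITS-like importance:

```python
import numpy as np

def neumann_kernel(A, gamma):
    """Von Neumann kernel for a citation graph with adjacency matrix A:
    K = sum_{k>=0} gamma^k M^{k+1} = M (I - gamma M)^{-1},
    where M = A.T @ A is the co-citation matrix. Small gamma emphasizes
    relatedness (co-citation); gamma near the convergence bound
    emphasizes global importance (authority)."""
    M = A.T @ A
    rho = max(abs(np.linalg.eigvals(M)))
    assert gamma < 1.0 / rho, "series converges only for gamma < 1/rho(M)"
    n = M.shape[0]
    return M @ np.linalg.inv(np.eye(n) - gamma * M)
```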

  20. Isolation of bacterial endophytes from germinated maize kernels.

    PubMed

    Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja

    2007-06-01

    The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. The genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterial strain inhibited fungal growth in vitro.

  1. Geometric tree kernels: classification of COPD from airway tree geometry.

    PubMed

    Feragen, Aasa; Petersen, Jens; Grimm, Dominik; Dirksen, Asger; Pedersen, Jesper Holst; Borgwardt, Karsten; de Bruijne, Marleen

    2013-01-01

    Methodological contributions: This paper introduces a family of kernels for analyzing (anatomical) trees endowed with vector-valued measurements made along the tree. While state-of-the-art graph and tree kernels use combinatorial tree/graph structure with discrete node and edge labels, the kernels presented in this paper can include geometric information such as branch shape, branch radius, or other vector-valued properties. In addition to being flexible in their ability to model different types of attributes, the presented kernels are computationally efficient, and some of them can easily be computed for large datasets (N ≈ 10,000) of trees with 30-600 branches. Combining the kernels with standard machine learning tools enables us to analyze the relation between disease and anatomical tree structure and geometry. Experimental results: The kernels are used to compare airway trees segmented from low-dose CT, endowed with branch shape descriptors and airway wall area percentage measurements made along the tree. Using kernelized hypothesis testing, we show that the geometric airway trees are distributed significantly differently in patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy individuals. The geometric tree kernels also give a significant increase in the classification accuracy of COPD from geometric tree structure endowed with airway wall thickness measurements in comparison with state-of-the-art methods, giving further insight into the relationship between airway wall thickness and COPD. Software: Software for computing kernels and statistical tests is available at http://image.diku.dk/aasa/software.php.

  2. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg(-1) to 159 kJ kg(-1). On the basis of the data obtained, many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression model was proposed for predicting the average particle size of the pulverized kernel.

  3. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. The virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function that more closely matches the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples; the noise can be viewed approximately as a reflection of the varieties of illuminations, facial expressions, and postures. Imposing Gaussian noise (or other types of noise) on the original training samples is a simple and feasible way to obtain virtual face samples that capture possible variations of the original samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
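    A minimal sketch of kernel collaborative representation classification, assuming precomputed kernel values and an l2-regularized coding step (the paper's virtual-sample augmentation is omitted here):

```python
import numpy as np

def kernel_crc(K_train, k_test, labels, lam=1e-2):
    """Kernel collaborative representation: code the test sample over all
    training samples in kernel space, then assign the class whose portion
    of the code reconstructs the sample with the smallest residual."""
    labels = np.asarray(labels)
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + lam * np.eye(n), k_test)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        a_c = np.where(labels == c, alpha, 0.0)
        # RKHS residual ||phi(y) - Phi a_c||^2, dropping the constant k(y,y)
        res = a_c @ K_train @ a_c - 2.0 * k_test @ a_c
        if res < best_res:
            best, best_res = c, res
    return best
```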

  4. Model-based online learning with kernels.

    PubMed

    Li, Guoqi; Wen, Changyun; Li, Zheng Guo; Zhang, Aimin; Yang, Feng; Mao, Kezhi

    2013-03-01

    New optimization models and algorithms for online learning with kernels (OLK) in classification, regression, and novelty detection are proposed in a reproducing kernel Hilbert space. Unlike the stochastic gradient descent algorithm, called the naive online regularized risk minimization algorithm (NORMA), OLK algorithms are obtained by solving a constrained optimization problem based on the proposed models. By exploiting the techniques of the Lagrange dual problem, as in Vapnik's support vector machine (SVM), the solution of the optimization problem can be obtained iteratively, and the iteration process is similar to that of NORMA. This further strengthens the foundation of OLK and enriches the research area of SVM. We also apply the obtained OLK algorithms to problems in classification, regression, and novelty detection, including real-time background subtraction, to show their effectiveness. The experimental results for both classification and regression illustrate that the accuracy of OLK algorithms is comparable with traditional SVM-based algorithms, such as SVM and least squares SVM (LS-SVM), and with state-of-the-art algorithms such as the kernel recursive least squares (KRLS) method and the projectron method, while it is slightly higher than that of NORMA. On the other hand, the computational cost of the OLK algorithm is comparable with or slightly lower than existing online methods, such as the above-mentioned NORMA, KRLS, and projectron methods, but much lower than that of SVM-based algorithms. In addition, unlike SVM and LS-SVM, OLK algorithms can be applied to non-stationary problems. The applicability of OLK to novelty detection is also illustrated by simulation results.

  5. Oil point pressure of Indian almond kernels

    NASA Astrophysics Data System (ADS)

    Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.

    2012-07-01

    The effects of preprocessing conditions such as moisture content, heating temperature, heating time, and particle size on the oil point pressure of Indian almond kernels were investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while oil point pressure decreased with increasing moisture content for fine particles.

  6. Neutron scattering kernel for solid deuterium

    NASA Astrophysics Data System (ADS)

    Granada, J. R.

    2009-06-01

    A new scattering kernel to describe the interaction of slow neutrons with solid deuterium was developed. The main characteristics of that system are contained in the formalism, including the lattice's density of states, the Young-Koppel quantum treatment of the rotations, and the internal molecular vibrations. The elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects. The results from the new model are compared with the best available experimental data, showing very good agreement.

  7. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V.

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques that can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  8. Fixed kernel regression for voltammogram feature extraction

    NASA Astrophysics Data System (ADS)

    Acevedo Rodriguez, F. J.; López-Sastre, R. J.; Gil-Jiménez, P.; Ruiz-Reyes, N.; Maldonado Bascón, S.

    2009-12-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals.
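    One plausible reading of the feature-extraction step is a least-squares fit of the signal onto kernels at fixed positions, with the fitted weights used as the compact feature vector; the sketch below assumes Gaussian kernels on a unit interval and is not the authors' exact formulation:

```python
import numpy as np

def fixed_kernel_features(v, n_centers=20, width=0.05):
    """Compress a voltammogram v (sampled on [0, 1]) into a few
    coefficients: least-squares fit onto Gaussian bumps at fixed centers;
    the fitted weights serve as features for a downstream classifier."""
    t = np.linspace(0.0, 1.0, len(v))
    centers = np.linspace(0.0, 1.0, n_centers)
    Phi = np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)
    w, *_ = np.linalg.lstsq(Phi, v, rcond=None)
    return w
```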

  9. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Corn hardness is an important property for dry- and wet-millers, food processors, and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurement is one of the more repeatable ways to quantify it. Near infrared spec...

  10. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Analysis of maize (Zea mays) kernel density and volume using micro-computed tomography and single-kernel near infrared spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...

  12. Delimiting areas of endemism through kernel interpolation.

    PubMed

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined by the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units.

  13. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined by the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971

  14. Generalized Langevin equation with tempered memory kernel

    NASA Astrophysics Data System (ADS)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

    We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that, in the presence of truncation, the particle turns from subdiffusive behavior in the short-time limit to normal diffusion in the long-time limit. The case of a harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By considering an external time-dependent periodic force, we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle's movement. Additionally, a double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter has a strong influence on the behavior of these quantities, and it is shown how it changes the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All the results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. Such truncated Langevin dynamics can be highly relevant for the description of lateral diffusion of lipids and proteins in cell membranes.
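    Schematically, the model is a generalized Langevin equation with a truncated memory kernel; the form below is a simplified stand-in (the paper's actual kernels are of three-parameter Mittag-Leffler type, and tau, alpha are illustrative parameter names):

```latex
% Generalized Langevin equation with a tempered power-law memory kernel.
\begin{align}
  \dot{v}(t) &= -\int_0^{t} \gamma(t-t')\, v(t')\,\mathrm{d}t' + \xi(t),\\
  \gamma(t)  &\propto e^{-t/\tau}\, t^{-\alpha}, \qquad 0 < \alpha < 1.
\end{align}
% For t << tau the kernel acts as a pure power law (subdiffusion); the
% exponential truncation makes the kernel integrable, so normal diffusion
% is recovered in the long-time limit, as stated in the abstract.
```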

  15. Transcriptome analysis of Ginkgo biloba kernels

    PubMed Central

    He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an

    2015-01-01

    Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing of Ginkgo kernels, at five time points, using Illumina paired-end sequencing. Approximately 25.08 Gb of clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein databases. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at the different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may interfere with the ginkgolide synthesis process in the kernel. These data substantially expand the existing transcriptome resources for Ginkgo and provide a valuable platform for revealing more about the developmental and metabolic mechanisms of this species. PMID:26500663

  16. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  17. Bergman kernel, balanced metrics and black holes

    NASA Astrophysics Data System (ADS)

    Klevtsov, Semyon

    In this thesis we explore the connections between Kähler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short-time expansion for the heat kernel, but in this case describing the long-time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, balanced metrics and Kähler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.

  18. End-use quality of soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat is known for its very hard texture, which influences how it is milled and for what products it is well suited. We developed soft kernel durum wheat lines via Ph1b-mediated homoeologous recombination with Dr. Leonard Joppa...

  19. Ambered kernels in stenospermocarpic fruit of eastern black walnut

    Treesearch

    Michele R. Warmund; J.W. Van Sambeek

    2014-01-01

    "Ambers" is a term used to describe poorly filled, shriveled eastern black walnut (Juglans nigra L.) kernels with a dark brown or black-colored pellicle that are unmarketable. Studies were conducted to determine the incidence of ambered black walnut kernels and to ascertain when symptoms were apparent in specific tissues. The occurrence of...

  20. Parametric kernel-driven active contours for image segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Qiongzhi; Fang, Jiangxiong

    2012-10-01

    We investigate a parametric kernel-driven active contour (PKAC) model, which implicitly applies kernel mapping and a piecewise-constant model to the image data via a kernel function. The proposed model consists of a curve evolution functional with three terms: global and local kernel-driven terms, which evaluate the deviation of the mapped image data within each region from the piecewise-constant model, and a regularization term expressed as the length of the evolution curves. Through the local kernel-driven term, the proposed model can effectively segment images with intensity inhomogeneity by incorporating local image information. By balancing the weight between the global and local kernel-driven terms, the proposed model can segment images with either intensity homogeneity or intensity inhomogeneity. To ensure the smoothness of the level set function and reduce the computational cost, a distance regularizing term is applied to penalize the deviation of the level set function and eliminate the need for re-initialization. Compared with the local image fitting model and the local binary fitting model, experimental results show the advantages of the proposed method in terms of computational efficiency and accuracy.

  1. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  2. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  3. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  4. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  5. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  6. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  7. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  8. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  9. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  10. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  11. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined kernel...

  12. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  13. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integral of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  14. Sugar uptake into kernels of tunicate tassel-seed maize

    SciTech Connect

    Thomas, P.A.; Felker, F.C.; Crawford, C.G.

    1990-05-01

    A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. (14)C-fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.

  15. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  16. High speed sorting of Fusarium-damaged wheat kernels

    USDA-ARS?s Scientific Manuscript database

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to the next. However, no cost-effective method is yet available to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  17. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  18. Integral Transform Methods: A Critical Review of Various Kernels

    NASA Astrophysics Data System (ADS)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.
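    The generic setting, with the Lorentz (LIT) kernel as one standard example from this literature; the notation below is illustrative:

```latex
% Integral transform of a response function R(omega) with kernel K.
\begin{equation}
  \Phi(\sigma) = \int \mathrm{d}\omega\; K(\sigma,\omega)\, R(\omega),
  \qquad
  K_{\mathrm{LIT}}(\sigma,\omega) = \frac{1}{(\omega-\sigma_R)^2 + \sigma_I^2}.
\end{equation}
% Kernels of this class allow Phi(sigma) to be computed by diagonalizing
% the Hamiltonian in a finite basis, avoiding explicit continuum states.
```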

  1. Computing the roots of complex orthogonal and kernel polynomials

    SciTech Connect

    Saylor, P.E.; Smolarski, D.C.

    1988-01-01

    A method is presented to compute the roots of complex orthogonal and kernel polynomials. An important application of complex kernel polynomials is the acceleration of iterative methods for the solution of nonsymmetric linear equations. In the real case, the roots of orthogonal polynomials coincide with the eigenvalues of the Jacobi matrix, a symmetric tridiagonal matrix obtained from the defining three-term recurrence relationship for the orthogonal polynomials. In the real case kernel polynomials are orthogonal. The Stieltjes procedure is an algorithm to compute the roots of orthogonal and kernel polynomials based on these facts. In the complex case, the Jacobi matrix generalizes to a Hessenberg matrix, the eigenvalues of which are roots of either orthogonal or kernel polynomials. The resulting algorithm generalizes the Stieltjes procedure. It may not be defined in the case of kernel polynomials, a consequence of the fact that they are orthogonal with respect to a nonpositive bilinear form. (Another consequence is that kernel polynomials need not be of exact degree.) A second algorithm that is always defined is presented for kernel polynomials. Numerical examples are described.
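    A small sketch of the real, tridiagonal case that the complex Hessenberg algorithm generalizes, using Chebyshev polynomials as a check (this illustrates the eigenvalue-roots correspondence, not the paper's complex algorithm):

```python
import numpy as np

# Roots of orthogonal polynomials are eigenvalues of the Jacobi matrix,
# shown here for Chebyshev T_n (recurrence T_{k+1} = 2x T_k - T_{k-1}).
def chebyshev_jacobi_matrix(n):
    J = np.zeros((n, n))
    J[0, 1] = J[1, 0] = 1.0 / np.sqrt(2.0)  # first off-diagonal differs
    for k in range(1, n - 1):
        J[k, k + 1] = J[k + 1, k] = 0.5
    return J

n = 6
roots = np.sort(np.linalg.eigvalsh(chebyshev_jacobi_matrix(n)))
exact = np.sort(np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n)))
assert np.allclose(roots, exact)  # Gauss-Chebyshev nodes recovered
```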

  2. Building kernels from binary strings for image matching.

    PubMed

    Odone, Francesca; Barla, Annalisa; Verri, Alessandro

    2005-02-01

    In the statistical learning framework, the use of appropriate kernels may be the key for substantial improvement in solving a given problem. In essence, a kernel is a similarity measure between input points satisfying some mathematical requirements and possibly capturing the domain knowledge. In this paper, we focus on kernels for images: we represent the image information content with binary strings and discuss various bitwise manipulations obtained using logical operators and convolution with nonbinary stencils. In the theoretical contribution of our work, we show that histogram intersection is a Mercer's kernel and we determine the modifications under which a similarity measure based on the notion of Hausdorff distance is also a Mercer's kernel. In both cases, we determine explicitly the mapping from input to feature space. The presented experimental results support the relevance of our analysis for developing effective trainable systems.
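    A minimal sketch of using histogram intersection as a precomputed Mercer kernel in an SVM (toy data; this is not the paper's binary-string pipeline):

```python
import numpy as np
from sklearn.svm import SVC

def hist_intersection(X, Z):
    """Histogram intersection kernel: K[i, j] = sum_k min(X[i, k], Z[j, k]).
    Being a Mercer kernel, it can be plugged into an SVM directly."""
    return np.minimum(X[:, None, :], Z[None, :, :]).sum(-1)

# Toy data: rows are normalized histograms.
rng = np.random.default_rng(0)
X = rng.random((40, 16)); X /= X.sum(1, keepdims=True)
y = (X[:, :8].sum(1) > 0.5).astype(int)

clf = SVC(kernel="precomputed").fit(hist_intersection(X, X), y)
print(clf.predict(hist_intersection(X[:5], X)))  # kernel vs. training set
```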

  3. OSKI: A Library of Automatically Tuned Sparse Matrix Kernels

    SciTech Connect

    Vuduc, R; Demmel, J W; Yelick, K A

    2005-07-19

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.

  4. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to a theoretical damping kernel but not to a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  6. A Robustness Testing Campaign for IMA-SP Partitioning Kernels

    NASA Astrophysics Data System (ADS)

    Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David

    2015-09-01

    With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.

  7. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
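    A sketch of the kernelized ML-EM update that the kernel method builds on, written for a generic system matrix A and kernel matrix K (names and shapes are assumptions; the anatomical prior enters only through how K is built):

```python
import numpy as np

def kernel_em(A, y, K, n_iter=50):
    """Kernelized ML-EM (a sketch): the image is parameterized as
    x = K @ a, with K built from prior-image features, and the standard
    multiplicative EM update is applied to the coefficients a."""
    a = np.ones(K.shape[1])
    sens = K.T @ (A.T @ np.ones(A.shape[0]))      # sensitivity (AK)^T 1
    for _ in range(n_iter):
        ybar = A @ (K @ a) + 1e-12                # expected counts
        a *= (K.T @ (A.T @ (y / ybar))) / sens    # EM step on coefficients
    return K @ a                                   # reconstructed image
```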

  8. An edge-adapting Laplacian kernel for nonlinear diffusion filters.

    PubMed

    Hajiaboli, Mohammad Reza; Ahmad, M Omair; Wang, Chunyan

    2012-04-01

    In this paper, a new Laplacian kernel is first developed that integrates anisotropic behavior in order to control the process of forward diffusion in the horizontal and vertical directions. It is shown that, although the new kernel reduces the process of edge distortion, it nonetheless produces artifacts in the processed image. After examining the source of this problem, an analytical scheme is devised to obtain a spatially varying kernel that adapts itself to the diffusivity function. The proposed spatially varying Laplacian kernel is then used in various nonlinear diffusion filters, from the classical Perona-Malik filter to more recent ones. The effectiveness of the new kernel in terms of quantitative and qualitative measures is demonstrated by applying it to noisy images.
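    For reference, a sketch of the classical Perona-Malik filter with the standard four-neighbor stencil, i.e., the baseline whose Laplacian kernel the paper modifies (parameter names assumed; boundaries treated as periodic for brevity):

```python
import numpy as np

def perona_malik(u, n_iter=20, kappa=0.1, dt=0.2):
    """Classical Perona-Malik diffusion: forward diffusion damped near
    edges by an edge-stopping diffusivity g."""
    u = u.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusivity
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # differences to 4 neighbors
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```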

  9. Learning kernels from biological networks by maximizing entropy.

    PubMed

    Tsuda, Koji; Noble, William Stafford

    2004-08-04

    The diffusion kernel is a general method for computing pairwise distances among all nodes in a graph, based on the sum of weighted paths between each pair of nodes. This technique has been used successfully, in conjunction with kernel-based learning methods, to draw inferences from several types of biological networks. We show that computing the diffusion kernel is equivalent to maximizing the von Neumann entropy, subject to a global constraint on the sum of the Euclidean distances between nodes. This global constraint allows for high variance in the pairwise distances. Accordingly, we propose an alternative, locally constrained diffusion kernel, and we demonstrate that the resulting kernel allows for more accurate support vector machine prediction of protein functional classifications from metabolic and protein-protein interaction networks. Supplementary results and data are available at noble.gs.washington.edu/proj/maxent
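    A minimal sketch of the (globally constrained) diffusion kernel itself; the locally constrained variant proposed in the paper is not reproduced here:

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(A, beta=1.0):
    """Diffusion kernel on a graph with adjacency matrix A:
    K = expm(beta * L) with L = A - D, i.e. a weighted sum over all
    paths between each pair of nodes."""
    L = A - np.diag(A.sum(1))
    return expm(beta * L)

# 4-node path graph: nearby nodes get larger kernel values.
A = np.diag(np.ones(3), 1); A = A + A.T
print(np.round(diffusion_kernel(A, beta=0.5), 3))
```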

  10. OSKI: A library of automatically tuned sparse matrix kernels

    NASA Astrophysics Data System (ADS)

    Vuduc, Richard; Demmel, James W.; Yelick, Katherine A.

    2005-01-01

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.

  11. Triso coating development progress for uranium nitride kernels

    SciTech Connect

    Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.

    2015-08-01

    In support of fully ceramic matrix (FCM) fuel development [1-2], coating development work is ongoing at the Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with UN kernels [3]. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets, with details described elsewhere [4]. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two-phase mixture of UO2 and UCx) kernels [5]. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties, due to physical property and dimensional differences between the UCO and UN kernels (Table 1).

  12. A novel extended kernel recursive least squares algorithm.

    PubMed

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed, combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space, and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares (KRLS) algorithm in a reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and nonlinear Rayleigh fading channel tracking, and compare its tracking performance with that of other existing algorithms.

  13. Kernel descriptors for chest x-ray analysis

    NASA Astrophysics Data System (ADS)

    Orbán, Gergely Gy.; Horváth, Gábor

    2017-03-01

    In this study, we address the problem of lesion classification in radiographic scans. We adapt image kernel functions to be applicable to high-resolution, grayscale images to improve the classification accuracy of a support vector machine. We take existing kernel functions inspired by the histogram of oriented gradients and derive an approximation that can be evaluated in time linear in the image size instead of the original quadratic complexity, enabling high-resolution input. Moreover, we propose a new variant inspired by the matched filter, to better utilize intensity space. The new kernels are improved to be scale-invariant and combined with a Gaussian kernel built from handcrafted image features. We introduce a simple multiple kernel learning framework that is robust when one of the kernels, in the current case the image feature kernel, dominates the others. The combined kernel is input to a support vector classifier. We tested our method on lesion classification in both chest radiographs and digital tomosynthesis scans. The radiographs originated from a database including 364 patients with lung nodules and 150 healthy cases. The digital tomosynthesis scans were obtained by simulation using 91 CT scans from the LIDC-IDRI database as input. The new kernels showed good separation capability: ROC AuC was in [0.827, 0.853] for the radiograph database and 0.763 for the tomosynthesis scans. Adding the new kernels to the image-feature-based classifier significantly improved accuracy: AuC increased from 0.958 to 0.967 and from 0.788 to 0.801 for the two applications.

  14. 3-D sensitivity kernels of the Rayleigh wave ellipticity

    NASA Astrophysics Data System (ADS)

    Maupin, Valérie

    2017-10-01

    The ellipticity of the Rayleigh wave at the surface depends on the seismic structure beneath and in the vicinity of the seismological station where it is measured. We derive here the expression and compute the 3-D kernels that describe this dependence with respect to S-wave velocity, P-wave velocity and density. Near-field terms as well as coupling to Love waves are included in the expressions. We show that the ellipticity kernels are the difference between the amplitude kernels of the radial and vertical components of motion. They show maximum values close to the station, but with a complex pattern, even when smoothing in a finite-frequency range is used to remove the oscillatory pattern present in mono-frequency kernels. In order to follow the usual data processing flow, we also compute and analyse the kernels of the ellipticity averaged over incoming wave backazimuth. The kernel with respect to P-wave velocity has the simplest lateral variation and is in good agreement with commonly used 1-D kernels. The kernels with respect to S-wave velocity and density are more complex, and we have not been able to find a good correlation between the 3-D and 1-D kernels. Although it is clear that the ellipticity is mostly sensitive to the structure within half a wavelength of the station, the complexity of the kernels within this zone prevents simple approximations, such as a depth dependence multiplied by a lateral variation, from being useful in the inversion of the ellipticity.

  15. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single-environment analyses for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, complex marker main effects and marker-specific interaction effects.
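
    A common construction of the Gaussian kernel in this setting scales squared Euclidean distances between marker profiles by their median before applying a bandwidth h. The sketch below assumes that form and omits the Bayesian bandwidth estimation of RKHS EB and the kernel averaging of RKHS KA.

    ```python
    import numpy as np

    def gaussian_kernel(X, h=1.0):
        """Gaussian kernel matrix from an n x p marker matrix X.

        K[i, j] = exp(-h * d2[i, j] / median(d2)), where d2[i, j] is the
        squared Euclidean distance between the marker profiles of lines
        i and j; median scaling makes the bandwidth h roughly unit-free.
        """
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        d2 = np.maximum(d2, 0.0)                  # guard tiny negatives
        med = np.median(d2[np.triu_indices_from(d2, k=1)])
        return np.exp(-h * d2 / med)
    ```

    The resulting K then replaces the linear genomic relationship matrix of GBLUP in the mixed-model equations.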

  16. Image quality of mixed convolution kernel in thoracic computed tomography

    PubMed Central

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-01-01

    Abstract The mixed convolution kernel adapts its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large- and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT. PMID:27858910

  17. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel adapts its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large- and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  18. A visualization tool for the kernel-driven model with improved ability in data analysis and kernel assessment

    NASA Astrophysics Data System (ADS)

    Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan

    2016-10-01

    The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used for many aspects of remote sensing. With the development of the kernel-driven model, there is a need to further assess the performance of newly developed kernels. The use of visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not contain a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. The visualization of the input multi-angle measurements enhances understanding of their angular sampling and allows the selection of measurements with good representativeness. The visualization of modeling results facilitates the assessment of newly developed kernels. The study shows that the visualization tool MaKeMAT can promote the widespread application of the kernel-driven model.

  19. Feasibility of near infrared spectroscopy for analyzing corn kernel damage and viability of soybean and corn kernels

    USDA-ARS?s Scientific Manuscript database

    The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...

  20. Privacy preserving RBF kernel support vector machine.

    PubMed

    Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they did not consider the characteristics of biomedical data or make full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated performance metrics very close to those of nonprivate SVMs trained on the private data.

  1. Privacy Preserving RBF Kernel Support Vector Machine

    PubMed Central

    Xiong, Li; Ohno-Machado, Lucila

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they did not consider the characteristics of biomedical data or make full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated performance metrics very close to those of nonprivate SVMs trained on the private data. PMID:25013805

  2. On the Kernelization Complexity of Colorful Motifs

    NASA Astrophysics Data System (ADS)

    Ambalath, Abhimanyu M.; Balasundaram, Radheshyam; Rao H., Chintan; Koppula, Venkata; Misra, Neeldhara; Philip, Geevarghese; Ramanujan, M. S.

    The Colorful Motif problem asks if, given a vertex-colored graph G, there exists a subset S of vertices of G such that the graph induced by G on S is connected and contains every color in the graph exactly once. The problem is motivated by applications in computational biology and is also well-studied from the theoretical point of view. In particular, it is known to be NP-complete even on trees of maximum degree three [Fellows et al, ICALP 2007]. In their pioneering paper that introduced the color-coding technique, Alon et al. [STOC 1995] show, inter alia, that the problem is FPT on general graphs. More recently, Cygan et al. [WG 2010] showed that Colorful Motif is NP-complete on comb graphs, a special subclass of the set of trees of maximum degree three. They also showed that the problem is not likely to admit polynomial kernels on forests.

  3. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution by assigning the calculation for each equally-spaced node point to a scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
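
    The per-node computation that each GPU scalar processor performs is simply a kernel sum over all particles. A minimal CPU sketch with NumPy follows (bivariate Gaussian kernel; the bandwidth, particle count, and grid size are illustrative, and the CUDA version assigns one node per thread):

    ```python
    import numpy as np

    def kde_grid(particles, nodes, h):
        """Bivariate-Gaussian KDE evaluated at equally-spaced node points.

        Each row of `particles` and `nodes` is an (x, y) point; `h` is the
        kernel bandwidth. Returns one density value per node.
        """
        pp = np.sum(particles ** 2, axis=1)
        nn = np.sum(nodes ** 2, axis=1)
        sq = (nn[:, None] + pp[None, :] - 2.0 * nodes @ particles.T) / h ** 2
        norm = 2.0 * np.pi * h ** 2 * len(particles)
        return np.exp(-0.5 * sq).sum(axis=1) / norm

    # example: 1000 particles with a bivariate normal distribution, 64 x 64 grid
    rng = np.random.default_rng(0)
    pts = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=1000)
    g = np.linspace(-4.0, 4.0, 64)
    nodes = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    density = kde_grid(pts, nodes, h=0.3)
    ```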

  4. Context quantization by kernel Fisher discriminant.

    PubMed

    Xu, Mantao; Wu, Xiaolin; Fränti, Pasi

    2006-01-01

    Optimal context quantizers for minimum conditional entropy can be constructed by dynamic programming in the probability simplex space. The main difficulty, operationally, is the resulting complex quantizer mapping function in the context space, in which the conditional entropy coding is conducted. To overcome this difficulty, we propose new algorithms for designing context quantizers in the context space based on the multiclass Fisher discriminant and the kernel Fisher discriminant (KFD). In particular, the KFD can describe linearly nonseparable quantizer cells by projecting input context vectors onto a high-dimensional curve, in which these cells become better separable. The new algorithms outperform the previous linear Fisher discriminant method for context quantization. They approach the minimum empirical conditional entropy context quantizer designed in the probability simplex space, but with a practical implementation that employs a simple scalar quantizer mapping function rather than a large lookup table.

  5. Learning molecular energies using localized graph kernels.

    PubMed

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-21

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
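
    GRAPE measures the similarity of local environments with a random walk kernel on their adjacency matrices. A generic geometric random-walk kernel sketch is shown below; how GRAPE weights the adjacency matrices by interatomic distances and species is not reproduced here.

    ```python
    import numpy as np

    def random_walk_kernel(A1, A2, lam=0.01):
        """Geometric random-walk kernel between two adjacency matrices.

        Counts matching walks of all lengths in the direct-product graph:
            k = 1^T (I - lam * kron(A1, A2))^(-1) 1,
        which converges when lam is below the reciprocal of the largest
        eigenvalue of the product graph.
        """
        W = np.kron(A1, A2)
        n = W.shape[0]
        x = np.linalg.solve(np.eye(n) - lam * W, np.ones(n))
        return float(np.sum(x))
    ```

    Such a kernel can be plugged directly into kernel ridge regression or an SVM to predict atomization energies from summed local-environment similarities.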

  6. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  7. Heat kernel methods for Lifshitz theories

    NASA Astrophysics Data System (ADS)

    Barvinsky, Andrei O.; Blas, Diego; Herrero-Valea, Mario; Nesterov, Dmitry V.; Pérez-Nadal, Guillem; Steinwachs, Christian F.

    2017-06-01

    We study the one-loop covariant effective action of Lifshitz theories using the heat kernel technique. The characteristic feature of Lifshitz theories is an anisotropic scaling between space and time. This is enforced by the existence of a preferred foliation of space-time, which breaks Lorentz invariance. In contrast to the relativistic case, covariant Lifshitz theories are only invariant under diffeomorphisms preserving the foliation structure. We develop a systematic method to reduce the calculation of the effective action for a generic Lifshitz operator to an algorithm acting on known results for relativistic operators. In addition, we present techniques that drastically simplify the calculation for operators with special properties. We demonstrate the efficiency of these methods by explicit applications.

  8. Labeled Graph Kernel for Behavior Analysis

    PubMed Central

    Zhao, Ruiqi; Martinez, Aleix M.

    2016-01-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data. PMID:26415154

  9. The flare kernel in the impulsive phase

    NASA Technical Reports Server (NTRS)

    Dejager, C.

    1986-01-01

    The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare, the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.

  10. Labeled Graph Kernel for Behavior Analysis.

    PubMed

    Zhao, Ruiqi; Martinez, Aleix M

    2016-08-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.

  11. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement of the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, with both medium kernel and sharp kernel applied. Image noise and the stent diameter were investigated. Image noise was measured both in the background vessel and in the in-stent lumen as objective image evaluation. An image noise score and a stent score were used as subjective image evaluation. The CTCA images reconstructed with IRIS showed significant noise reduction compared to those reconstructed using the FBP technique, in both the background vessel and the in-stent lumen and with both kernels; the background noise decreased by approximately 25.4% ± 8.2% with the medium kernel. IRIS with the sharp kernel showed better visualization of the stent struts and in-stent lumen than IRIS with the medium kernel. Iterative reconstruction in image space can effectively reduce image noise and improve image quality. The sharp-kernel images reconstructed with iterative reconstruction are considered the optimal images for observing coronary stents in this study.

  12. Equivalence of kernel machine regression and kernel distance covariance for multidimensional phenotype association studies.

    PubMed

    Hua, Wen-Yu; Ghosh, Debashis

    2015-09-01

    Associating genetic markers with a multidimensional phenotype is an important yet challenging problem. In this work, we establish the equivalence between two popular methods: kernel-machine regression (KMR) and kernel distance covariance (KDC). KMR is a semiparametric regression framework that models covariate effects parametrically and genetic markers non-parametrically, while KDC represents a class of methods that includes distance covariance (DC) and the Hilbert-Schmidt independence criterion (HSIC), which are nonparametric tests of independence. We show that the equivalence between the score test of KMR and the KDC statistic under certain conditions can lead to a novel generalization of the KDC test that incorporates covariates. Our contributions are 3-fold: (1) establishing the equivalence between KMR and KDC; (2) showing that the principles of KMR can be applied to the interpretation of KDC; (3) the development of a broader class of KDC statistics, where the class members are statistics corresponding to different kernel combinations. Finally, we perform simulation studies and an analysis of real data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The ADNI analysis suggests that SNPs of FLJ16124 exhibit pairwise interaction effects that are strongly correlated with changes in brain region volumes. © 2015, The International Biometric Society.
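
    As one concrete member of the KDC class, the Hilbert-Schmidt independence criterion compares pairwise similarity matrices directly. A minimal empirical HSIC sketch follows (biased V-statistic form; the covariate adjustment developed in the paper is omitted):

    ```python
    import numpy as np

    def hsic(K, L):
        """Empirical HSIC from two n x n kernel matrices: K measures
        pairwise genotype similarity, L pairwise similarity of the
        multidimensional phenotype. Larger values indicate dependence.
        """
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
        return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
    ```

    Choosing K and L to be linear, Gaussian, or distance-induced kernels recovers different members of the class; significance is typically assessed by permutation.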

  13. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    PubMed

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL), using alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either by linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generalization to the test data, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  14. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
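
    Computing a fixed-kernel 95% home range reduces to finding the density isopleth containing 95% of the utilization distribution. A minimal sketch using SciPy follows; note that gaussian_kde applies a reference-style bandwidth by default, so the least-squares cross-validation recommended above would replace that bandwidth-selection step.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def home_range_area(points, grid_n=200, level=0.95):
        """Area of the 95% fixed-kernel home range from an (n, 2) array
        of animal relocation points."""
        kde = gaussian_kde(points.T)
        pad = 3.0 * points.std(axis=0)
        xs = np.linspace(points[:, 0].min() - pad[0],
                         points[:, 0].max() + pad[0], grid_n)
        ys = np.linspace(points[:, 1].min() - pad[1],
                         points[:, 1].max() + pad[1], grid_n)
        X, Y = np.meshgrid(xs, ys)
        dens = kde(np.vstack([X.ravel(), Y.ravel()]))
        cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
        order = np.sort(dens)[::-1]               # densest cells first
        mass = np.cumsum(order) * cell            # cumulative probability
        idx = min(np.searchsorted(mass, level), order.size - 1)
        thresh = order[idx]                       # density at 95% isopleth
        return float(np.sum(dens >= thresh)) * cell
    ```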

  15. Relaxation and diffusion models with non-singular kernels

    NASA Astrophysics Data System (ADS)

    Sun, HongGuang; Hao, Xiaoxiao; Zhang, Yong; Baleanu, Dumitru

    2017-02-01

    Anomalous relaxation and diffusion processes have been widely quantified by fractional derivative models, where the definition of the fractional-order derivative remains a historical debate due to its limitation in describing different kinds of non-exponential decays (e.g. stretched exponential decay). Meanwhile, many efforts by mathematicians and engineers have been made to overcome the singularity of power function kernel in its definition. This study first explores physical properties of relaxation and diffusion models where the temporal derivative was defined recently using an exponential kernel. Analytical analysis shows that the Caputo type derivative model with an exponential kernel cannot characterize non-exponential dynamics well-documented in anomalous relaxation and diffusion. A legitimate extension of the previous derivative is then proposed by replacing the exponential kernel with a stretched exponential kernel. Numerical tests show that the Caputo type derivative model with the stretched exponential kernel can describe a much wider range of anomalous diffusion than the exponential kernel, implying the potential applicability of the new derivative in quantifying real-world, anomalous relaxation and diffusion processes.
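
    For orientation, the exponential-kernel temporal derivative discussed here is of Caputo-Fabrizio type, and the proposed extension swaps in a stretched exponential kernel. A sketch of the two definitions follows, where M(α) is a normalization function; the paper's exact normalization of the stretched variant may differ.

    ```latex
    % Caputo-type derivative with exponential kernel, 0 < \alpha < 1:
    D_t^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha}
        \int_0^t f'(\tau)\, \exp\!\Big(-\frac{\alpha\,(t-\tau)}{1-\alpha}\Big)\, d\tau

    % Stretched-exponential kernel variant, 0 < \beta \le 1
    % (\beta = 1 recovers the exponential kernel above):
    D_t^{\alpha,\beta} f(t) = \frac{M(\alpha)}{1-\alpha}
        \int_0^t f'(\tau)\, \exp\!\Big(-\frac{\alpha\,(t-\tau)^{\beta}}{1-\alpha}\Big)\, d\tau
    ```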

  16. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    PubMed

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  17. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs for computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  18. Gaussian kernel based anatomically-aided diffuse optical tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Baikejiang, Reheman; Zhang, Wei; Li, Changqing

    2017-02-01

    Image reconstruction in diffuse optical tomography (DOT) is challenging because its inverse problem is nonlinear, ill-posed and ill-conditioned. Anatomical guidance from high spatial resolution imaging modalities can substantially improve the quality of reconstructed DOT images. In this paper, inspired by the kernel methods in machine learning, we propose a kernel method to introduce anatomical information into the DOT image reconstruction algorithm. In this kernel method, the optical absorption coefficient at each finite element node is represented as a function of a set of features obtained from anatomical images such as computed tomography (CT). The kernel-based image model is directly incorporated into the forward model of DOT, which exploits the sparseness of the image in the feature space. Compared with Laplacian approaches to including structural priors, the proposed method does not require segmentation of the image into distinct regions. The proposed kernel method is validated with numerical simulations of 3D DOT reconstruction using synthetic CT data. We added 15% Gaussian noise to both the numerical DOT measurements and the simulated CT image. We have also validated the proposed method with an agar phantom experiment using anatomical guidance from a CT scan. We have studied the effects of the voxel size and of the nearest-neighborhood size used in the kernel method on the reconstructed DOT images. Our results indicate that the spatial resolution and the accuracy of the reconstructed DOT images are improved substantially after applying the anatomical guidance with the proposed kernel method.

  19. Widely Linear Complex-Valued Kernel Methods for Regression

    NASA Astrophysics Data System (ADS)

    Boloix-Tortosa, Rafael; Murillo-Fuentes, Juan Jose; Santos, Irene; Perez-Cruz, Fernando

    2017-10-01

    Usually, complex-valued RKHS are presented as a straightforward application of the real-valued case. In this paper we prove that this procedure yields a limited solution for regression. We show that another kernel, here denoted as the pseudo-kernel, is needed to learn any function in complex-valued fields. Accordingly, we derive a novel RKHS that includes it, the widely RKHS (WRKHS). When the pseudo-kernel cancels, WRKHS reduces to the complex-valued RKHS of previous approaches. We address the kernel and pseudo-kernel design, paying attention to the case where the kernel and the pseudo-kernel are complex-valued. In the experiments included, we report remarkable improvements in simple scenarios where real and imaginary parts have different similitude relations for given inputs, or where real and imaginary parts are correlated. In the context of these novel results, we revisit the problem of nonlinear channel equalization to show that the WRKHS helps to design more efficient solutions.

  20. Spectrum-based kernel length estimation for Gaussian process classification.

    PubMed

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectrums are first calculated analytically from the kernel function itself using the autocorrelation theorem, as well as estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum equals the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and could suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.
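
    A simplified 1-D illustration of the spectrum-matching idea: for the squared-exponential kernel k(r) = exp(-r^2 / 2l^2), the spectrum is proportional to exp(-l^2 w^2 / 2), so the ratio of the data periodogram at two frequencies yields the length scale in closed form. The paper equates the full analytic and empirical spectra rather than a two-frequency ratio; the frequencies and smoothing below are illustrative assumptions.

    ```python
    import numpy as np

    def rbf_length_from_spectrum(x, dt=1.0, w1=0.5, w2=1.5):
        """Estimate a squared-exponential kernel length scale l by
        matching the kernel spectrum, proportional to exp(-l^2 w^2 / 2),
        to the data periodogram at two angular frequencies w1 < w2:
            l = sqrt(2 ln(S(w1) / S(w2)) / (w2^2 - w1^2)).
        """
        freqs = 2.0 * np.pi * np.fft.rfftfreq(len(x), d=dt)
        pxx = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
        pxx = np.convolve(pxx, np.ones(5) / 5.0, mode="same")  # smooth
        s1 = np.interp(w1, freqs, pxx)
        s2 = np.interp(w2, freqs, pxx)
        return float(np.sqrt(2.0 * np.log(s1 / s2) / (w2 ** 2 - w1 ** 2)))
    ```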

  1. Training Lp norm multiple kernel learning in the primal.

    PubMed

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method, where one alternately solves SVMs in the dual and updates the kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity yields a particular type of kernel weights. Experiments on several datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method.

  2. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and the computational complexity of the proposed method are the same as those of the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.

  3. Dropping macadamia nuts-in-shell reduces kernel roasting quality.

    PubMed

    Walton, David A; Wallace, Helen M

    2010-10-01

    Macadamia nuts ('nuts-in-shell') are subjected to many impacts from dropping during postharvest handling, resulting in damage to the raw kernel. The effect of dropping on roasted kernel quality is unknown. Macadamia nuts-in-shell were dropped in various combinations of moisture content, number of drops and receiving surface in three experiments. After dropping, samples from each treatment and undropped controls were dry oven-roasted for 20 min at 130 °C, and kernels were assessed for colour, mottled colour and surface damage. Dropping nuts-in-shell onto a bed of nuts-in-shell at 3% moisture content or 20% moisture content increased the percentage of dark roasted kernels. Kernels from nuts dropped first at 20%, then 10% moisture content, onto a metal plate had increased mottled colour. Dropping nuts-in-shell at 3% moisture content onto nuts-in-shell significantly increased surface damage. Similarly, surface damage increased for kernels dropped onto a metal plate at 20%, then at 10% moisture content. Postharvest dropping of macadamia nuts-in-shell causes concealed cellular damage to kernels, the effects not evident until roasting. This damage provides the reagents needed for non-enzymatic browning reactions. Improvements in handling, such as reducing the number of drops and improving handling equipment, will reduce cellular damage and after-roast darkening. Copyright © 2010 Society of Chemical Industry.

  4. Machine learning algorithms for damage detection: Kernel-based approaches

    NASA Astrophysics Data System (ADS)

    Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.

    2016-02-01

    This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, based on the one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time-series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contribution of this study is the demonstration of the proposed algorithms for damage detection, along with a comparison of their classification performance against four other algorithms already considered reliable approaches in the literature. All of the proposed algorithms showed better classification performance than the previous ones.
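
    A minimal sketch of the one-class SVM variant: fit an RBF one-class SVM to damage-sensitive features extracted from baseline (undamaged) acceleration time-series, then flag departures. The feature choice, nu, and gamma values below are illustrative assumptions.

    ```python
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    def train_detector(baseline_features, nu=0.05):
        """Fit an RBF one-class SVM on features (e.g. AR-model
        coefficients per sensor) from the undamaged conditions;
        rows are test conditions."""
        scaler = StandardScaler().fit(baseline_features)
        model = OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
        model.fit(scaler.transform(baseline_features))
        return scaler, model

    def classify(scaler, model, features):
        """Returns +1 for conditions consistent with the baseline
        and -1 for potential damage."""
        return model.predict(scaler.transform(features))
    ```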

  5. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis, wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the multi-kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pairwise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power relative to the single best kernel for a particular scenario, but has much greater power than poorly chosen kernels.

  6. A Generalized Kernel Consensus-Based Robust Estimator

    PubMed Central

    Wang, Hanzi; Mirota, Daniel; Hager, Gregory D.

    2010-01-01

    In this paper, we present a new Adaptive-Scale Kernel Consensus (ASKC) robust estimator as a generalization of the popular and state-of-the-art robust estimators such as RANdom SAmple Consensus (RANSAC), Adaptive Scale Sample Consensus (ASSC), and Maximum Kernel Density Estimator (MKDE). The ASKC framework is grounded on and unifies these robust estimators using nonparametric kernel density estimation theory. In particular, we show that each of these methods is a special case of ASKC using a specific kernel. Like these methods, ASKC can tolerate more than 50 percent outliers, but it can also automatically estimate the scale of inliers. We apply ASKC to two important areas in computer vision, robust motion estimation and pose estimation, and show comparative results on both synthetic and real data. PMID:19926908

  7. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
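
    The processing chain (extracted features, linear discriminant analysis, back-propagation network) maps directly onto standard tooling. A hedged sketch with scikit-learn follows; the hidden-layer sizes and the split are illustrative, and the MLP stands in for the four-layer back-propagation network described above.

    ```python
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    def build_classifier(X, y):
        """X: the 17 color + morphological features per kernel image;
        y: one of the seven grain classes."""
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.1, stratify=y, random_state=0)
        clf = make_pipeline(
            LinearDiscriminantAnalysis(),           # feature reduction
            MLPClassifier(hidden_layer_sizes=(20, 20),
                          max_iter=2000, early_stopping=True))
        clf.fit(X_train, y_train)
        return clf, clf.score(X_test, y_test)
    ```

    Here early_stopping holds out part of the training data internally, loosely mirroring the 70/20/10 train/validation/test division used in the study.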

  8. Hash subgraph pairwise kernel for protein-protein interaction extraction.

    PubMed

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Li, Yanpeng

    2012-01-01

    Extracting protein-protein interactions (PPI) from biomedical literature is an important task in biomedical text mining (BioTM). In this paper, we propose a hash subgraph pairwise (HSP) kernel-based approach for this task. The key to the novel kernel is the use of hierarchical hash labels to express the structural information of subgraphs in linear time. We apply the graph kernel to compute dependency graphs representing the sentence structure for the protein-protein interaction extraction task, which can efficiently make use of the full graph structural information and, in particular, capture contiguous topological and label information ignored before. We evaluate the proposed approach on five publicly available PPI corpora. The experimental results show that our approach significantly outperforms the all-path kernel approach on all five corpora and achieves state-of-the-art performance.

  9. Kernel-based Linux emulation for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  10. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)

  11. On the asymptotic expansion of the Bergman kernel

    NASA Astrophysics Data System (ADS)

    Seto, Shoo

    Let (L, h) → (M, ω) be a polarized Kähler manifold. We define the Bergman kernel for H0(M, Lk), the space of holomorphic sections of high tensor powers of the line bundle L. In this thesis, we will study the asymptotic expansion of the Bergman kernel. We will consider the on-diagonal, near-diagonal and far off-diagonal cases, using L2 estimates to show the existence of the asymptotic expansion and to compute the coefficients in the on- and near-diagonal cases, and a heat kernel approach to show the exponential decay of the off-diagonal of the Bergman kernel for noncompact manifolds, assuming only a lower bound on Ricci curvature and C2 regularity of the metric.

  12. Bilinear analysis for kernel selection and nonlinear feature extraction.

    PubMed

    Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou

    2007-09-01

    This paper presents a unified criterion, Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. This FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases.

  13. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)

  14. Nonlinear hyperspectral unmixing based on constrained multiple kernel NMF

    NASA Astrophysics Data System (ADS)

    Cui, Jiantao; Li, Xiaorun; Zhao, Liaoying

    2014-05-01

    Nonlinear spectral unmixing constitutes an important field of research for hyperspectral imagery. An unsupervised nonlinear spectral unmixing algorithm, namely multiple kernel constrained nonnegative matrix factorization (MKCNMF), is proposed by coupling multiple-kernel selection with kernel NMF. Additionally, a minimum endmemberwise distance constraint and an abundance smoothness constraint are introduced to alleviate the uniqueness problem of NMF in the algorithm. In the MKCNMF, the two problems of optimizing the matrices and selecting the proper kernel are jointly solved. The performance of the proposed unmixing algorithm is evaluated via experiments based on synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed method outperforms some existing unmixing algorithms in terms of spectral angle distance (SAD) and abundance fractions.

  15. Resummed memory kernels in generalized system-bath master equations.

    PubMed

    Mavros, Michael G; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  16. Resummed memory kernels in generalized system-bath master equations

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  17. Kernel generalized neighbor discriminant embedding for SAR automatic target recognition

    NASA Astrophysics Data System (ADS)

    Huang, Yulin; Pei, Jifang; Yang, Jianyu; Wang, Tao; Yang, Haiguang; Wang, Bing

    2014-12-01

    In this paper, we propose a new supervised feature extraction algorithm for synthetic aperture radar automatic target recognition (SAR ATR), called generalized neighbor discriminant embedding (GNDE). Based on manifold learning, GNDE integrates class and neighborhood information to enhance the discriminative power of the extracted features. In addition, a kernelized counterpart of the algorithm, kernel-GNDE (KGNDE), is also proposed. The experiments in this paper show that the proposed algorithms have better recognition performance than PCA and KPCA.
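
    For reference, the KPCA baseline against which KGNDE is compared can be written in a few lines; the Gaussian kernel, bandwidth heuristic, and chip-vectorization below are illustrative assumptions, not details taken from the paper:

      import numpy as np

      def kernel_pca(K, n_components):
          # Double-center the kernel matrix in feature space.
          n = K.shape[0]
          one = np.ones((n, n)) / n
          Kc = K - one @ K - K @ one + one @ K @ one
          # Leading eigenpairs of the centered kernel give the embedding.
          vals, vecs = np.linalg.eigh(Kc)
          order = np.argsort(vals)[::-1][:n_components]
          vals, vecs = vals[order], vecs[:, order]
          # Projections of the training samples onto the components.
          return vecs * np.sqrt(np.clip(vals, 0.0, None))

      # X: rows are vectorized image chips (an assumed representation).
      X = np.random.rand(100, 256)
      G = X @ X.T
      sq = np.diag(G)[:, None] + np.diag(G)[None, :] - 2.0 * G
      K = np.exp(-sq / sq.mean())   # Gaussian kernel with a bandwidth heuristic
      features = kernel_pca(K, n_components=10)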

  18. CADCAM 024. DOEDEF KERNEL user's guide. Version 1.3

    SciTech Connect

    Ames, A.L.

    1986-09-01

    The Department of Energy Data Exchange Format (DOEDEF) Subgroup is developing a software environment for the effective translation of CAD-based product definitions between dissimilar CAD systems within the DOE Weapons Complex, based on the Initial Graphics Exchange Specification (IGES). The DOEDEF KERNEL is a set of callable procedures and functions that support the writing of procedures for modifying IGES-based CAD data in a RIM database. This document describes the interface to the procedures within KERNEL. 6 refs., 5 figs.

  19. The Weighted Super Bergman Kernels Over the Supermatrix Spaces

    NASA Astrophysics Data System (ADS)

    Feng, Zhiming

    2015-12-01

    The purpose of this paper is threefold. Firstly, using Howe duality, we obtain integral formulas for the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions for the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations between integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.

  20. Nonlinear stochastic system identification of skin using volterra kernels.

    PubMed

    Chen, Yi; Hunter, Ian W

    2013-04-01

    Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high-bandwidth, high-stroke Lorentz-force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo, using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used, including a fast least squares procedure and an orthogonalization method. The practical modifications, such as frequency-domain filtering, necessary for working with low-pass-filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower corrected Akaike information criterion (AICc), indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth, as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy.
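
    The underlying model is worth stating: a second-order Volterra series maps the measured force input x(t) to the position output y(t) as

      y(t) = k_0 + \int_0^T k_1(\tau)\, x(t-\tau)\, d\tau
                 + \int_0^T \!\! \int_0^T k_2(\tau_1, \tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2

    In discrete time, the lagged inputs and their pairwise products stack into an ordinary regressor matrix, so the first and second kernels can be estimated by linear least squares (the "fast least squares procedure" mentioned above). This is the standard formulation, given here as context rather than as the authors' exact notation.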

  1. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques for perturbation series are ubiquitous in physics, but they have not been systematically studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime, due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  2. Local Kernel for Brains Classification in Schizophrenia

    NASA Astrophysics Data System (ADS)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine (SVM). Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points that are characterized by local region descriptors, namely the Scale-Invariant Feature Transform (SIFT). Matching is then obtained by introducing a local kernel for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) were manually traced by experts. Preliminary results on the dorsolateral prefrontal cortex (DLPFC) region are promising: a successful classification rate of up to 75% has been obtained with this technique, improving to 85% when the subjects are stratified by sex.
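
    The abstract does not spell out the local kernel. A common construction for unordered sets of SIFT descriptors with per-feature weights, given here only as an assumed illustration, is the weighted summation match kernel

      K(X, Y) = \frac{1}{|X|\,|Y|} \sum_{x \in X} \sum_{y \in Y} w_x\, w_y\, k(x, y)

    where k is a base kernel (e.g., Gaussian) between descriptor pairs and the weights w encode the discriminative relevance of the feature groups; the resulting Gram matrix plugs directly into a nonlinear SVM.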

  3. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization; the algorithm describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
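
    For orientation, the linear MAD variates that the kernel version generalizes are built from the canonical vectors a_i, b_i of a CCA between the two image dates X and Y (this is the standard construction, stated as background rather than taken from this paper):

      M_i = a_i^{\mathsf T} X - b_i^{\mathsf T} Y, \qquad i = 1, \dots, N

    No-change pixels are then identified through the chi-square statistic \sum_i (M_i / \sigma_{M_i})^2, and a regression on those pixels normalizes one image to the other; KCCA replaces the linear canonical vectors with functions in a kernel-induced feature space.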

  4. The Dynamic Kernel Scheduler-Part 1

    NASA Astrophysics Data System (ADS)

    Adelmann, Andreas; Locans, Uldis; Suter, Andreas

    2016-10-01

    Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However, developing software that uses these hardware accelerators introduces additional challenges for the developer, including exposing increased parallelism, handling different hardware designs, and using multiple development frameworks in order to utilize devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library are written in CUDA, OpenCL, and OpenMP; depending on the available hardware, DKS selects the appropriate implementation of the algorithm. The first DKS version was created using CUDA for Nvidia GPUs and OpenMP for the Intel MIC. DKS was further integrated into OPAL (Object-oriented Parallel Accelerator Library) in order to speed up a parallel FFT-based Poisson solver and Monte Carlo simulations of particle-matter interaction used for proton therapy degrader modelling. DKS was also used together with Minuit2 for parameter fitting, where χ2 and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of DKS, first results, and plans for the future are presented in this paper.

  5. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

    Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, and metabolic networks. In this framework, performing community detection and analyzing the cluster evolution represent a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue for achieving good performance. We successfully test the model on four toy problems and on a real-world network. We also compare our model with Evolutionary Spectral Clustering, a state-of-the-art algorithm for community detection in evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.

  6. Learning molecular energies using localized graph kernels

    DOE PAGES

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    2017-03-21

    We report that recent machine learning methods make it possible to model the potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
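
    A minimal geometric random-walk kernel between two adjacency matrices, in the spirit of the similarity measure described above, can be sketched as follows; the decay parameter lam is an assumption (it must satisfy lam < 1/rho(A1 ⊗ A2) for the series to converge), and GRAPE's exact weighting and local-environment construction are not reproduced here:

      import numpy as np

      def random_walk_kernel(A1, A2, lam=0.05):
          """Geometric random-walk kernel: sum_k lam^k 1^T (A1 (x) A2)^k 1."""
          Ax = np.kron(A1, A2)          # adjacency of the direct-product graph
          n = Ax.shape[0]
          ones = np.ones(n)
          # Closed form of the geometric series via a single linear solve.
          return ones @ np.linalg.solve(np.eye(n) - lam * Ax, ones)

      # Two toy 3-atom environments (symmetric 0/1 adjacency matrices).
      A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
      A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
      print(random_walk_kernel(A1, A2))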

  7. GKS. Minimal Graphical Kernel System C Binding

    SciTech Connect

    Simons, R.W.

    1985-10-01

    GKS (the Graphical Kernel System) is both an American National Standard (ANS) and an ISO international standard graphics package. It conforms to ANS X3.124-1985 and to the May 1985 draft proposal for the GKS C Language Binding standard under development by the X3H3 Technical Committee. This implementation includes level ma (the lowest level of the ANS) and some routines from level mb. The following graphics capabilities are supported: two-dimensional lines, markers, text, and filled areas; control over color, line type, and character height and alignment; multiple simultaneous workstations and multiple transformations; and locator and choice input. Tektronix 4014 and 4115 terminals are supported, and support for other devices may be added. Since this implementation was developed under UNIX, it uses makefiles, C shell scripts, the ar library maintainer, editor scripts, and other UNIX utilities. Therefore, implementing it under another operating system may require considerable effort. Also included with GKS is the small plot package (SPP), a direct descendant of the WEASEL plot package developed at Sandia. SPP is built on the GKS; therefore, all of the capabilities of GKS are available. It is not necessary to use GKS functions, since entire plots can be produced using only SPP functions, but the addition of GKS will give the programmer added power and flexibility. SPP provides single-call plot commands, linear and logarithmic axis commands, control for optional plotting of tick marks and tick mark labels, and permits plotting of data with or without markers and connecting lines.

  8. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before the evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning in the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has the properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory (B3LYP/3-21G*), implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome, namely (i) energetic structural stability and (ii) energetically stable attachment to substrates, are both well satisfied.
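
    The kernel energy method referenced above makes such large-molecule calculations tractable by partitioning the molecule into n fragments ("kernels"). In its standard double-kernel form (a known formula of the method, though the exact partition used in this study is not restated here), the total energy is assembled as

      E_{\text{total}} \approx \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} E_{ij} \;-\; (n - 2) \sum_{i=1}^{n} E_i

    where E_i is the ab initio energy of kernel i alone and E_{ij} that of the fused pair, so only small fragments ever enter a full quantum calculation.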

  9. Generalized Bergman kernels and geometric quantization

    NASA Astrophysics Data System (ADS)

    Tuynman, G. M.

    1987-03-01

    In geometric quantization it is well known that, if f is an observable and F a polarization on a symplectic manifold (M,ω), then the condition ``Xf leaves F invariant'' (where Xf denotes the Hamiltonian vector field associated to f) is sufficient to guarantee that one does not have to compute the BKS kernel explicitly in order to know the corresponding quantum operator. It is shown in this paper that this condition on f can be weakened to ``Xf leaves F+F° invariant'' and the corresponding quantum operator is then given implicitly by formula (4.8); in particular, when F is a (positive) Kähler polarization, all observables can be quantized ``directly'' and, moreover, an ``explicit'' formula for the corresponding quantum operator is derived (Theorem 5.8). Applying this to the phase space R2n one obtains a quantization prescription which resembles the normal ordering of operators in quantum field theory. When we translate this prescription to the usual position representation of quantum mechanics, the result is (among other things) that the operator associated to a classical potential is multiplication by a function which is essentially the convolution of the potential function with a Gaussian function of width ℏ, instead of multiplication by the potential itself.

  10. Enhanced FMAM based on empirical kernel map.

    PubMed

    Wang, Min; Chen, Songcan

    2005-05-01

    The existing morphological auto-associative memory models based on morphological operations, typically including the morphological auto-associative memories (auto-MAM) proposed by Ritter et al. and our fuzzy morphological auto-associative memories (auto-FMAM), have many attractive advantages such as unlimited storage capacity, one-shot recall speed, and good tolerance to single erosive or dilative noise. However, they suffer from extreme vulnerability to mixed erosive-dilative noise, resulting in great degradation of recall performance. To overcome this shortcoming, we focus on FMAM and propose an enhanced FMAM (EFMAM) based on the empirical kernel map. Although simple, EFMAM significantly improves on auto-FMAM in both recognition accuracy under hybrid noise and computational effort. Experiments conducted on thumbnail-sized faces (28 x 23 and 14 x 11) scaled from the ORL database show average accuracies of 92%, 90%, and 88% with 40 classes under 10%, 20%, and 30% randomly generated hybrid noise, respectively, far higher than those of auto-FMAM (67%, 46%, 31%) under the same noise levels.
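
    The empirical kernel map that gives EFMAM its name can be stated compactly; in its standard form (the authors' exact variant may differ), each pattern x is re-represented by its kernel similarities to the n training patterns,

      \Phi_n(x) = \big( k(x, x_1),\, k(x, x_2),\, \dots,\, k(x, x_n) \big)^{\mathsf T}

    and the morphological memory is then built on these transformed vectors rather than on the raw patterns.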

  11. Kernel-based identification of regulatory modules.

    PubMed

    Schultheiss, Sebastian J

    2010-01-01

    The challenge of identifying cis-regulatory modules (CRMs) is an important milestone for the ultimate goal of understanding transcriptional regulation in eukaryotic cells. It has been approached, among others, by motif-finding algorithms that identify overrepresented motifs in regulatory sequences. These methods succeed in finding single, well-conserved motifs, but fail to identify combinations of degenerate binding sites, like the ones often found in CRMs. We have developed a method that combines the abilities of existing motif finding with the discriminative power of a machine learning technique to model the regulation of genes (Schultheiss et al. (2009) Bioinformatics 25, 2126-2133). Our software is called KIRMES, which stands for kernel-based identification of regulatory modules in eukaryotic sequences. Starting from a set of genes thought to be co-regulated, KIRMES can identify the key CRMs responsible for this behavior and can be used to determine, for any other gene not included on that list, whether it is also regulated by the same mechanism. Such gene sets can be derived from microarrays, from chromatin immunoprecipitation experiments combined with next-generation sequencing, or from promoter/whole-genome microarrays. The use of an established machine learning method makes the approach fast to use and robust with respect to noise. By providing easily understood visualizations for the returned results, they become interpretable and serve as a starting point for further analysis. Even for complex regulatory relationships, KIRMES can be a helpful tool in directing the design of biological experiments.

  12. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
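
    Concretely, the coefficient-space problem behind KENReg can be sketched as follows (the loss and the penalty weights are written generically and are assumptions, not the letter's exact notation):

      \min_{\alpha \in \mathbb{R}^n} \; \frac{1}{n} \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{n} \alpha_j\, k(x_i, x_j) \Big)^2 + \lambda_1 \|\alpha\|_1 + \lambda_2 \|\alpha\|_2^2

    Because the penalties act directly on the coefficient vector α rather than on an RKHS norm, k is used only to build the dictionary {k(·, x_j)} and therefore need not be a Mercer kernel, which matches the remark above.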

  13. Sparse kernel learning with LASSO and Bayesian inference algorithm.

    PubMed

    Gao, Junbin; Kwan, Paul W; Shi, Daming

    2010-03-01

    Kernelized LASSO (Least Absolute Selection and Shrinkage Operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In International conference on artificial intelligence and statistics (pp. 580-587). San Juan, Puerto Rico: MIT Press]. This paper is concerned with learning kernels under the LASSO formulation by adopting a generative Bayesian learning and inference approach. A new robust learning algorithm is proposed which produces a sparse kernel model with the capability of learning regularized parameters and kernel hyperparameters. A comparison with state-of-the-art methods for constructing sparse regression models, such as the relevance vector machine (RVM) and the local regularization assisted orthogonal least squares regression (LROLS), is given. The new algorithm is also demonstrated to possess considerable computational advantages. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Searching for efficient Markov chain Monte Carlo proposal kernels.

    PubMed

    Yang, Ziheng; Rodríguez, Carlos E

    2013-11-26

    Markov chain Monte Carlo (MCMC) or the Metropolis-Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis-Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals.
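
    A minimal random-walk Metropolis step with a Bactrian proposal can be sketched as follows. Per the construction above, Bactrian(m) is an equal mixture of two Gaussians with means ±m and variance 1 − m², so the kernel has unit variance before scaling by the step size σ; the target and the σ and m values below are illustrative choices, not the paper's tuned settings:

      import numpy as np

      rng = np.random.default_rng(0)

      def bactrian_step(x, log_target, sigma=2.0, m=0.95):
          # Sample z ~ 0.5*N(-m, 1-m^2) + 0.5*N(m, 1-m^2): a bimodal kernel
          # that avoids proposing values very close to the current state.
          z = rng.normal(m * rng.choice([-1.0, 1.0]), np.sqrt(1.0 - m * m))
          prop = x + sigma * z
          # The kernel is symmetric, so the plain Metropolis ratio applies.
          if np.log(rng.uniform()) < log_target(prop) - log_target(x):
              return prop
          return x

      # Example: sample a standard normal target.
      x, draws = 0.0, []
      for _ in range(10_000):
          x = bactrian_step(x, lambda t: -0.5 * t * t)
          draws.append(x)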

  15. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    PubMed

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
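
    The modeling contrast at the heart of this work is compactly stated: the AFT kernel machine posits a direct effect on the (log) survival time,

      \log T_i = h(Z_i) + \varepsilon_i, \qquad h \in \mathcal{H}_K,

    with h living in the RKHS generated by the chosen kernel, whereas the PH alternative places the covariate effect multiplicatively on the hazard; testing H_0: h = 0 then assesses the overall pathway effect. The notation here is a standard rendering, not necessarily the article's own.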

  16. Proteome analysis of the almond kernel (Prunus dulcis).

    PubMed

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, most of them experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are mainly involved in metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); their main molecular functions are catalytic activity (48.0%), binding (45.4%), and structural molecule activity (11.9%); and they are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is thus a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  17. Multiple Kernel Learning for Visual Object Recognition: A Review.

    PubMed

    Bucak, Serhat S.; Jin, Rong; Jain, Anil K.

    2014-07-01

    Multiple kernel learning (MKL) is a principled approach for selecting and combining kernels for a given recognition task. A number of studies have shown that MKL is a useful tool for object recognition, where each image is represented by multiple sets of features and MKL is applied to combine different feature sets. We review the state-of-the-art for MKL, including different formulations and algorithms for solving the related optimization problems, with the focus on their applications to object recognition. One dilemma faced by practitioners interested in using MKL for object recognition is that different studies often provide conflicting results about the effectiveness and efficiency of MKL. To resolve this, we conduct extensive experiments on standard datasets to evaluate various approaches to MKL for object recognition. We argue that the seemingly contradictory conclusions offered by studies are due to different experimental setups. The conclusions of our study are: (i) given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination (e.g., choosing the best performing kernel or average of kernels); and (ii) among the various approaches proposed for MKL, the sequential minimal optimization, semi-infinite programming, and level method based ones are computationally most efficient.
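
    The combination rule at issue is simple to state: MKL learns a conic combination of base kernels,

      K(x, x') = \sum_{m=1}^{M} \beta_m K_m(x, x'), \qquad \beta_m \ge 0 \ \ (\text{often } \textstyle\sum_m \beta_m = 1),

    with the weights β optimized jointly with the classifier, whereas the "simple kernel combination" baselines above fix β in advance (all mass on the best-performing single kernel, or β_m = 1/M for the average).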

  18. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  19. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension to out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to a corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over state-of-the-art parametric kernel methods.

  20. A Gabor-Block-Based Kernel Discriminative Common Vector Approach Using Cosine Kernels for Human Face Recognition

    PubMed Central

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include a cosine kernel function in the discriminating method. The KDCV with the cosine kernel is then applied to the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and the FERET databases demonstrate the effectiveness of this new approach. PMID:23365559