Science.gov

Sample records for kernels slowing-down

  1. Slowing down bubbles with sound

    NASA Astrophysics Data System (ADS)

    Poulain, Cedric; Dangla, Remie; Guinard, Marion

    2009-11-01

    We present experimental evidence that a bubble moving in a fluid on which a well-chosen acoustic noise is superimposed can be significantly slowed down, even at moderate acoustic pressure. Through mean velocity measurements, we show that a condition for this effect to occur is that the acoustic noise spectrum match or overlap the bubble's fundamental resonant mode. We image the bubble's oscillations and translational motion with high-speed video. We show that radial oscillations (Rayleigh-Plesset type) have no effect on the mean velocity, while above a critical pressure a parametric-type instability (Faraday waves) is triggered and gives rise to nonlinear surface oscillations. We show that these surface waves are subharmonic and responsible for the increase in the bubble's drag. When the acoustic intensity is increased, Faraday modes interact and the strongly nonlinear oscillations behave randomly, leading to a random bubble trajectory and consequently to an even stronger slowdown. Our observations may suggest new strategies for bubbly flow control or two-phase microfluidic devices. The effect might also apply to other elastic objects, such as globules, cells, or vesicles, in medical applications such as elasticity-based sorting.

  2. Is cosmic acceleration slowing down?

    SciTech Connect

    Shafieloo, Arman; Sahni, Varun; Starobinsky, Alexei A.

    2009-11-15

    We investigate the course of cosmic expansion in its recent past using the Constitution SN Ia sample, along with baryon acoustic oscillation (BAO) and cosmic microwave background (CMB) data. Allowing the equation of state of dark energy (DE) to vary, we find that a coasting model of the universe (q0 = 0) fits the data about as well as Lambda cold dark matter. This effect, which is most clearly seen using the recently introduced Om diagnostic, corresponds to an increase of Om and q at redshifts z ≲ 0.3. This suggests that cosmic acceleration may have already peaked and that we are currently witnessing its slowing down. The case for evolving DE strengthens if a subsample of the Constitution set consisting of SNLS+ESSENCE+CfA SN Ia data is analyzed in combination with BAO+CMB data. The effect we observe could correspond to DE decaying into dark matter (or something else).
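
    For readers unfamiliar with the Om diagnostic invoked above: it is defined as Om(z) = (H^2(z)/H0^2 - 1)/((1+z)^3 - 1) and is constant, equal to the matter density Omega_m0, for flat Lambda-CDM, so any redshift dependence signals evolving dark energy. A minimal sketch (illustrative numbers, not the paper's data):

```python
import numpy as np

def om_diagnostic(z, E):
    """Om diagnostic of Sahni et al.: constant (= Omega_m0) for flat
    Lambda-CDM, redshift-dependent for evolving dark energy."""
    return (E**2 - 1.0) / ((1.0 + z)**3 - 1.0)

# Illustrative flat Lambda-CDM expansion history with Omega_m0 = 0.3.
z = np.linspace(0.05, 1.5, 30)
E = np.sqrt(0.3 * (1.0 + z)**3 + 0.7)   # E(z) = H(z)/H0
print(om_diagnostic(z, E))              # ~0.3 at every z, as expected
```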

  3. PT -symmetric slowing down of decoherence

    NASA Astrophysics Data System (ADS)

    Gardas, Bartłomiej; Deffner, Sebastian; Saxena, Avadh

    2016-10-01

    We investigate PT-symmetric quantum systems ultraweakly coupled to an environment. We find that such open systems evolve under PT-symmetric, purely dephasing and unital dynamics. The dynamical map describing the evolution is then determined explicitly using a quantum canonical transformation. Furthermore, we provide an explanation of why PT-symmetric dephasing-type interactions lead to a critical slowing down of decoherence. This effect is further exemplified with an experimentally relevant system, a PT-symmetric qubit easily realizable, e.g., in optical or microcavity experiments.

  4. Lead Slowing Down Spectrometer Status Report

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Bonebrake, Eric; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, Victor A.; Haight, R. C.; Imel, G. R.; Kulisek, Jonathan A.; O'Donnell, J. M.; Weltz, Adam

    2012-06-07

    This report documents the progress made in the first half of FY2012 in the MPACT-funded Lead Slowing Down Spectrometer project. Significant progress has been made on algorithm development. We have an improved understanding of the experimental response of the LSDS to fuel-related materials. The calibration of the ultra-depleted uranium foils was completed, but the results are inconsistent from measurement to measurement. Future work includes developing a conceptual model of an LSDS system to assay plutonium in used fuel, improving agreement between simulations and measurements, designing a thorium fission chamber, and evaluating additional detector techniques.

  5. PT-symmetric slowing down of decoherence

    DOE PAGES

    Gardas, Bartlomiej; Deffner, Sebastian; Saxena, Avadh Behari

    2016-10-27

    Here, we investigate PT-symmetric quantum systems ultraweakly coupled to an environment. We find that such open systems evolve under PT-symmetric, purely dephasing and unital dynamics. The dynamical map describing the evolution is then determined explicitly using a quantum canonical transformation. Furthermore, we provide an explanation of why PT-symmetric dephasing-type interactions lead to a critical slowing down of decoherence. This effect is further exemplified with an experimentally relevant system, a PT-symmetric qubit easily realizable, e.g., in optical or microcavity experiments.

  6. Lead Slowing Down Spectrometer Research Plans

    SciTech Connect

    Warren, Glen A.; Kulisek, Jonathan A.; Gavron, Victor; Danon, Yaron; Weltz, Adam; Harris, Jason; Stewart, T.

    2013-03-22

    The MPACT-funded Lead Slowing Down Spectrometry (LSDS) project has been evaluating the feasibility of using LSDS techniques to assay fissile isotopes in used nuclear fuel assemblies. The approach has the potential to provide considerable improvement in the assay of fissile isotopic masses in fuel assemblies compared to other non-destructive techniques, in a direct and independent manner. The LSDS collaboration suggests that the next step in empirically testing the feasibility is to conduct measurements on fresh fuel assemblies, to investigate self-attenuation, and on fresh mixed-oxide (MOX) fuel rodlets, to better understand the extraction of masses for 235U and 239Pu. While progressing toward these goals, the collaboration also strongly recommends the continued development of enabling technology, such as detectors and algorithms, which could provide significant performance benefits.

  7. A Comprehensive Investigation on the Slowing Down of Cosmic Acceleration

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Hu, Yazhou; Li, Miao; Li, Nan

    2016-04-01

    Shafieloo et al. first proposed the possibility that the current cosmic acceleration (CA) is slowing down. However, this is rather counterintuitive, because a slowing-down CA cannot be accommodated in most mainstream cosmological models. In this work, by exploring the evolutionary trajectories of the dark energy equation of state w(z) and the deceleration parameter q(z), we present a comprehensive investigation of the slowing down of CA from both the theoretical and the observational sides. On the theoretical side, we study the impact of different w(z) using six parametrization models, and we then discuss the effects of spatial curvature. On the observational side, we investigate the effects of different Type Ia supernova (SN Ia), baryon acoustic oscillation (BAO), and cosmic microwave background (CMB) data. We find that (1) the evolution of CA is insensitive to the specific form of w(z); in contrast, a non-flat universe favors a slowing-down CA more than a flat universe. (2) The SNLS3 SN Ia data sets favor a slowing-down CA at the 1σ confidence level, while the JLA SN Ia samples prefer an eternal CA; in contrast, the effects of different BAO data are negligible. (3) Compared with CMB distance-prior data, full CMB data favor a slowing-down CA more. (4) Due to the low significance, the slowing down of CA remains a theoretical possibility that cannot be confirmed by current observations.

  8. Critical Slowing Down Governs the Transition to Neuron Spiking

    PubMed Central

    Meisel, Christian; Klaus, Andreas; Kuehn, Christian; Plenz, Dietmar

    2015-01-01

    Many complex systems have been found to exhibit critical transitions, or so-called tipping points: sudden changes to a qualitatively different system state. These changes can profoundly impact the functioning of a system, ranging from controlled state switching to catastrophic breakdown; signals that predict critical transitions are therefore highly desirable. To this end, research efforts have focused on qualitative changes in markers related to a system's tendency to recover more slowly from a perturbation the closer it gets to the transition, a phenomenon called critical slowing down. The recently studied scaling of critical slowing down offers a refined path to understanding critical transitions: identifying the transition mechanism and improving transition prediction using scaling laws. Here, we outline and apply this strategy for the first time in a real-world system by studying the transition to spiking in neurons of the mammalian cortex. The dynamical systems approach has identified two robust mechanisms for the transition from subthreshold activity to spiking: the saddle-node and Hopf bifurcations. Although theory provides precise predictions on signatures of critical slowing down near the bifurcation to spiking, quantitative experimental evidence has been lacking. Using whole-cell patch-clamp recordings from pyramidal neurons and fast-spiking interneurons, we show that (1) the transition to spiking dynamically corresponds to a critical transition exhibiting slowing down, (2) the scaling laws suggest a saddle-node bifurcation governing the slowing down, and (3) these precise scaling laws can be used to predict the bifurcation point from a limited window of observation. To our knowledge this is the first report of scaling laws of critical slowing down in an experiment. They present a missing link for a broad class of neuroscience modeling and suggest improved estimation of tipping points by incorporating scaling laws of critical slowing down.
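
    The saddle-node mechanism identified above implies a concrete scaling: in the normal form dx/dt = mu + x^2 (mu < 0), the stable state x* = -sqrt(-mu) has linear recovery rate 2*sqrt(-mu), so recovery times diverge as the bifurcation at mu = 0 is approached. A minimal numerical check of that scaling law (not the authors' patch-clamp analysis):

```python
import numpy as np

def recovery_rate(mu, dt=1e-3, x0_offset=1e-3, t_max=80.0):
    """Euler-integrate the saddle-node normal form dx/dt = mu + x**2
    (mu < 0) from a small perturbation off the stable state
    x* = -sqrt(-mu), then fit the exponential relaxation rate."""
    x_star = -np.sqrt(-mu)
    x, t, ts, ds = x_star + x0_offset, 0.0, [], []
    while t < t_max and abs(x - x_star) > 1e-9:
        ts.append(t)
        ds.append(abs(x - x_star))
        x += dt * (mu + x * x)
        t += dt
    # slope of log|x - x*| vs t is minus the recovery rate
    return -np.polyfit(ts, np.log(ds), 1)[0]

for mu in (-0.1, -0.01, -0.001):
    # theory: rate = 2*sqrt(-mu), i.e. recovery time grows as |mu|**-0.5
    print(f"mu={mu:7.3f}  measured={recovery_rate(mu):.4f}  "
          f"theory={2*np.sqrt(-mu):.4f}")
```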

  9. A New Approach to Charged Particle Slowing Down and Dispersion

    SciTech Connect

    Stevens, David E.

    2016-03-24

    The process by which super-thermal ions slow down against background Coulomb potentials arises in many fields of study. In particular, it is one of the main mechanisms by which the mass and energy of fusion reaction products are deposited back into the background. Many of these fields are characterized by length and time scales of the same magnitude as the range and duration of the trajectories of these particles before they thermalize into the background. This requires simulating the slowing-down process by numerically integrating the velocities and energies of the particles. This paper first presents a simple introduction to the required plasma physics, followed by a description of the numerical scheme used to integrate a beam of particles. The algorithm is unique in that it combines, in an integrated manner, a second-order integration of the slowing down with the dispersion of the particle beam; these two processes are typically computed in isolation from each other. A simple test problem of a beam of alpha particles slowing down against an inert background of deuterium and tritium, with varying properties of both the beam and the background, illustrates the utility of the algorithm. This is followed by conclusions and appendices; the appendices define the notation, units, and several useful identities.
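
    As a rough illustration of the kind of integration the paper describes, the sketch below advances dv/dt = -nu(v)*v with a second-order midpoint step. The drag law nu(v) and the thermalization cutoff v_th are made-up stand-ins, not the paper's algorithm (which also couples in beam dispersion):

```python
import numpy as np

def slow_down(v0, nu, v_th=0.3, dt=1e-2, n_steps=2000):
    """Midpoint (second-order) integration of dv/dt = -nu(v)*v for a
    super-thermal ion; integration stops once v reaches the (assumed)
    thermal speed v_th, standing in for thermalization."""
    v, history = v0, [v0]
    for _ in range(n_steps):
        if v <= v_th:
            break
        v_mid = v - 0.5 * dt * nu(v) * v
        v -= dt * nu(v_mid) * v_mid
        history.append(v)
    return np.array(history)

# Made-up drag law: the rate stiffens as v drops toward the background,
# loosely mimicking the shape of Coulomb drag (illustrative only).
nu = lambda v: 0.5 * (1.0 + v**-3)
trace = slow_down(v0=5.0, nu=nu)
print(trace[::100])   # monotone deceleration toward v_th
```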

  10. Report on First Activations with the Lead Slowing Down Spectrometer

    SciTech Connect

    Warren, Glen A.; Mace, Emily K.; Pratt, Sharon L.; Stave, Sean; Woodring, Mitchell L.

    2011-03-03

    On February 17 and 18, 2011, six items were irradiated with neutrons using the Lead Slowing Down Spectrometer. After irradiation, dose measurements and gamma-spectrometry measurements were completed on all of the samples. No contamination was found on the samples, and only one produced a measurable dose. Gamma-spectroscopy measurements qualitatively agreed with expectations based on the materials, with the exception of silver. We observed activation in the room in general, mostly due to 56Mn and 24Na. Most of the activation was short lived, with half-lives on the scale of hours, except for 198Au, which has a half-life of 2.7 days.

  11. Slowing down light using a dendritic cell cluster metasurface waveguide

    NASA Astrophysics Data System (ADS)

    Fang, Z. H.; Chen, H.; Yang, F. S.; Luo, C. R.; Zhao, X. P.

    2016-11-01

    Slowing down or even stopping light is the first task in realising optical information transmission and storage. Theoretical studies have revealed that metamaterials can slow down or even stop light; however, the difficulty of preparing metamaterials that operate in visible light hinders progress in this research. Metasurfaces provide a new opportunity for progress. In this paper, we propose a dendritic cell cluster metasurface consisting of dendritic structures. Simulation results show that the dendritic structure can realise abnormal reflection and refraction effects. Single- and double-layer dendritic metasurfaces that respond in visible light were prepared by electrochemical deposition, and abnormal Goos-Hänchen (GH) shifts were obtained experimentally. The rainbow trapping effect was observed in a waveguide constructed from the dendritic metasurface sample: the incident white light was separated into seven colours ranging from blue to red. The measured transmission energy in the waveguide showed that the energy escaping from the waveguide was zero at the resonant frequency of the sample under a certain amount of incident light. The proposed metasurface has a simple preparation process, functions in visible light, and can be readily extended to the infrared band and communication wavelengths.

  12. Slowing down light using a dendritic cell cluster metasurface waveguide.

    PubMed

    Fang, Z H; Chen, H; Yang, F S; Luo, C R; Zhao, X P

    2016-11-25

    Slowing down or even stopping light is the first task in realising optical information transmission and storage. Theoretical studies have revealed that metamaterials can slow down or even stop light; however, the difficulty of preparing metamaterials that operate in visible light hinders progress in this research. Metasurfaces provide a new opportunity for progress. In this paper, we propose a dendritic cell cluster metasurface consisting of dendritic structures. Simulation results show that the dendritic structure can realise abnormal reflection and refraction effects. Single- and double-layer dendritic metasurfaces that respond in visible light were prepared by electrochemical deposition, and abnormal Goos-Hänchen (GH) shifts were obtained experimentally. The rainbow trapping effect was observed in a waveguide constructed from the dendritic metasurface sample: the incident white light was separated into seven colours ranging from blue to red. The measured transmission energy in the waveguide showed that the energy escaping from the waveguide was zero at the resonant frequency of the sample under a certain amount of incident light. The proposed metasurface has a simple preparation process, functions in visible light, and can be readily extended to the infrared band and communication wavelengths.

  13. Slowing down light using a dendritic cell cluster metasurface waveguide

    PubMed Central

    Fang, Z. H.; Chen, H.; Yang, F. S.; Luo, C. R.; Zhao, X. P.

    2016-01-01

    Slowing down or even stopping light is the first task in realising optical information transmission and storage. Theoretical studies have revealed that metamaterials can slow down or even stop light; however, the difficulty of preparing metamaterials that operate in visible light hinders progress in this research. Metasurfaces provide a new opportunity for progress. In this paper, we propose a dendritic cell cluster metasurface consisting of dendritic structures. Simulation results show that the dendritic structure can realise abnormal reflection and refraction effects. Single- and double-layer dendritic metasurfaces that respond in visible light were prepared by electrochemical deposition, and abnormal Goos-Hänchen (GH) shifts were obtained experimentally. The rainbow trapping effect was observed in a waveguide constructed from the dendritic metasurface sample: the incident white light was separated into seven colours ranging from blue to red. The measured transmission energy in the waveguide showed that the energy escaping from the waveguide was zero at the resonant frequency of the sample under a certain amount of incident light. The proposed metasurface has a simple preparation process, functions in visible light, and can be readily extended to the infrared band and communication wavelengths. PMID:27886279

  14. Overcoming Critical Slowing Down in Quantum Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Evertz, Hans Gerd; Marcu, Mihai

    The classical (d+1)-dimensional spin systems used for the simulation of quantum spin systems in d dimensions are, quite generally, vertex models. Standard simulation methods for such models suffer strongly from critical slowing down. Recently, we developed the loop algorithm, a new type of cluster algorithm that to a large extent overcomes critical slowing down for vertex models. We present the basic ideas using the example of the F model, a special case of the 6-vertex model. Numerical results clearly demonstrate the effectiveness of the loop algorithm. Then, using the framework for cluster algorithms developed by Kandel and Domany, we explain how to adapt our algorithm to the 6-vertex and 8-vertex models, which are relevant for spin-1/2 systems. The techniques presented here can be applied without modification to 2-dimensional spin-1/2 systems, provided that in the Suzuki-Trotter formula the Hamiltonian is broken up into 4 sums of link terms. Generalizations to more complicated situations (higher spins, different uses of the Suzuki-Trotter formula) are, at least in principle, straightforward.

  15. Critical slowing down in purely elastic 'snap-through' instabilities

    NASA Astrophysics Data System (ADS)

    Gomez, Michael; Moulton, Derek E.; Vella, Dominic

    2016-10-01

    Many elastic structures have two possible equilibrium states: from umbrellas that become inverted in a sudden gust of wind, to nanoelectromechanical switches, origami patterns and the hopper popper, which jumps after being turned inside-out. These systems typically transition from one state to the other via a rapid 'snap-through'. Snap-through allows plants to gradually store elastic energy before releasing it suddenly to generate rapid motions, as in the Venus flytrap. Similarly, the beak of the hummingbird snaps through to catch insects mid-flight, while technological applications are increasingly exploiting snap-through instabilities. In all of these scenarios, it is the ability to repeatedly generate fast motions that gives snap-through its utility. However, estimates of the speed of snap-through suggest that it should occur more quickly than is usually observed. Here, we study the dynamics of snap-through in detail, showing that, even without dissipation, the dynamics slow down close to the snap-through transition. This is reminiscent of the slowing down observed in critical phenomena, and provides a handheld demonstration of such phenomena, as well as a new tool for tuning dynamic responses in applications of elastic bistability.

  16. The promise of slow down ageing may come from curcumin.

    PubMed

    Sikora, E; Bielak-Zmijewska, A; Mosieniak, G; Piwocka, K

    2010-01-01

    No genes exist that have been selected to promote ageing. The evolutionary theory of ageing tells us that there is a trade-off between body maintenance and investment in reproduction. It is commonly accepted that the ageing process is driven by the lifelong accumulation of molecular damage, mainly due to reactive oxygen species (ROS) produced by mitochondria as well as random errors in DNA replication. Although ageing itself is not a disease, numerous diseases are age-related, such as cancer, Alzheimer's disease, atherosclerosis, metabolic disorders and others, likely caused by low-grade inflammation driven by oxidative stress and manifested by increased levels of pro-inflammatory cytokines such as IL-1, IL-6 and TNF-alpha, encoded by genes activated by the transcription factor NF-kappaB. It is believed that ageing is plastic and can be slowed down by caloric restriction as well as by some nutraceuticals. As low-grade inflammation is believed to contribute substantially to ageing, slowing ageing and postponing the onset of age-related diseases may be achieved by blocking NF-kappaB-dependent inflammation. In this review we consider the possibility that the natural spice curcumin, a powerful antioxidant, anti-inflammatory agent and efficient inhibitor of NF-kappaB and of the mTOR signaling pathway (which overlaps that of NF-kappaB), could slow down ageing.

  17. Soil fauna slow down decomposition of leaf litter

    NASA Astrophysics Data System (ADS)

    Frouz, J.

    2009-04-01

    In a one-year laboratory incubation experiment, the decomposition of alder, oak and willow litter was compared with the decomposition of excrements of St. Mark's fly larvae (Bibio marci) produced from the same litter. Decomposition (the amount of CO2 produced) was significantly higher in leaf litter than in excrements. Invertebrates affect litter in many ways: litter is fragmented mechanically during feeding, exposed to an alkaline environment and enzymes in the gut, and coated with clay minerals during gut passage. To explore the potential mechanisms responsible for this reduction of decomposition, three litter treatments that mimic certain aspects of invertebrate influence were prepared: fragmented litter, litter treated with alkaline solution, and litter mixed with clay (kaolinite). Among these treatments, alkalization had the strongest slowing effect on decomposition.

  18. Report on Second Activations with the Lead Slowing Down Spectrometer

    SciTech Connect

    Stave, Sean C.; Mace, Emily K.; Pratt, Sharon L.; Warren, Glen A.

    2012-04-27

    On August 18 and 19, 2011, five items were irradiated with neutrons using the Lead Slowing Down Spectrometer (LSDS). After irradiation, dose measurements and gamma-spectrometry measurements were completed on all of the samples. No contamination was found on the samples, and only one produced a measurable dose. Gamma-spectroscopy measurements qualitatively agreed with expectations based on the materials. As during the first activation run, we observed activation in the room in general, mostly due to 56Mn and 24Na. Most of the activation of the samples was short lived, with half-lives on the scale of hours to days, except for 60Co, which has a half-life of 5.3 years.

  19. Cosmic slowing down of acceleration for several dark energy parametrizations

    SciTech Connect

    Magaña, Juan; Cárdenas, Víctor H.; Motta, Verónica

    2014-10-01

    We further investigate the slowing down of the acceleration of the universe for five parametrizations of the equation of state of dark energy, using four sets of Type Ia supernovae data. In a maximum probability analysis we also use the baryon acoustic oscillation and cosmic microwave background observations. We find that the low-redshift transition of the deceleration parameter appears, independently of the parametrization, when using supernovae data alone, except for the Union 2.1 sample. This feature disappears once we combine the Type Ia supernovae data with high-redshift data. We conclude that the rapid variation of the deceleration parameter is independent of the parametrization. We also find further evidence of tension among the supernovae samples, as well as between the low- and high-redshift data.

  20. Less is more: improving proteostasis by translation slow down.

    PubMed

    Sherman, Michael Y; Qian, Shu-Bing

    2013-12-01

    Protein homeostasis, or proteostasis, refers to a proper balance between the synthesis, maturation, and degradation of cellular proteins. A growing body of evidence suggests that the ribosome serves as a hub for co-translational folding, chaperone interaction, degradation, and stress response. Accordingly, in addition to the chaperone network and the proteasome system, the ribosome has emerged as a major factor in protein homeostasis. Recent work has revealed that high rates of translation elongation negatively affect both the fidelity of translation and the co-translational folding of nascent polypeptides. Accordingly, slowing down translation can significantly improve protein folding. In this review, we discuss how to target translational processes to improve proteostasis, and the implications for treating protein misfolding diseases.

  21. Lead Slowing-Down Spectrometer Research at LANSCE

    NASA Astrophysics Data System (ADS)

    Haight, R. C.; Bredeweg, T. A.; Devlin, M.; Gavron, A.; Jandel, M.; O'Donnell, J. M.; Wender, S. A.; Bélier, G.; Granier, T.; Laurent, B.; Taieb, J.; Danon, Y.; Thompson, J. T.

    2013-03-01

    The lead slowing-down spectrometer (LSDS) at Los Alamos is a 20-ton cube of lead with numerous channels: one for the proton beam from the LANSCE accelerator and others for samples and detectors. A pulsed spallation neutron source at the center of the cube is produced by the 800 MeV proton beam incident on an air-cooled tungsten target. Neutrons from this source are quickly downscattered by various reactions until their energies are below the first excited state of 207Pb (0.57 MeV). After that, the neutrons slow down by elastic scattering, losing on average 1% of their energy per collision. The mean energy of the neutron distribution then changes with time as ~1/(t + t0)^2, where t0 is a constant. The low neutron absorption cross section of lead and the multiple scattering of the neutrons lead to a very large neutron flux, approximately 1000 times that available in beams at the intense neutron source at the Lujan Center at LANSCE. Thus nuclear cross sections can be measured with very small samples or, conversely, very small cross sections can be measured with somewhat larger samples. Present research with the LSDS at LANSCE includes measuring fission cross sections of short-lived isotopes such as 237U, developing techniques to measure (n,p) and (n,α) cross sections, testing new types of detectors for use in the extreme radiation environment, and, in an applied context, assessing the possibility of measuring the isotopic content of actinide samples with the eventual goal of characterizing fresh and used reactor fuel rods.
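
    A small worked example of the quoted time-energy relation E(t) ~ 1/(t + t0)^2. The calibration constants below (K = 165 keV·us^2, t0 = 0.3 us) are typical of lead spectrometers reported in the literature, not LANSCE-specific values:

```python
# Slowing-down time-to-energy mapping E(t) = K / (t + t0)**2 quoted above.
# K and t0 are instrument-specific calibration constants; the values here
# are typical of lead spectrometers and are assumptions, not LANL's numbers.
K = 165.0    # keV * us**2
t0 = 0.3     # us

def mean_energy_keV(t_us):
    return K / (t_us + t0)**2

for t in (1.0, 10.0, 100.0, 1000.0):   # slowing-down time in microseconds
    print(f"t = {t:7.1f} us  ->  E ~ {mean_energy_keV(t) * 1e3:10.2f} eV")
```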

  22. Do attractive interactions slow down diffusion in polymer nanocomposites?

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Chun; Gam, Sangah; Meth, Jeffrey S.; Clarke, Nigel; Winey, Karen I.; Composto, Russell J.

    2013-03-01

    Diffusion of deuterated poly(methyl methacrylate) (dPMMA) is slowed down in a PMMA matrix filled with spherical silica nanoparticles (NPs) ranging from 13 to 50 nm in diameter. The NPs are well dispersed in the matrix up to 40 vol%. The normalized diffusion coefficient (D/D0) decreases as the volume fraction increases, and the decrease is stronger as the NP size decreases. When plotted against the confinement parameter ID/2Rg, where ID is the interparticle distance and 2Rg is the probe size, the D/D0 values collapse onto a master curve. In the strongly confined regime, where ID < 2Rg, D/D0 decreases dramatically, by up to 80%, whereas in the weakly confined regime, where ID > 2Rg, D/D0 decreases moderately. Even when ID is eight times larger than 2Rg, a 15% reduction in diffusion is observed. Comparing the master curve of this study, an attractive system, with that of a weakly interacting system studied previously indicates that attractive interactions do not significantly alter center-of-mass polymer diffusion in polymer nanocomposites.

  23. Slowing Down Downhill Folding: A Three-Probe Study

    SciTech Connect

    Kim, Seung Joong; Matsumura, Yoshitaka; Dumont, Charles; Kihara, Hiroshi; Gruebele, Martin

    2009-09-11

    The mutant Tyr22Trp/Glu33Tyr/Gly46Ala/Gly48Ala of λ repressor fragment λ6-85 was previously assigned as an incipient downhill folder. We slow down its folding in a cryogenic water-ethylene-glycol solvent (-18 to -28 °C). The refolding kinetics are probed by small-angle x-ray scattering, circular dichroism, and fluorescence to measure the radius of gyration, the average secondary structure content, and the native packing around the single tryptophan residue. The main resolved kinetic phase of the mutant is probe independent and faster than the main phase observed for the pseudo-wild-type. Excess helical structure formed early on by the mutant may reduce the formation of turns and prevent the formation of compact misfolded states, speeding up the overall folding process. Extrapolation of our main cryogenic folding phase and previous T-jump measurements to 37 °C yields nearly the same refolding rate as extrapolated by Oas and co-workers from NMR line-shape data. Taken together, all the data consistently indicate a folding speed limit of ≈4.5 μs for this fast folder.

  24. Ligands Slow Down Pure-Dephasing in Semiconductor Quantum Dots.

    PubMed

    Liu, Jin; Kilina, Svetlana V; Tretiak, Sergei; Prezhdo, Oleg V

    2015-09-22

    It is well-known experimentally and theoretically that surface ligands provide additional pathways for energy relaxation in colloidal semiconductor quantum dots (QDs). They increase the rate of inelastic charge-phonon scattering and provide trap sites for the charges. We show that, surprisingly, ligands have the opposite effect on elastic electron-phonon scattering. Our simulations demonstrate that elastic scattering slows down in CdSe QDs passivated with ligands compared to that in bare QDs. As a result, the pure-dephasing time is increased, and the homogeneous luminescence line width is decreased in the presence of ligands. The lifetime of quantum superpositions of single and multiple excitons increases as well, providing favorable conditions for multiple excitons generation (MEG). Ligands reduce the pure-dephasing rates by decreasing phonon-induced fluctuations of the electronic energy levels. Surface atoms are most mobile in QDs, and therefore, they contribute greatly to the electronic energy fluctuations. The mobility is reduced by interaction with ligands. A simple analytical model suggests that the differences between the bare and passivated QDs persist for up to 5 nm diameters. Both low-frequency acoustic and high-frequency optical phonons participate in the dephasing processes in bare QDs, while low-frequency acoustic modes dominate in passivated QDs. The theoretical predictions regarding the pure-dephasing time, luminescence line width, and MEG can be verified experimentally by studying QDs with different surface passivation.

  25. Lead Slowing Down Spectrometer FY2013 Annual Report

    SciTech Connect

    Warren, Glen A.; Kulisek, Jonathan A.; Gavron, Victor A.; Danon, Yaron; Weltz, Adam; Harris, Jason; Stewart, T.

    2013-10-29

    Executive Summary: The Lead Slowing Down Spectrometry (LSDS) project, funded by the Material Protection, Accounting, and Control Technologies (MPACT) campaign, has been evaluating the feasibility of using LSDS techniques to assay fissile isotopes in used nuclear fuel assemblies. The approach has the potential to provide considerable improvement in the assay of fissile isotopic masses in fuel assemblies compared to other non-destructive techniques, in a direct and independent manner. This report is a high-level summary of the progress completed in FY2013, which included:

    • Fabrication of a 4He scintillator detector to detect fast neutrons in the LSDS operating environment. Testing of the detector will be conducted in FY2014.

    • Design of a large-area 232Th fission chamber.

    • Analysis using the Los Alamos National Laboratory perturbation model, which estimated the number of source neutrons required for an LSDS measurement to be 10^16.

    • Application of the algorithms developed at Pacific Northwest National Laboratory to the LSDS measurement data of various fissile samples collected in 2012. The results showed that 235U could be measured to 2.7% and 239Pu to 6.3%. Significant effort is still needed to demonstrate the applicability of these algorithms to used-fuel assemblies, but the results reported here are encouraging progress toward that goal.

    • Development and cost analysis of a research plan for the next critical demonstration measurements. The plan suggests measurements on fresh fuel sub-assemblies as a means to experimentally test self-attenuation, and the use of fresh mixed-oxide fuel as a means to test the simultaneous measurement of 235U and 239Pu.

  26. Ketogenic diet slows down mitochondrial myopathy progression in mice.

    PubMed

    Ahola-Erkkilä, Sofia; Carroll, Christopher J; Peltola-Mjösund, Katja; Tulkki, Valtteri; Mattila, Ismo; Seppänen-Laakso, Tuulikki; Oresic, Matej; Tyynismaa, Henna; Suomalainen, Anu

    2010-05-15

    Mitochondrial dysfunction is a major cause of neurodegenerative and neuromuscular diseases of adult age and of multisystem disorders of childhood. However, no effective treatment exists for these progressive disorders. Cell culture studies have suggested that a ketogenic diet (KD), with low glucose and high fat content, could select against cells or mitochondria with mutant mitochondrial DNA (mtDNA), but proper patient trials are still lacking. We studied here the transgenic Deletor mouse, a disease model for progressive late-onset mitochondrial myopathy, which accumulates mtDNA deletions during aging and manifests a subtle progressive respiratory chain (RC) deficiency. We found that these mice have widespread lipidomic and metabolite changes, including abnormal plasma phospholipid and free amino acid levels and ketone body production. We treated these mice with a pre-symptomatic long-term and a post-symptomatic shorter-term KD. The effects of the diet on disease progression were followed by morphological, metabolomic and lipidomic tools. We show here that the diet decreased the amount of cytochrome c oxidase negative muscle fibers, a key feature in mitochondrial RC deficiencies, and completely prevented the formation of mitochondrial ultrastructural abnormalities in the muscle. Furthermore, most of the metabolic and lipidomic changes were restored by the diet to wild-type levels. The diet did not, however, significantly affect mtDNA quality or quantity, but rather induced mitochondrial biogenesis and restored liver lipid levels. Our results show that mitochondrial myopathy induces widespread metabolic changes and that KD can slow down the progression of the disease in mice. These results suggest that KD may be useful for mitochondrial late-onset myopathies.

  27. Critical slowing down and hyperuniformity on approach to jamming

    NASA Astrophysics Data System (ADS)

    Atkinson, Steven; Zhang, Ge; Hopkins, Adam B.; Torquato, Salvatore

    2016-07-01

    Hyperuniformity characterizes a state of matter that is poised at a critical point at which density or volume-fraction fluctuations are anomalously suppressed at infinite wavelengths. Recently, much attention has been given to the link between strict jamming (mechanical rigidity) and (effective or exact) hyperuniformity in frictionless hard-particle packings. However, in doing so, one must necessarily study very large packings in order to access the long-ranged behavior and to ensure that the packings are truly jammed. We modify the rigorous linear programming method of Donev et al. [J. Comput. Phys. 197, 139 (2004), 10.1016/j.jcp.2003.11.022] in order to test for jamming in putatively collectively and strictly jammed packings of hard disks in two dimensions. We show that this rigorous jamming test is superior to standard ways to ascertain jamming, including the so-called "pressure-leak" test. We find that various standard packing protocols struggle to reliably create packings that are jammed for even modest system sizes of N ≈ 10^3 bidisperse disks in two dimensions; importantly, these packings have a high reduced pressure that persists over extended amounts of time, meaning that they appear to be jammed by conventional tests, though rigorous jamming tests reveal that they are not. We present evidence that deviations from hyperuniformity in putative maximally random jammed (MRJ) packings can in part be explained by a shortcoming of the numerical protocols to generate exactly jammed configurations, as a result of a type of "critical slowing down" as the packing's collective rearrangements in configuration space become locally confined by high-dimensional "bottlenecks" from which escape is a rare event. Additionally, various protocols are able to produce packings exhibiting hyperuniformity to different extents, but this is because certain protocols are better able to approach exactly jammed configurations.

  28. The Pedagogy of Slowing Down: Teaching Talmud in a Summer Kollel

    ERIC Educational Resources Information Center

    Kanarek, Jane

    2010-01-01

    This article explores a set of practices in the teaching of Talmud called "the pedagogy of slowing down." Through the author's analysis of her own teaching in an intensive Talmud class, "the pedagogy of slowing down" emerges as a pedagogical and cultural model in which the students learn to read more closely and to investigate the multiplicity of…

  29. Anomalous versus Slowed-Down Brownian Diffusion in the Ligand-Binding Equilibrium

    PubMed Central

    Soula, Hédi; Caré, Bertrand; Beslon, Guillaume; Berry, Hugues

    2013-01-01

    Measurements of protein motion in living cells and membranes consistently report transient anomalous diffusion (subdiffusion) that converges back to a Brownian motion with reduced diffusion coefficient at long times after the anomalous diffusion regime. Therefore, slowed-down Brownian motion could be considered the macroscopic limit of transient anomalous diffusion. On the other hand, membranes are also heterogeneous media in which Brownian motion may be locally slowed down due to variations in lipid composition. Here, we investigate whether both situations lead to a similar behavior for the reversible ligand-binding reaction in two dimensions. We compare the (long-time) equilibrium properties obtained with transient anomalous diffusion due to obstacle hindrance or power-law-distributed residence times (continuous-time random walks) to those obtained with space-dependent slowed-down Brownian motion. Using theoretical arguments and Monte Carlo simulations, we show that these three scenarios have distinctive effects on the apparent affinity of the reaction. Whereas continuous-time random walks decrease the apparent affinity of the reaction, locally slowed-down Brownian motion and local hindrance by obstacles both improve it. However, only in the case of slowed-down Brownian motion is the affinity maximal when the slowdown is restricted to a subregion of the available space. Hence, even at long times (equilibrium), these processes are different and exhibit irreconcilable behaviors when the area fraction of reduced mobility changes. PMID:24209851
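
    A minimal Monte Carlo sketch of one of the scenarios compared above: a continuous-time random walk with power-law waiting times, which subdiffuses for tail exponent alpha < 1 and approaches Brownian statistics for large alpha. Parameters are illustrative, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def msd_ctrw(alpha, n_walkers=500, t_max=400.0):
    """Mean squared displacement at t_max for a 1-d continuous-time random
    walk with Pareto-tailed waiting times, P(tau) ~ tau**-(1+alpha):
    alpha < 1 gives subdiffusion; large alpha is effectively Brownian."""
    disp = np.zeros(n_walkers)
    for i in range(n_walkers):
        t, x = 0.0, 0.0
        while True:
            t += 1.0 + rng.pareto(alpha)                 # waiting time >= 1
            if t > t_max:
                break
            x += 1.0 if rng.random() < 0.5 else -1.0     # unit jump
        disp[i] = x
    return np.mean(disp**2)

# Subdiffusive (alpha = 0.6) vs essentially Brownian (alpha = 3.0) MSD.
print(msd_ctrw(0.6), msd_ctrw(3.0))
```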

  30. Slow-down collisions and nonsequential double ionization in classical simulations.

    PubMed

    Panfili, R; Haan, S L; Eberly, J H

    2002-09-09

    We use classical simulations to analyze the dynamics of nonsequential double-electron short-pulse photoionization. We utilize a microcanonical ensemble of 10^5 two-electron "trajectories," a number large enough to provide large subensembles and even sub-subensembles associated with double ionization. We focus on key events in the final doubly ionized subensemble and back-analyze the subensemble's history, revealing a classical slow-down scenario for nonsequential double ionization. We analyze the dynamics of these slow-down collisions and find that a good phase match between the motions of the electrons can lead to very effective energy transfer, followed by escape over a suppressed barrier.

  31. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    49 CFR § 392.11, Federal Motor Carrier Safety Regulations, Driving of Commercial Motor Vehicles: a commercial motor vehicle must, upon approaching a railroad grade crossing, be driven at a rate of speed which will permit...

  32. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    49 CFR § 392.11, Federal Motor Carrier Safety Regulations, Driving of Commercial Motor Vehicles: a commercial motor vehicle must, upon approaching a railroad grade crossing, be driven at a rate of speed which will permit...

  33. ACTIV: Sandwich Detector Activity from In-Pile Slowing-Down Spectra Experiment

    SciTech Connect

    2013-08-01

    ACTIV calculates the activities of a sandwich detector, to be used for in-pile measurements in slowing-down spectra below a few keV. The effect of scattering with energy degradation in the filter and in the detectors has been included to a first approximation.

  14. "Slow Down, You Move Too Fast:" Literature Circles as Reflective Practice

    ERIC Educational Resources Information Center

    Sanacore, Joseph

    2013-01-01

    Becoming an effective literacy learner requires a bit of slowing down and appreciating the reflective nature of reading and writing. Literature circles support this instructional direction because they provide opportunities for immersing students in discussions that encourage their personal responses. When students feel their personal responses…

  35. Low energy slowing down of nanosize copper clusters on gold (1 1 1) surfaces

    NASA Astrophysics Data System (ADS)

    Lei, H.; Hou, Q.; Hou, M.

    2000-04-01

    The slowing down of copper clusters of 440 atoms on a gold (1 1 1) surface is studied in detail by means of molecular dynamics. The classical atomic molecular dynamics is based on the second-moment approximation of the tight-binding model and, in addition, accounts for electron-phonon coupling in the framework of the Sommerfeld theory of metals. The slowing-down energy range is 0-1 eV/atom, which is characteristic of low-energy cluster beam deposition (LECBD). A pronounced epitaxy of the copper clusters is found; however, their morphology is significantly energy dependent. The structure and the radial pair correlation functions are used to study the details of the epitaxial properties as well as the pronounced relaxation of the interfacial cluster atom positions due to the lattice mismatch between copper and gold. The effect of the cluster and substrate average temperature is investigated and can be distinguished from the kinetic effect of the cluster impact.

  36. Resonance treatment using pin-based pointwise energy slowing-down method

    NASA Astrophysics Data System (ADS)

    Choi, Sooyoung; Lee, Changho; Lee, Deokjung

    2017-02-01

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation, and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for 238U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross section. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.
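
    To make the idea concrete, the toy solver below marches a pointwise slowing-down equation down a logarithmic energy grid for a single moderator with constant, made-up cross sections; it reproduces the classic ~1/E slowing-down spectrum but omits the paper's sub-divided fuel rod, shadowing-effect correction, and evaluated nuclear data:

```python
import numpy as np

# Toy pointwise slowing-down solver (infinite homogeneous medium):
#   Sigma_t(E) phi(E) = int_E^{E/alpha} Sigma_s phi(E') / ((1-alpha) E') dE'
#                       + first-collision source from a 1 MeV emitter.
A = 12.0                                   # carbon-like moderator mass
alpha = ((A - 1.0) / (A + 1.0))**2         # max fractional energy retained
E0 = 1.0e6                                 # source energy (eV)
E = np.logspace(np.log10(E0), 1.0, 4000)   # descending grid: 1 MeV -> 10 eV
sig_s, sig_a = 4.0, 0.1                    # toy constant cross sections (1/cm)
sig_t = sig_s + sig_a

phi = np.zeros_like(E)
for i in range(1, len(E)):
    hi = E[i] / alpha                      # highest energy scattering into E[i]
    mask = (E[:i] > E[i]) & (E[:i] < hi)   # already-solved energies above E[i]
    src = 0.0
    if mask.sum() > 1:                     # minus sign: the grid is descending
        src = -np.trapz(sig_s * phi[:i][mask] / ((1 - alpha) * E[:i][mask]),
                        E[:i][mask])
    if hi >= E0:                           # first-collision source band
        src += sig_s / (sig_t * (1 - alpha) * E0)
    phi[i] = src / sig_t

# E*phi(E) is nearly flat (the ~1/E spectrum), slowly depleted by absorption.
print(E[2000] * phi[2000], E[-1] * phi[-1])
```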

  37. Observation of slow down of polystyrene nanogels diffusivities in contact with swollen polystyrene brushes.

    PubMed

    Michailidou, V N; Loppinet, B; Vo, C D; Rühe, J; Tauer, K; Fytas, G

    2008-01-01

    The diffusion of dilute colloids in contact with swollen polymer brushes has been studied by evanescent wave dynamic light scattering. Two polystyrene nanogels with radii of 16 nm and 42 nm were put into contact with three polystyrene brushes of varying grafting densities. Partial penetration of the nanogels into the brushes was revealed by the dependence of the scattering intensities on the evanescent wave penetration depth. The experimental short-time diffusion coefficients of the penetrating particles were measured and found to slow down strongly as the nanoparticles get deeper into the brushes. The slowdown is much more marked for the smaller (16 nm) nanogels, suggesting a size-exclusion type of mechanism and the existence of a characteristic length scale in the outer part of the brush.

  38. Critical slowing down of cluster algorithms for Ising models coupled to 2-d gravity

    NASA Astrophysics Data System (ADS)

    Bowick, Mark; Falcioni, Marco; Harris, Geoffrey; Marinari, Enzo

    1994-02-01

    We simulate single and multiple Ising models coupled to 2-d gravity using both the Swendsen-Wang and Wolff algorithms to update the spins. We study the integrated autocorrelation time and find that there is considerable critical slowing down, particularly in the magnetization. We argue that this is primarily due to the local nature of the dynamical triangulation algorithm and to the generation of a distribution of baby universes which inhibits cluster growth.
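
    Critical slowing down in such simulations is quantified by the integrated autocorrelation time; a standard estimator with self-consistent truncation is sketched below and sanity-checked on an AR(1) series with known tau (illustrative, not the paper's observables):

```python
import numpy as np

def int_autocorr_time(x, window=5.0):
    """Integrated autocorrelation time of a time series (e.g. per-sweep
    magnetization), with the standard self-consistent truncation: stop
    summing once the lag exceeds ~window * tau to control tail noise."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]
    acf /= acf[0]
    tau = 0.5
    for t in range(1, len(acf)):
        tau += acf[t]
        if t >= window * tau:
            break
    return tau

# Sanity check on an AR(1) series with known tau_int = (1+rho)/(2(1-rho)).
rng = np.random.default_rng(1)
rho, n = 0.9, 20_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = rho * x[i - 1] + rng.normal()
print(int_autocorr_time(x), (1 + rho) / (2 * (1 - rho)))   # both ~9.5
```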

  39. Numerical studies of fast ion slowing down rates in cool magnetized plasma using LSP

    NASA Astrophysics Data System (ADS)

    Evans, Eugene S.; Kolmes, Elijah; Cohen, Samuel A.; Rognlien, Tom; Cohen, Bruce; Meier, Eric; Welch, Dale R.

    2016-10-01

    In MFE devices, rapid transport of fusion products from the core into the scrape-off layer (SOL) could perform the dual roles of energy and ash removal. The first-orbit trajectories of most fusion products from small field-reversed configuration (FRC) devices will traverse the SOL, allowing those particles to deposit their energy in the SOL and be exhausted along the open field lines. Thus, the fast ion slowing-down time should affect the energy balance of an FRC reactor and its neutron emissions. However, the dynamics of fast ion energy loss processes under the conditions expected in the FRC SOL (with ρe <λDe) are analytically complex, and not yet fully understood. We use LSP, a 3D electromagnetic PIC code, to examine the effects of SOL density and background B-field on the slowing-down time of fast ions in a cool plasma. As we use explicit algorithms, these simulations must spatially resolve both ρe and λDe, as well as temporally resolve both Ωe and ωpe, increasing computation time. Scaling studies of the fast ion charge (Z) and background plasma density are in good agreement with unmagnetized slowing down theory. Notably, Z-scaling represents a viable way to dramatically reduce the required CPU time for each simulation. This work was supported, in part, by DOE Contract Number DE-AC02-09CH11466.

  40. Slowing down of North Pacific climate variability and its implications for abrupt ecosystem change.

    PubMed

    Boulton, Chris A; Lenton, Timothy M

    2015-09-15

    Marine ecosystems are sensitive to stochastic environmental variability, with higher-amplitude, lower-frequency--i.e., "redder"--variability posing a greater threat of triggering large ecosystem changes. Here we show that fluctuations in the Pacific Decadal Oscillation (PDO) index have slowed down markedly over the observational record (1900-present), as indicated by a robust increase in autocorrelation. This "reddening" of the spectrum of climate variability is also found in regionally averaged North Pacific sea surface temperatures (SSTs), and can be at least partly explained by observed deepening of the ocean mixed layer. The progressive reddening of North Pacific climate variability has important implications for marine ecosystems. Ecosystem variables that respond linearly to climate forcing will have become prone to much larger variations over the observational record, whereas ecosystem variables that respond nonlinearly to climate forcing will have become prone to more frequent "regime shifts." Thus, slowing down of North Pacific climate variability can help explain the large magnitude and potentially the quick succession of well-known abrupt changes in North Pacific ecosystems in 1977 and 1989. When looking ahead, despite model limitations in simulating mixed layer depth (MLD) in the North Pacific, global warming is robustly expected to decrease MLD. This could potentially reverse the observed trend of slowing down of North Pacific climate variability and its effects on marine ecosystems.
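
    The slowing-down indicator used here, rising lag-1 autocorrelation in a sliding window, is easy to reproduce on a surrogate series. The sketch below drifts the memory of an AR(1) process upward to mimic the reported reddening; it uses synthetic data, not the PDO index:

```python
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def sliding_ar1(series, win=240):
    """Sliding-window lag-1 autocorrelation: a rising trend is the
    'reddening'/slowing-down signature used as an early warning."""
    return np.array([lag1_autocorr(series[i:i + win])
                     for i in range(len(series) - win)])

# Surrogate 'PDO-like' index whose AR(1) memory rho drifts upward.
rng = np.random.default_rng(2)
n = 1200                                   # e.g. monthly values, 100 years
rho = np.linspace(0.3, 0.8, n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = rho[i] * x[i - 1] + rng.normal()
trend = sliding_ar1(x)
print(trend[0], trend[-1])                 # the indicator rises over time
```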

  41. Hydrophobic molecules slow down the hydrogen-bond dynamics of water.

    PubMed

    Bakulin, Artem A; Pshenichnikov, Maxim S; Bakker, Huib J; Petersen, Christian

    2011-03-17

    We study the spectral and orientational dynamics of HDO molecules in solutions of tertiary-butyl-alcohol (TBA), trimethyl-amine-oxide (TMAO), and tetramethylurea (TMU) in isotopically diluted water (HDO:D2O and HDO:H2O). The spectral dynamics are studied with femtosecond two-dimensional infrared spectroscopy and the orientational dynamics with femtosecond polarization-resolved vibrational pump-probe spectroscopy. We observe a strong slowing down of the spectral diffusion around the central part of the absorption line that increases with increasing solute concentration. At low concentrations, the fraction of water showing slow spectral dynamics is observed to scale with the number of methyl groups, indicating that this effect is due to slow hydrogen-bond dynamics in the hydration shell of the methyl groups of the solute molecules. The slowing down of the vibrational frequency dynamics is strongly correlated with the slowing down of the orientational mobility of the water molecules. This correlation indicates that these effects have a common origin in the effect of hydrophobic molecular groups on the hydrogen-bond dynamics of water.

  42. Critical slowing down as early warning for the onset of collapse in mutualistic communities.

    PubMed

    Dakos, Vasilis; Bascompte, Jordi

    2014-12-09

    Tipping points are crossed when small changes in external conditions cause abrupt unexpected responses in the current state of a system. In the case of ecological communities under stress, the risk of approaching a tipping point is unknown, but its stakes are high. Here, we test recently developed critical slowing-down indicators as early-warning signals for detecting the proximity to a potential tipping point in structurally complex ecological communities. We use the structure of 79 empirical mutualistic networks to simulate a scenario of gradual environmental change that leads to an abrupt first extinction event followed by a sequence of species losses until the point of complete community collapse. We find that critical slowing-down indicators derived from time series of biomasses measured at the species and community level signal the proximity to the onset of community collapse. In particular, we identify specialist species as likely the best-indicator species for monitoring the proximity of a community to collapse. In addition, trends in slowing-down indicators are strongly correlated to the timing of species extinctions. This correlation offers a promising way for mapping species resilience and ranking species risk to extinction in a given community. Our findings pave the road for combining theory on tipping points with patterns of network structure that might prove useful for the management of a broad class of ecological networks under global environmental change.

  43. A quantitative model of application slow-down in multi-resource shared systems

    SciTech Connect

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, often including energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze the slow-down of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, on virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.
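
    One plausible reading of the quadratic structure described above (an illustration under assumed semantics, not the paper's exact formulation): give each job a loading vector over shared resources and let its dilation grow with the overlap between its own vector and the co-runners' aggregate load:

```python
import numpy as np

# Illustrative quadratic dilation model (assumption, not the paper's exact
# D-factor formula): each job carries a loading vector over shared resources
# (CPU, disk, ...), and its slow-down grows with the overlap between its own
# vector and the aggregate load of its co-runners.
def dilation_factors(loads):
    loads = np.asarray(loads, dtype=float)   # shape: (n_jobs, n_resources)
    total = loads.sum(axis=0)
    # d_i = 1 + l_i . (total - l_i): quadratic in the loading vectors
    return 1.0 + np.einsum('ij,ij->i', loads, total - loads)

jobs = [[0.7, 0.1],    # CPU-heavy job
        [0.6, 0.2],    # another CPU-heavy job -> contends with the first
        [0.1, 0.8]]    # disk-heavy job -> mostly out of their way
print(dilation_factors(jobs))   # the two CPU-heavy jobs dilate the most
```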

  44. A quantitative model of application slow-down in multi-resource shared systems

    DOE PAGES

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, often including energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze the slow-down of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, on virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.

  45. Lattice Cell Calculations, Slowing Down Theory and Computer Code WIMS; VVER Type Reactors

    NASA Astrophysics Data System (ADS)

    Moen, J.; Brekke, A.; Hall, C.

    1991-01-01

    The following sections are included: * INTRODUCTION * WIMS AS A TOOL FOR REACTOR CORE CALCULATIONS * GENERAL STRUCTURE OF THE WIMS CODE * WIMS APPROACH TO THE SLOWING DOWN CALCULATIONS * MULTIGROUP MICROSCOPIC CROSS SECTIONS, RESONANCE TREATMENT * DETERMINATION OF MULTIGROUP SPECTRA * PHYSICAL MODELS IN MAIN TRANSPORT CALCULATIONS * BURNUP CALCULATIONS * APPLICATION OF WIMSD-4 TO VVER TYPE LATTICES * FINAL REMARKS * REFERENCES * APPENDIX A: DANCOFF FACTOR - STANDARD APPROACH * APPENDIX B: FORMULAS FOR DANCOFF AND BELL FACTORS CALCULATIONS APPLIED IN PREWIM * APPENDIX C: CALCULATION OF ONE GROUP PROBABILITIES Pij IN AN ANNULAR SYSTEM * APPENDIX D: SCHAEFER'S METHOD

  6. Measurements with the high flux lead slowing-down spectrometer at LANL

    NASA Astrophysics Data System (ADS)

    Danon, Y.; Romano, C.; Thompson, J.; Watson, T.; Haight, R. C.; Wender, S. A.; Vieira, D. J.; Bond, E.; Wilhelmy, J. B.; O'Donnell, J. M.; Michaudon, A.; Bredeweg, T. A.; Schurman, T.; Rochman, D.; Granier, T.; Ethvignot, T.; Taieb, J.; Becker, J. A.

    2007-08-01

    A Lead Slowing-Down Spectrometer (LSDS) was recently installed at LANL [D. Rochman, R.C. Haight, J.M. O'Donnell, A. Michaudon, S.A. Wender, D.J. Vieira, E.M. Bond, T.A. Bredeweg, A. Kronenberg, J.B. Wilhelmy, T. Ethvignot, T. Granier, M. Petit, Y. Danon, Characteristics of a lead slowing-down spectrometer coupled to the LANSCE accelerator, Nucl. Instr. and Meth. A 550 (2005) 397]. The LSDS comprises a cube of pure lead, 1.2 m on a side, with a pulsed spallation neutron source in its center. The LSDS is driven by 800 MeV protons with a time-averaged current of up to 1 μA, pulse widths of 0.05-0.25 μs and a repetition rate of 20-40 Hz. Spallation neutrons are created by directing the proton beam into an air-cooled tungsten target in the center of the lead cube. The neutrons slow down through scattering interactions with the lead, enabling measurements of neutron-induced reaction rates as a function of the slowing-down time, which correlates with neutron energy. The advantage of an LSDS as a neutron spectrometer is that the neutron flux is 3-4 orders of magnitude higher than that of a standard time-of-flight experiment at the equivalent flight path of 5.6 m. The effective energy range is 0.1 eV to 100 keV with a typical energy resolution of 30% from 1 eV to 10 keV. The average neutron flux between 1 and 10 keV is about 1.7 × 10^9 n/cm²/s/μA. This high flux makes the LSDS an important tool for neutron-induced cross section measurements of ultra-small samples (nanograms) or of samples with very low cross sections. The LSDS at LANL was initially built in order to measure the fission cross section of the short-lived metastable isotope of U-235, however it can also be used to measure (n, α) and (n, p) reactions. Fission cross section measurements were made with samples of 235U, 236U, 238U and 239Pu. The smallest sample measured was 10 ng of 239Pu. Measurement of the (n, α) cross section with 760 ng of Li-6 was also demonstrated. Possible future cross section measurements
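    For orientation, the statement that slowing-down time correlates with neutron energy refers to the well-known lead slowing-down relation E ≈ K/(t + t0)². A minimal sketch follows; the constants are assumed order-of-magnitude values for lead, not the LANL instrument's calibration.

```python
# Lead slowing-down time-to-energy relation, E(t) ~ K / (t + t0)^2.
# K and t0 are fitted per instrument; the values below are only
# order-of-magnitude illustrations for lead, not the LANL calibration.
K_keV_us2 = 165.0      # keV * microsecond^2 (assumed)
t0_us = 0.3            # microseconds (assumed)

def mean_energy_keV(t_us):
    return K_keV_us2 / (t_us + t0_us) ** 2

for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} us  ->  E ~ {mean_energy_keV(t) * 1e3:9.1f} eV")
```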

  7. Critical slowing down exponents in structural glasses: Random orthogonal and related models

    NASA Astrophysics Data System (ADS)

    Caltagirone, F.; Ferrari, U.; Leuzzi, L.; Parisi, G.; Rizzo, T.

    2012-08-01

    An important prediction of mode-coupling theory is the relationship between the power-law decay exponents in the β regime and the consequent definition of the so-called exponent parameter λ. In the context of a certain class of mean-field glass models with quenched disorder, the physical meaning of λ has recently been understood, yielding a method to compute it exactly in a static framework. In this paper we exploit this new technique to compute the critical slowing down exponents for such models including, as special cases, the Sherrington-Kirkpatrick model, the p-spin model, and the random orthogonal model.
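    The relation that defines λ can be inverted numerically. The sketch below solves the standard MCT equations Γ(1−a)²/Γ(1−2a) = λ and Γ(1+b)²/Γ(1+2b) = λ for the β-regime exponents a and b, assuming SciPy is available; the example λ value is illustrative, not one of the paper's results.

```python
from scipy.optimize import brentq
from scipy.special import gamma

def exponents_from_lambda(lam):
    """Invert the standard MCT relations
        Gamma(1-a)^2 / Gamma(1-2a) = lambda = Gamma(1+b)^2 / Gamma(1+2b)
    for the critical decay exponent a (0 < a < 1/2) and the von Schweidler
    exponent b (0 < b <= 1; a real b exists only for lambda > 1/2)."""
    fa = lambda a: gamma(1.0 - a) ** 2 / gamma(1.0 - 2.0 * a) - lam
    fb = lambda b: gamma(1.0 + b) ** 2 / gamma(1.0 + 2.0 * b) - lam
    a = brentq(fa, 1e-6, 0.5 - 1e-6)
    b = brentq(fb, 1e-6, 1.0)
    return a, b

# Illustrative value only; each model in the paper has its own lambda.
a, b = exponents_from_lambda(0.735)
print(f"lambda = 0.735  ->  a = {a:.3f}, b = {b:.3f}")
```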

  8. Small but slow world: how network topology and burstiness slow down spreading.

    PubMed

    Karsai, M; Kivelä, M; Pan, R K; Kaski, K; Kertész, J; Barabási, A-L; Saramäki, J

    2011-02-01

    While communication networks show the small-world property of short paths, the spreading dynamics in them turn out to be slow. Here, the time evolution of information propagation is followed through communication networks by using empirical data on contact sequences and the susceptible-infected model. Introducing null models where event sequences are appropriately shuffled, we are able to distinguish between the contributions of different impeding effects. The slowing down of spreading is found to be caused mainly by weight-topology correlations and the bursty activity patterns of individuals.
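    The shuffling methodology can be illustrated with a minimal susceptible-infected run over a contact-event list together with a time-shuffled null model; the event list below is a made-up toy, not the empirical data used in the paper.

```python
import random

# Minimal SI spreading on a temporal contact sequence, plus a time-shuffled
# null model of the kind used to isolate the effect of burstiness.
# The event list (time, node_u, node_v) is a made-up toy example.
events = [(1, 0, 1), (2, 1, 2), (5, 0, 3), (9, 2, 4), (10, 3, 4), (14, 1, 4)]

def si_spread(events, seed_node=0):
    """Deterministic SI: a contact transmits if exactly one endpoint is
    infected.  Returns the infection time of each reached node."""
    infected = {seed_node: 0}
    for t, u, v in sorted(events):
        if (u in infected) != (v in infected):
            infected.setdefault(u, t)
            infected.setdefault(v, t)
    return infected

def shuffle_times(events, rng=random):
    """Null model: keep the contact topology, randomly reassign event times
    (destroys burstiness and event-event correlations)."""
    times = [t for t, _, _ in events]
    rng.shuffle(times)
    return [(t, u, v) for t, (_, u, v) in zip(times, events)]

print("empirical :", si_spread(events))
print("null model:", si_spread(shuffle_times(events)))
```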

  9. Geant4-DNA simulation of electron slowing-down spectra in liquid water

    NASA Astrophysics Data System (ADS)

    Incerti, S.; Kyriakou, I.; Tran, H. N.

    2017-04-01

    This work presents the simulation of monoenergetic electron slowing-down spectra in liquid water by the Geant4-DNA extension of the Geant4 Monte Carlo toolkit (release 10.2p01). These spectra are simulated for several incident energies using the most recent Geant4-DNA physics models, and they are compared to literature data. The influence of Auger electron production is discussed. For the first time, a dedicated Geant4-DNA example allowing such simulations is described and is provided to Geant4 users, allowing further verification of Geant4-DNA track structure simulation capabilities.

  10. Development for fissile assay in recycled fuel using lead slowing down spectrometer

    SciTech Connect

    Lee, Yong Deok; Je Park, C.; Kim, Ho-Dong; Song, Kee Chan

    2013-07-01

    A future nuclear energy system is under development to turn spent fuel produced by PWRs into fuel for a SFR (Sodium Fast Reactor) through the pyrochemical process. Knowledge of the isotopic fissile content of the new fuel is very important for fuel safety. A lead slowing-down spectrometer (LSDS) is under development to analyze the fissile material content (239Pu, 241Pu and 235U) of the fuel. The LSDS requires a neutron source; the neutrons are slowed down through their passage in a lead medium and finally enter the fuel, where they induce fission reactions whose analysis determines the isotopic content of the fuel. The difficulty is that the spent fuel emits intense gamma rays and neutrons by spontaneous fission. A threshold fission detector registers only the prompt fast fission neutrons, so the LSDS is not influenced by the high-level radiation background. The energy resolution of the LSDS is good in the range 0.1 eV to 1 keV, which is also the range in which the fission reaction best discriminates between the considered fissile isotopes. An electron accelerator has been chosen to produce neutrons with an adequate target through (e−,γ)(γ,n) reactions.

  11. Temporal variation in antibiotic environments slows down resistance evolution in pathogenic Pseudomonas aeruginosa

    PubMed Central

    Roemhild, Roderich; Barbosa, Camilo; Beardmore, Robert E; Jansen, Gunther; Schulenburg, Hinrich

    2015-01-01

    Antibiotic resistance is a growing concern to public health. New treatment strategies may alleviate the situation by slowing down the evolution of resistance. Here, we evaluated sequential treatment protocols using two fully independent laboratory-controlled evolution experiments with the human pathogen Pseudomonas aeruginosa PA14 and two pairs of clinically relevant antibiotics (doripenem/ciprofloxacin and cefsulodin/gentamicin). Our results consistently show that the sequential application of two antibiotics decelerates resistance evolution relative to monotherapy. Sequential treatment enhanced population extinction although we applied antibiotics at sublethal dosage. In both experiments, we identified an order effect of the antibiotics used in the sequential protocol, leading to significant variation in the long-term efficacy of the tested protocols. These variations appear to be caused by asymmetric evolutionary constraints, whereby adaptation to one drug slowed down adaptation to the other drug, but not vice versa. An understanding of such asymmetric constraints may help future development of evolutionary robust treatments against infectious disease. PMID:26640520

  12. Slowing down of ring polymer diffusion caused by inter-ring threading.

    PubMed

    Lee, Eunsang; Kim, Soree; Jung, YounJoon

    2015-06-01

    Diffusion of long ring polymers in a melt is much slower than the reorganization of their internal structures. While, unlike in linear polymer melts, direct evidence for entanglements has not been observed in long ring polymers, threading between the rings is suspected to be the main reason for the slowing down of ring polymer diffusion. It is, however, difficult to define the threading configuration between two rings because the rings have no chain end. In this work, evidence for threading dynamics of ring polymers is presented by using molecular dynamics simulation and applying a novel analysis method. The simulation results are analyzed in terms of the statistics of persistence and exchange times, which have proved useful in studying heterogeneous dynamics of glassy systems. It is found that the threading time of ring polymer melts increases more rapidly with the degree of polymerization than that of linear polymer melts. This indicates that threaded ring polymers cannot diffuse until an unthreading event occurs, which results in the slowing down of ring polymer diffusion.
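    The persistence/exchange-time statistics mentioned here are generic and easy to compute from any sequence of event times (for instance, the unthreading events of a single ring). A hedged sketch with synthetic event times:

```python
import numpy as np

# Generic persistence/exchange-time statistics from a sequence of event
# times (here, imagined unthreading events of one ring; values are made up).
rng = np.random.default_rng(0)
event_times = np.cumsum(rng.exponential(scale=5.0, size=1000))

# Exchange times: waiting times between consecutive events.
exchange = np.diff(event_times)

# Persistence times: from random time origins, wait until the next event.
origins = rng.uniform(0, event_times[-1], size=1000)
idx = np.searchsorted(event_times, origins)
persistence = event_times[idx] - origins

print(f"<t_exchange>    = {exchange.mean():.2f}")
print(f"<t_persistence> = {persistence.mean():.2f}")
# For strongly heterogeneous (bursty) dynamics <t_persistence> >> <t_exchange>;
# for a memoryless Poisson process, as here, the two means coincide.
```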

  13. Critical slowing down associated with regime shifts in the US housing market

    NASA Astrophysics Data System (ADS)

    Tan, James Peng Lung; Cheong, Siew Ann

    2014-02-01

    Complex systems are described by a large number of variables with strong and nonlinear interactions. Such systems frequently undergo regime shifts. Combining insights from bifurcation theory in nonlinear dynamics and the theory of critical transitions in statistical physics, we know that critical slowing down and critical fluctuations occur close to such regime shifts. In this paper, we show how universal precursors expected from such critical transitions can be used to forecast regime shifts in the US housing market. In the housing permit, volume of homes sold and percentage of homes sold for gain data, we detected strong early warning signals associated with a sequence of coupled regime shifts, starting from a Subprime Mortgage Loans transition in 2003-2004 and ending with the Subprime Crisis in 2007-2008. Weaker signals of critical slowing down were also detected in the US housing market data during the 1997-1998 Asian Financial Crisis and the 2000-2001 Technology Bubble Crisis. Backed by various macroeconomic data, we propose a scenario whereby hot money flowing back into the US during the Asian Financial Crisis fueled the Technology Bubble. When the Technology Bubble collapsed in 2000-2001, the hot money then flowed into the US housing market, triggering the Subprime Mortgage Loans transition in 2003-2004 and an ensuing sequence of transitions. We showed how this sequence of coupled transitions unfolded in space and in time over the whole of the US.
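    The generic computation behind such early-warning analyses, lag-1 autocorrelation in sliding windows, can be sketched as follows; the synthetic AR(1) series with slowly increasing memory stands in for any of the housing indicators.

```python
import numpy as np

def lag1_autocorr_windows(x, window):
    """Lag-1 autocorrelation in sliding windows; rising values are the
    classic signature of critical slowing down before a regime shift."""
    out = []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        w = w - w.mean()
        out.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(out)

# Synthetic stand-in series: an AR(1) process whose memory slowly increases,
# mimicking the approach to a tipping point.
rng = np.random.default_rng(1)
n = 600
phi = np.linspace(0.2, 0.95, n)   # slowly increasing lag-1 coefficient
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ac = lag1_autocorr_windows(x, window=100)
print(f"AC1 early: {ac[:50].mean():.2f}  ->  AC1 late: {ac[-50:].mean():.2f}")
```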

  14. UDCA slows down intestinal cell proliferation by inducing high and sustained ERK phosphorylation.

    PubMed

    Krishna-Subramanian, S; Hanski, M L; Loddenkemper, C; Choudhary, B; Pagès, G; Zeitz, M; Hanski, C

    2012-06-15

    Ursodeoxycholic acid (UDCA) attenuates colon carcinogenesis in humans and in animal models by an unknown mechanism. We investigated UDCA effects on normal intestinal epithelium in vivo and in vitro to identify the potential chemopreventive mechanism. Feeding of mice with 0.4% UDCA reduced cell proliferation to 50% and suppressed several potential proproliferatory genes including insulin receptor substrate 1 (Irs-1). A similar transcriptional response was observed in the rat intestinal cell line IEC-6, which was then used as an in vitro model. UDCA slowed down the proliferation of IEC-6 cells and induced sustained hyperphosphorylation of ERK1/ERK2 kinases, which completely inhibited the proproliferatory effects of EGF and IGF-1. The hyperphosphorylation of ERK1 led to a transcriptional suppression of the Irs-1 gene. Both the hyperphosphorylation of ERK and the suppression of Irs-1 were sufficient to inhibit proliferation of IEC-6 cells. ERK1/ERK2 inhibition in vitro or ERK1 elimination in vitro or in vivo abrogated the antiproliferatory effects of UDCA. We show that UDCA inhibits proliferation of nontransformed intestinal epithelial cells by inducing a sustained hyperphosphorylation of ERK1 kinase, which slows down the cell cycle and reduces expression of the Irs-1 protein. These data extend our understanding of the physiological and potentially chemopreventive effects of UDCA and identify new targets for chemoprevention.

  15. Early warning of climate tipping points from critical slowing down: comparing methods to improve robustness.

    PubMed

    Lenton, T M; Livina, V N; Dakos, V; van Nes, E H; Scheffer, M

    2012-03-13

    We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings.
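    For readers who want to try the second class of indicator, a compact detrended fluctuation analysis (DFA) sketch is given below; the white-noise input is synthetic and should yield a scaling exponent near 0.5.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: integrate the series, split it into
    windows at each scale, remove a linear trend per window, and fit the
    log-log slope of RMS fluctuation versus window size."""
    y = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        f2 = []
        for k in range(len(y) // s):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(2)
white = rng.normal(size=4000)               # uncorrelated noise
scales = np.array([16, 32, 64, 128, 256])
print(f"DFA exponent (white noise): {dfa_exponent(white, scales):.2f}")
# Values drifting upward in sliding windows signal growing memory,
# i.e. critical slowing down.
```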

  16. Modeling resonance interference by 0-D slowing-down solution with embedded self-shielding method

    SciTech Connect

    Liu, Y.; Martin, W.; Kim, K. S.; Williams, M.

    2013-07-01

    The resonance integral table based methods employing a conventional multigroup structure for the resonance self-shielding calculation share a common difficulty in treating resonance interference. The problem arises from the lack of sufficient energy dependence of the resonance cross sections when the calculation is performed in the multigroup structure. To address this, a resonance interference factor model has been proposed to account for the interference effect by comparing the interfered and non-interfered effective cross sections obtained from 0-D homogeneous slowing-down solutions with continuous-energy cross sections. A rigorous homogeneous slowing-down solver is developed with two important features that reduce the calculation time and memory requirement for practical applications. The embedded self-shielding method (ESSM) is chosen as the multigroup resonance self-shielding solver, an integral component of the interference method. The interference method is implemented in the DeCART transport code. Verification results show that the code system provides more accurate effective cross sections and multiplication factors than the conventional interference method for UO2 and MOX fuel cases. The additional computing time and memory for the interference correction are acceptable for the test problems, including a depletion case with 87 isotopes in the fuel region.
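    The heart of any 0-D slowing-down treatment is flux weighting of continuous-energy cross sections. A minimal narrow-resonance-style sketch, with a toy Lorentzian resonance and invented parameters (not the paper's solver), shows how the background cross section changes the effective value:

```python
import numpy as np

# Flavor of a 0-D slowing-down / self-shielding calculation: the effective
# group cross section is a flux-weighted average with the narrow-resonance
# flux shape phi(E) ~ 1 / (E * (sigma(E) + sigma_b)).  The Lorentzian "toy
# resonance" and all numbers are invented for illustration.
E = np.linspace(1.0, 40.0, 200_000)                    # eV

def lorentzian(E, E0, width, peak):
    return peak * (width / 2) ** 2 / ((E - E0) ** 2 + (width / 2) ** 2)

sigma = lorentzian(E, E0=20.0, width=1.0, peak=1.0e4)  # barns (toy)

def effective_xs(sigma_b):
    """Flux-weighted one-group cross section on a uniform energy grid."""
    phi = 1.0 / (E * (sigma + sigma_b))                # NR-style flux dip
    return np.sum(sigma * phi) / np.sum(phi)

print(f"infinite dilution (sigma_b = 1e6 b): {effective_xs(1.0e6):8.2f} b")
print(f"self-shielded     (sigma_b =  50 b): {effective_xs(50.0):8.2f} b")
# A second, overlapping resonance from another isotope would distort phi(E)
# and shift these values -- the interference effect the paper corrects for.
```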

  17. Synchronous slowing down in coupled logistic maps via random network topology

    NASA Astrophysics Data System (ADS)

    Wang, Sheng-Jun; Du, Ru-Hai; Jin, Tao; Wu, Xing-Sen; Qu, Shi-Xian

    2016-03-01

    The speed and paths of synchronization play a key role in the function of a system, but have not received enough attention up to now. In this work, we study the synchronization process of coupled logistic maps, which reveals common features of low-dimensional dissipative systems. A slowing down of the synchronization process is observed, which is a novel phenomenon. The results show that there are two typical kinds of transient process before the system reaches complete synchronization, as demonstrated by both coupled multiple-period maps and coupled multiple-band chaotic maps. When the coupling is weak, the evolution of the system is governed mainly by the local dynamics, i.e., the node states are attracted by the stable orbits or chaotic attractors of the single map and evolve toward the synchronized orbit in a less coherent way. When the coupling is strong, the node states evolve in a highly coherent way toward the stable orbit on the synchronized manifold, where the collective dynamics dominates the evolution. At intermediate coupling strengths, the interplay between the two paths is responsible for the slowing down. The existence of different synchronization paths is also proven by the finite-time Lyapunov exponent and its distribution.
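    A minimal version of such a system, coupled logistic maps on an Erdős-Rényi network with the usual diffusive coupling, can be simulated in a few lines; the parameters below are illustrative, not those of the paper.

```python
import numpy as np

# Coupled logistic maps on an Erdos-Renyi random network,
#   x_i(t+1) = (1 - eps) * f(x_i) + (eps / k_i) * sum_j A_ij f(x_j),
# with f(x) = r x (1 - x).  All parameters are illustrative.
rng = np.random.default_rng(3)
N, p, r, tol = 100, 0.1, 3.8, 1e-8

A = (rng.random((N, N)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                               # symmetric adjacency, no self-loops
k = np.maximum(A.sum(axis=1), 1.0)        # degrees (guard against isolates)

def sync_time(eps, t_max=50000):
    """Iterate until the node states collapse onto one orbit (or give up)."""
    x = rng.random(N)
    for t in range(t_max):
        if x.max() - x.min() < tol:
            return t
        fx = r * x * (1 - x)
        x = (1 - eps) * fx + eps * (A @ fx) / k
    return None                           # not synchronized within t_max

for eps in (0.3, 0.6, 0.9):
    print(f"eps = {eps}: synchronization time = {sync_time(eps)}")
```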

  18. A close look at axonal transport: Cargos slow down when crossing stationary organelles.

    PubMed

    Che, Daphne L; Chowdary, Praveen D; Cui, Bianxiao

    2016-01-01

    The bidirectional transport of cargos along the thin axon is fundamental for the structure, function and survival of neurons. Defective axonal transport has been linked to the mechanism of neurodegenerative diseases. In this paper, we study the effect of the local axonal environment on cargo transport behavior in neurons. Using dual-color fluorescence imaging in microfluidic neuronal devices, we quantify the transport dynamics of cargos when crossing stationary organelles such as non-moving endosomes and stationary mitochondria in the axon. We show that the axonal cargos tend to slow down, or pause transiently, within the vicinity of stationary organelles. The slow-down effect is observed in both retrograde and anterograde transport directions for three different cargos (TrkA, lysosomes and TrkB). Our results agree with the hypothesis that bulky axonal structures can pose a steric hindrance to axonal transport. However, the results do not rule out the possibility that the cellular mechanisms causing stationary organelles are also responsible for the delay of moving cargos at the same locations.

  19. Gel mesh as ``brake'' to slow down DNA translocation through solid-state nanopores

    NASA Astrophysics Data System (ADS)

    Tang, Zhipeng; Liang, Zexi; Lu, Bo; Li, Ji; Hu, Rui; Zhao, Qing; Yu, Dapeng

    2015-07-01

    Agarose gel is introduced onto the cis side of silicon nitride nanopores by a simple and low-cost method to slow down the speed of DNA translocation. DNA translocation speed is slowed by roughly an order of magnitude without losing signal to noise ratio for different DNA lengths and applied voltages in gel-meshed nanopores. The existence of the gel moves the center-of-mass position of the DNA conformation further from the nanopore center, contributing to the observed slowing of translocation speed. A reduced velocity fluctuation is also noted, which is beneficial for further applications of gel-meshed nanopores. The reptation model is considered in simulation and agrees well with the experimental results.

  20. Microdosimetry of the full slowing down of protons using Monte Carlo track structure simulations.

    PubMed

    Liamsuwan, T; Uehara, S; Nikjoo, H

    2015-09-01

    The article investigates two approaches to microdosimetric calculations based on Monte Carlo track structure (MCTS) simulations of a 160-MeV proton beam. In the first approach, microdosimetric parameters of the proton beam were obtained using the weighted sum of proton energy distributions and microdosimetric parameters of proton track segments (TSMs). In the second approach, phase spaces of energy depositions obtained using MCTS simulations in the full slowing down (FSD) mode were used for the microdosimetric calculations. Targets of interest were water cylinders of 2.3-100 nm in diameter and height. Frequency-averaged lineal energies (ȳF) obtained using both approaches agreed within the statistical uncertainties. Discrepancies beyond this level were observed for dose-averaged lineal energies (ȳD) towards the Bragg peak region, due to the small number of proton energies used in the TSM approach and different energy deposition patterns in the TSMs and the FSD of protons.

  1. Equilibrium and stability in a heliotron with anisotropic hot particle slowing-down distribution

    SciTech Connect

    Cooper, W. A.; Asahi, Y.; Narushima, Y.; Suzuki, Y.; Watanabe, K. Y.; Graves, J. P.; Isaev, M. Yu.

    2012-10-15

    The equilibrium and linear fluid Magnetohydrodynamic (MHD) stability in an inward-shifted large helical device heliotron configuration are investigated with the 3D ANIMEC and TERPSICHORE codes, respectively. A modified slowing-down distribution function is invoked to study anisotropic pressure conditions. An appropriate choice of coefficients and exponents allows the simulation of neutral beam injection in which the angle of injection is varied from parallel to perpendicular. The fluid stability analysis concentrates on the application of the Johnson-Kulsrud-Weimer energy principle. The growth rates are maximum at ⟨β⟩ ≈ 2%, decrease significantly at ⟨β⟩ ≈ 4.5%, do not vary significantly with variations of the injection angle and are similar to those predicted with a bi-Maxwellian hot particle distribution function model. Stability is predicted at ⟨β⟩ ≈ 2.5% with a sufficiently peaked energetic particle pressure profile. Electrostatic potential forms from the MHD instability necessary for guiding centre orbit following are calculated.
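    For context, the unmodified isotropic slowing-down distribution that such anisotropic models generalize is f(v) ∝ 1/(v³ + v_c³) for speeds up to the injection speed v₀. A dimensionless sketch with illustrative values:

```python
import numpy as np

# Classic isotropic slowing-down distribution for fast ions,
#   f(v) ~ 1 / (v^3 + v_c^3)  for  v <= v0  (zero above the birth speed v0),
# the form that anisotropic models like the one above generalize.
# Dimensionless, illustrative values.
v0, v_c = 1.0, 0.4
v = np.linspace(1e-4, v0, 100_000)
dv = v[1] - v[0]

f = 1.0 / (v ** 3 + v_c ** 3)
f /= 4.0 * np.pi * np.sum(f * v ** 2) * dv     # normalize: int f d^3v = 1

mean_speed = 4.0 * np.pi * np.sum(f * v ** 3) * dv
print(f"mean speed of the slowing-down distribution: {mean_speed:.3f} v0")
```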

  2. Critical slowing down as early warning for the onset and termination of depression

    PubMed Central

    van de Leemput, Ingrid A.; Wichers, Marieke; Cramer, Angélique O. J.; Borsboom, Denny; Tuerlinckx, Francis; Kuppens, Peter; van Nes, Egbert H.; Viechtbauer, Wolfgang; Giltay, Erik J.; Aggen, Steven H.; Derom, Catherine; Jacobs, Nele; Kendler, Kenneth S.; van der Maas, Han L. J.; Neale, Michael C.; Peeters, Frenk; Thiery, Evert; Zachar, Peter; Scheffer, Marten

    2014-01-01

    About 17% of humanity goes through an episode of major depression at some point in their lifetime. Despite the enormous societal costs of this incapacitating disorder, it is largely unknown how the likelihood of falling into a depressive episode can be assessed. Here, we show for a large group of healthy individuals and patients that the probability of an upcoming shift between a depressed and a normal state is related to elevated temporal autocorrelation, variance, and correlation between emotions in fluctuations of autorecorded emotions. These are indicators of the general phenomenon of critical slowing down, which is expected to occur when a system approaches a tipping point. Our results support the hypothesis that mood may have alternative stable states separated by tipping points, and suggest an approach for assessing the likelihood of transitions into and out of depression. PMID:24324144

  3. High temperature slows down growth in tobacco hornworms (Manduca sexta larvae) under food restriction.

    PubMed

    Hayes, Matthew B; Jiao, Lihong; Tsao, Tsu-hsuan; King, Ian; Jennings, Michael; Hou, Chen

    2015-03-01

    When fed ad libitum (AL), ectothermic animals usually grow faster and have higher metabolic rate at higher ambient temperature. However, if food supply is limited, there is an energy tradeoff between growth and metabolism. Here we hypothesize that for ectothermic animals under food restriction (FR), high temperature will lead to a high metabolic rate, but growth will slow down to compensate for the high metabolism. We measure the rates of growth and metabolism of 4 cohorts of 5th instar hornworms (Manduca sexta larvae) reared at 2 levels of food supply (AL and FR) and 2 temperatures (20 and 30 °C). Our results show that, compared to the cohorts reared at 20 °C, the ones reared at 30 °C have high metabolic rates under both AL and FR conditions, but a high growth rate under AL and a low growth rate under FR, supporting this hypothesis.

  4. Disentangling density and temperature effects in the viscous slowing down of glassforming liquids

    NASA Astrophysics Data System (ADS)

    Tarjus, G.; Kivelson, D.; Mossa, S.; Alba-Simionesco, C.

    2004-04-01

    We present a consistent picture of the respective role of density (ρ) and temperature (T) in the viscous slowing down of glassforming liquids and polymers. Specifically, based in part upon a new analysis of simulation and experimental data on liquid ortho-terphenyl, we conclude that a zeroth-order description of the approach to the glass transition (in the range of experimentally accessible pressures) should be formulated in terms of a temperature-driven super-Arrhenius activated behavior rather than a density-driven congestion or jamming phenomenon. The density plays a role at a quantitative level, but its effect on the viscosity and the α-relaxation time can be simply described via a single parameter, an effective interaction energy that is characteristic of the high-T liquid regime; as a result, ρ does not affect the "fragility" of the glassforming system.
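    The contrast drawn here between simple activated and super-Arrhenius behavior can be made concrete with a short numerical comparison; the Vogel-Fulcher-Tammann form below is one standard super-Arrhenius parameterization, not necessarily the authors' fit, and all parameter values are illustrative.

```python
import numpy as np

# Arrhenius vs. super-Arrhenius growth of the relaxation time.  The
# Vogel-Fulcher-Tammann (VFT) form is one common super-Arrhenius
# parameterization; all values below are illustrative (reduced units).
tau0, E, D, T0 = 1e-13, 5.0, 8.0, 0.5

T = np.linspace(0.6, 2.0, 8)
tau_arrhenius = tau0 * np.exp(E / T)
tau_vft = tau0 * np.exp(D * T0 / (T - T0))   # diverges as T -> T0

for Ti, ta, tv in zip(T, tau_arrhenius, tau_vft):
    print(f"T = {Ti:4.2f}   tau_Arr = {ta:9.2e}   tau_VFT = {tv:9.2e}")
```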

  5. The slowing down times of positrons emitted from selected β+ isotopes into metals

    NASA Astrophysics Data System (ADS)

    Dryzek, Jerzy; Horodek, Paweł; Siemek, Krzysztof

    2012-11-01

    We report the GEANT4 Monte Carlo simulations and the approximated calculations of the slowing down time (SDT) for positrons emitted from three β+ isotopes, i.e., 22Na, 68Ge/68Ga and 48V. The first two isotopes are commonly used in positron annihilation spectroscopy. The results reveal that the SDT exhibits a nonsymmetrical distribution and that its average value depends on the end-point energy of the isotope and the density and atomic number of the implanted material. For metals the average SDT varies from 0.4 ps to a few ps. We argue that this can affect the analysis of the measured positron lifetime and should be considered in theoretical calculations. The SDT in selected gases was simulated as well; in this case its average values are about four orders of magnitude higher than in metals.

  6. Slow down of a globally neutral relativistic e-e+ beam shearing the vacuum

    NASA Astrophysics Data System (ADS)

    Alves, E. P.; Grismayer, T.; Silveirinha, M. G.; Fonseca, R. A.; Silva, L. O.

    2016-01-01

    The microphysics of relativistic collisionless shear flows is investigated in a configuration consisting of a globally neutral, relativistic e−e+ beam streaming through a hollow plasma/dielectric channel. We show through multidimensional particle-in-cell simulations that this scenario excites the mushroom instability (MI), a transverse shear instability on the electron scale, when there is no overlap (no contact) between the e−e+ beam and the walls of the hollow plasma channel. The onset of the MI leads to the conversion of the beam's kinetic energy into magnetic (and electric) field energy, effectively slowing down a globally neutral body in the absence of contact. The collisionless shear physics explored in this configuration may operate in astrophysical environments, particularly in highly relativistic and supersonic settings where macroscopic shear processes are stable.

  7. Analysis of spent fuel assay with a lead slowing down spectrometer

    SciTech Connect

    Gavron, Victor I; Smith, L Eric; Ressler, Jennifer J

    2008-01-01

    Assay of fissile materials in spent fuel that are produced or depleted during the operation of a reactor, is of paramount importance to nuclear materials accounting, verification of the reactor operation history, as well as for criticality considerations for storage. In order to prevent future proliferation following the spread of nuclear energy, we must develop accurate methods to assay large quantities of nuclear fuels. We analyze the potential of using a Lead Slowing Down Spectrometer for assaying spent fuel. We conclude that it is possible to design a system that will provide around 1% statistical precision in the determination of the 239Pu, 241Pu and 235U concentrations in a PWR spent-fuel assembly, for intermediate-to-high burnup levels, using commercial neutron sources, and a system of 238U threshold fission detectors. Pending further analysis of systematic errors, it is possible that missing pins can be detected, as can asymmetry in the fuel bundle.

  8. Critical slowing down of spin fluctuations in BiFeO3

    NASA Astrophysics Data System (ADS)

    Scott, J. F.; Singh, M. K.; Katiyar, R. S.

    2008-10-01

    In earlier work we reported the discovery of phase transitions in BiFeO3 evidenced by divergences in the magnon light-scattering cross-sections at 140 and 201 K (Singh et al 2008 J. Phys.: Condens. Matter 20 252203) and fitted these intensity data to critical exponents α = 0.06 and α' = 0.10 (Scott et al 2008 J. Phys.: Condens. Matter 20 322203), under the assumption that the transitions are strongly magnetoelastic (Redfern et al 2008, in press) and couple to strain divergences through the Pippard relationship (Pippard 1956 Phil. Mag. 1 473). In the present paper we extend those criticality studies to examine the magnon linewidths, which exhibit critical slowing down (and hence linewidth narrowing) of spin fluctuations. The linewidth data near the two transitions are qualitatively different, and we cannot reliably extract a critical exponent ν, although the mean-field value ν = 1/2 gives a good fit near the lower transition.

  9. Analysis of spent fuel assay with a lead slowing down spectrometer

    SciTech Connect

    Gavron, Victor I; Smith, L. Eric; Ressler, Jennifer J

    2010-10-29

    Assay of fissile materials in spent fuel that are produced or depleted during the operation of a reactor, is of paramount importance to nuclear materials accounting, verification of the reactor operation history, as well as for criticality considerations for storage. In order to prevent future proliferation following the spread of nuclear energy, we must develop accurate methods to assay large quantities of nuclear fuels. We analyze the potential of using a Lead Slowing Down Spectrometer for assaying spent fuel. We conclude that it is possible to design a system that will provide around 1% statistical precision in the determination of the 239Pu, 241Pu and 235U concentrations in a PWR spent-fuel assembly, for intermediate-to-high burnup levels, using commercial neutron sources, and a system of 238U threshold fission detectors. Pending further analysis of systematic errors, it is possible that missing pins can be detected, as can asymmetry in the fuel bundle.

  10. PT-symmetric slowing down of decoherence

    SciTech Connect

    Gardas, Bartlomiej; Deffner, Sebastian; Saxena, Avadh Behari

    2016-10-27

    Here, we investigate PT-symmetric quantum systems ultraweakly coupled to an environment. We find that such open systems evolve under PT-symmetric, purely dephasing and unital dynamics. The dynamical map describing the evolution is then determined explicitly using a quantum canonical transformation. Furthermore, we provide an explanation of why PT-symmetric dephasing-type interactions lead to a critical slowing down of decoherence. This effect is further exemplified with an experimentally relevant system, a PT-symmetric qubit easily realizable, e.g., in optical or microcavity experiments.

  11. Lead Slowing Down Spectrometry Analysis of Data from Measurements on Nuclear Fuel

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Kulisek, Jonathan A.; Danon, Yaron; Weltz, Adam; Gavron, Victor A.; Harris, Jason; Stewart, Trevor N.

    2015-01-12

    Improved non-destructive assay of isotopic masses in used nuclear fuel would be valuable for nuclear safeguards operations associated with the transport, storage and reprocessing of used nuclear fuel. Our collaboration is examining the feasibility of using lead slowing down spectrometry techniques to assay the isotopic fissile masses in used nuclear fuel assemblies. We present the application of our analysis algorithms to measurements conducted with a lead spectrometer. The measurements involved a single fresh fuel pin and discrete 239Pu and 235U samples. We are able to determine the isotopic fissile masses with root-mean-square errors over seven different configurations of 6.35% for 239Pu and 2.7% for 235U.

  12. Gel mesh as "brake" to slow down DNA translocation through solid-state nanopores.

    PubMed

    Tang, Zhipeng; Liang, Zexi; Lu, Bo; Li, Ji; Hu, Rui; Zhao, Qing; Yu, Dapeng

    2015-08-21

    Agarose gel is introduced onto the cis side of silicon nitride nanopores by a simple and low-cost method to slow down the speed of DNA translocation. DNA translocation speed is slowed by roughly an order of magnitude without losing signal to noise ratio for different DNA lengths and applied voltages in gel-meshed nanopores. The existence of the gel moves the center-of-mass position of the DNA conformation further from the nanopore center, contributing to the observed slowing of translocation speed. A reduced velocity fluctuation is also noted, which is beneficial for further applications of gel-meshed nanopores. The reptation model is considered in simulation and agrees well with the experimental results.

  13. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY11 Status Report

    SciTech Connect

    Warren, Glen A.; Casella, Andrew M.; Haight, R. C.; Anderson, Kevin K.; Danon, Yaron; Hatchett, D.; Becker, Bjorn; Devlin, M.; Imel, G. R.; Beller, D.; Gavron, A.; Kulisek, Jonathan A.; Bowyer, Sonya M.; Gesh, Christopher J.; O'Donnell, J. M.

    2011-08-01

    Executive Summary: Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 collaboration activities. Progress made by the collaboration in FY2011 continues to indicate the promise of LSDS techniques applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model demonstrated the potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used-fuel parameter space. Similar results were obtained using a perturbation approach developed by LANL. Benchmark measurements have been successfully conducted at LANL and at RPI using their respective LSDS instruments. The ISU and UNLV collaborative effort is focused on the fabrication and testing of prototype fission chambers lined with ultra-depleted 238U and 232Th, and uranium deposition on a stainless steel disc using spiked U3O8 from room-temperature ionic liquid was successful, with improved thickness obtained. In FY2012, the collaboration plans a broad array of activities. PNNL will focus on optimizing its empirical model and minimizing its reliance on calibration data, as well as continuing efforts to develop an analytical model. Additional measurements are

  14. Spines slow down dendritic chloride diffusion and affect short-term ionic plasticity of GABAergic inhibition

    PubMed Central

    Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U. Valentin; Santamaria, Fidel; Jedlicka, Peter

    2016-01-01

    Cl− plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl− is not well understood. The role of spines in Cl− diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl− changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl− dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl− diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl− extrusion altered Cl− diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl− diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl− diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed. PMID:26987404

  15. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells

    PubMed Central

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-01-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3–4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting. PMID:27189321

  16. Non-destructive Assay Measurements Using the RPI Lead Slowing Down Spectrometer

    SciTech Connect

    Becker, Bjorn; Weltz, Adam; Kulisek, Jonathan A.; Thompson, J. T.; Thompson, N.; Danon, Yaron

    2013-10-01

    The use of a Lead Slowing-Down Spectrometer (LSDS) is considered as a possible option for non-destructive assay of the fissile material in used nuclear fuel. The primary objective is to quantify the 239Pu and 235U fissile content via a direct measurement, distinguishing them through their characteristic fission spectra in the LSDS. In this paper, we present several assay measurements performed at the Rensselaer Polytechnic Institute (RPI) to demonstrate the feasibility of such a method and to provide benchmark experiments for Monte Carlo calculations of the assay system. A fresh UOX fuel rod from the RPI Criticality Research Facility, a 239PuBe source and several highly enriched 235U discs were assayed in the LSDS. The characteristic fission spectra were measured with 238U and 232Th threshold fission chambers, which are only sensitive to fission neutrons with energy above the threshold. Despite the constant neutron and gamma background from the PuBe source and the intense interrogation neutron flux, the LSDS system was able to measure the characteristic 235U and 239Pu responses. All measurements were compared to Monte Carlo simulations. It was shown that the available simulation tools and models are well suited to simulate the assay, and that it is possible to calculate the absolute count rate in all investigated cases.

  17. Spines slow down dendritic chloride diffusion and affect short-term ionic plasticity of GABAergic inhibition

    NASA Astrophysics Data System (ADS)

    Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U. Valentin; Santamaria, Fidel; Jedlicka, Peter

    2016-03-01

    Cl‑ plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl‑ is not well understood. The role of spines in Cl‑ diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl‑ changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl‑ dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl‑ diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl‑ extrusion altered Cl‑ diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl‑ diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl‑ diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed.

  18. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells

    NASA Astrophysics Data System (ADS)

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-05-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3-4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting.

  19. Inverse patchy colloids with small patches: fluid structure and dynamical slowing down

    NASA Astrophysics Data System (ADS)

    Ferrari, Silvano; Bianchi, Emanuela; Kalyuzhnyi, Yura V.; Kahl, Gerhard

    2015-06-01

    Inverse patchy colloids (IPCs) differ from conventional patchy particles because their patches repel (rather than attract) each other and attract (rather than repel) the part of the colloidal surface that is free of patches. These particular features occur, e.g. in heterogeneously charged colloidal systems. Here we consider overall neutral IPCs carrying two, relatively small, polar patches. Previous studies of the same model under planar confinement have evidenced the formation of branched, disordered aggregates composed of ring-like structures. We investigate here the bulk behavior of the system via molecular dynamics simulations, focusing on both the structure and the dynamics of the fluid phase in a wide region of the phase diagram. Additionally, the simulation results for the static observables are compared to the Associative Percus Yevick solution of an integral equation approach based on the multi-density Ornstein-Zernike theory. A good agreement between theoretical and numerical quantities is observed even in the region of the phase diagram where the slowing down of the dynamics occurs.

  20. Can vitamin D slow down the progression of chronic kidney disease?

    PubMed

    Shroff, Rukshana; Wan, Mandy; Rees, Lesley

    2012-12-01

    Pharmacological blockade of the renin-angiotensin-aldosterone system (RAAS) is the cornerstone of renoprotective therapy, and the reduction of persistent RAAS activation is considered to be an important target in the treatment of chronic kidney disease (CKD). Vitamin D is a steroid hormone that controls a broad range of metabolic and cell regulatory functions. It acts as a transcription factor and can suppress the renin gene, thereby acting as a negative endocrine regulator of RAAS. RAAS activation can reduce renal Klotho expression, and the Klotho-fibroblast growth factor 23 interaction may further reduce the production of active vitamin D. Results from both clinical and experimental studies suggest that vitamin D therapy is associated with a reduction in blood pressure and left ventricular hypertrophy and improves cardiovascular outcomes. In addition, a reduction in angiotensin II through RAAS blockade may have anti-proteinuric and anti-fibrotic effects. Vitamin D has also been shown to modulate the immune system, regulate inflammatory responses, improve insulin sensitivity and reduce high-density lipoprotein cholesterol. Taken together, these pleiotropic effects of vitamin D may slow down the progression of CKD. In this review, we discuss the experimental and early clinical findings that suggest a renoprotective effect of vitamin D, thereby providing an additional rationale beyond mineral metabolism for the close monitoring of, and supplementation with vitamin D from the earliest stages of CKD.

  1. Transient slowing down relaxation dynamics of the supercooled dusty plasma liquid after quenching.

    PubMed

    Su, Yen-Shuo; Io, Chong-Wai; I, Lin

    2012-07-01

    The spatiotemporal evolution of microstructure and motion in the transient relaxation toward the steady supercooled liquid state after quenching a dusty plasma Wigner liquid, formed by charged dust particles suspended in a low-pressure discharge, is experimentally investigated through direct optical microscopy. It is found that the quenched liquid slowly evolves to a colder state with more heterogeneities in structure and motion. Hopping particles and defects appear in the form of clusters with multiscale cluster size distributions. Via the structure rearrangement induced by the reduced thermal agitation from the cold thermal bath after quenching, the temporarily stored strain energy can be cascaded through the network to different newly distorted regions and dissipated after transfer to nonlinearly coupled motions at different scales. This leads to the observed self-similar multiscale slowing down relaxation, with power-law increases of structural order and structural relaxation time, similar power-law decreases of particle motion at different time scales, and stronger and slower fluctuations with increasing waiting time toward the new steady state.

  2. Exercise and disease progression in multiple sclerosis: can exercise slow down the progression of multiple sclerosis?

    PubMed Central

    Stenager, Egon

    2012-01-01

    It has been suggested that exercise (or physical activity) might have the potential to have an impact on multiple sclerosis (MS) pathology and thereby slow down the disease process in MS patients. The objective of this literature review was to identify the literature linking physical exercise (or activity) and MS disease progression. A systematic literature search was conducted in the following databases: PubMed, SweMed+, Embase, Cochrane Library, PEDro, SPORTDiscus and ISI Web of Science. Different methodological approaches to the problem have been applied including (1) longitudinal exercise studies evaluating the effects on clinical outcome measures, (2) cross-sectional studies evaluating the relationship between fitness status and MRI findings, (3) cross-sectional and longitudinal studies evaluating the relationship between exercise/physical activity and disability/relapse rate and, finally, (4) longitudinal exercise studies applying the experimental autoimmune encephalomyelitis (EAE) animal model of MS. Data from intervention studies evaluating disease progression by clinical measures (1) do not support a disease-modifying effect of exercise; however, MRI data (2), patient-reported data (3) and data from the EAE model (4) indicate a possible disease-modifying effect of exercise, but the strength of the evidence limits definite conclusions. It was concluded that some evidence supports the possibility of a disease-modifying potential of exercise (or physical activity) in MS patients, but future studies using better methodologies are needed to confirm this. PMID:22435073

  3. Assaying Used Nuclear Fuel Assemblies Using Lead Slowing-Down Spectroscopy and Singular Value Decomposition

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.

    2013-04-01

    This study investigates the use of a Lead Slowing-Down Spectrometer (LSDS) for the direct and independent measurement of fissile isotopes in light-water nuclear reactor fuel assemblies. The current study applies MCNPX, a Monte Carlo radiation transport code, to simulate the measurement of the assay of the used nuclear fuel assemblies in the LSDS. An empirical model has been developed based on the calibration of the LSDS to responses generated from the simulated assay of six well-characterized fuel assemblies. The effects of self-shielding are taken into account by using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the self-shielding functions from the assay of assemblies in the calibration set. The performance of the empirical algorithm was tested on version 1 of the Next-Generation Safeguards Initiative (NGSI) used fuel library consisting of 64 assemblies, as well as a set of 27 diversion assemblies, both of which were developed by Los Alamos National Laboratory. The potential for direct and independent assay of the sum of the masses of Pu-239 and Pu-241 to within 2%, on average, has been demonstrated.
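    The calibration-plus-SVD idea can be sketched generically: build an empirical basis from the self-shielding functions of calibration assemblies, then fit an unknown response in that basis. The matrices below are random stand-ins, not NGSI library data.

```python
import numpy as np

# Schematic of a calibration-plus-SVD assay model.  Real inputs would be
# LSDS responses simulated or measured for well-characterized assemblies;
# here they are random stand-ins.
rng = np.random.default_rng(4)
n_energy, n_calib = 200, 6

S = rng.random((n_energy, n_calib))        # self-shielding functions (stand-in)
U, s, Vt = np.linalg.svd(S, full_matrices=False)
k = 3                                      # keep the dominant empirical modes
basis = U[:, :k]

measured = S @ rng.random(n_calib)         # fake "unknown assembly" response
coeffs, *_ = np.linalg.lstsq(basis, measured, rcond=None)
reconstructed = basis @ coeffs
rel_err = np.linalg.norm(measured - reconstructed) / np.linalg.norm(measured)
print(f"rank-{k} reconstruction error: {rel_err:.2%}")
```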

  4. Critical phase shifts slow down circadian clock recovery: implications for jet lag.

    PubMed

    Leloup, Jean-Christophe; Goldbeter, Albert

    2013-09-21

    Advancing or delaying the light-dark (LD) cycle perturbs the circadian clock, which eventually recovers its original phase with respect to the new LD cycle. Readjustment of the clock occurs by shifting its phase in the same (orthodromic re-entrainment) or opposite direction (antidromic re-entrainment) as the shift in the LD cycle. To investigate circadian clock recovery after phase shifts of the LD cycle we use a detailed computational model previously proposed for the cellular regulatory network underlying the mammalian circadian clock. The model predicts the existence of a sharp threshold separating orthodromic from antidromic re-entrainment. In the vicinity of this threshold, resynchronization of the clock after a phase shift markedly slows down. The type of re-entrainment, the position of the threshold and the time required for resynchronization depend on multiple factors such as the autonomous period of the clock, the direction and magnitude of the phase shift, the clock biochemical kinetic parameters, and light intensity. Partitioning the phase shift into a series of smaller phase shifts decreases the impact on the recovery of the circadian clock. We use the phase response curve to predict the location of the threshold separating orthodromic and antidromic re-entrainment after advanced or delayed phase shifts of the LD cycle. The marked increase in recovery times predicted near the threshold could be responsible for the most severe disturbances of the human circadian clock associated with jet lag.

  5. Spines slow down dendritic chloride diffusion and affect short-term ionic plasticity of GABAergic inhibition.

    PubMed

    Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U Valentin; Santamaria, Fidel; Jedlicka, Peter

    2016-03-18

    Cl(-) plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl(-) is not well understood. The role of spines in Cl(-) diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl(-) changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl(-) dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl(-) diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl(-) extrusion altered Cl(-) diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl(-) diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl(-) diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed.

  6. Exercise and disease progression in multiple sclerosis: can exercise slow down the progression of multiple sclerosis?

    PubMed

    Dalgas, Ulrik; Stenager, Egon

    2012-03-01

    It has been suggested that exercise (or physical activity) might have the potential to have an impact on multiple sclerosis (MS) pathology and thereby slow down the disease process in MS patients. The objective of this literature review was to identify the literature linking physical exercise (or activity) and MS disease progression. A systematic literature search was conducted in the following databases: PubMed, SweMed+, Embase, Cochrane Library, PEDro, SPORTDiscus and ISI Web of Science. Different methodological approaches to the problem have been applied including (1) longitudinal exercise studies evaluating the effects on clinical outcome measures, (2) cross-sectional studies evaluating the relationship between fitness status and MRI findings, (3) cross-sectional and longitudinal studies evaluating the relationship between exercise/physical activity and disability/relapse rate and, finally, (4) longitudinal exercise studies applying the experimental autoimmune encephalomyelitis (EAE) animal model of MS. Data from intervention studies evaluating disease progression by clinical measures (1) do not support a disease-modifying effect of exercise; however, MRI data (2), patient-reported data (3) and data from the EAE model (4) indicate a possible disease-modifying effect of exercise, but the strength of the evidence limits definite conclusions. It was concluded that some evidence supports the possibility of a disease-modifying potential of exercise (or physical activity) in MS patients, but future studies using better methodologies are needed to confirm this.

  7. Traffic and Environmental Cues and Slow-Down Behaviors in Virtual Driving.

    PubMed

    Hsu, Chun-Chia; Chuang, Kai-Hsiang

    2016-02-01

    This study used a driving simulator to investigate whether the presence of pedestrians, and traffic engineering designs reported to reduce overall traffic speed at intersections, can facilitate drivers adopting lower impact-speed behaviors at pedestrian crossings. Twenty-eight men (M age = 39.9 yr., SD = 11.5) with drivers' licenses participated. Nine measures were obtained from the speed profiles of each participant. A 14-km virtual road was presented to the participants. It included experimental scenarios of a base intersection, pedestrian presence, a pedestrian warning sign at the intersection and in advance of the intersection, and perceptual lane narrowing by hatching lines. Compared to the base intersection, the presence of pedestrians caused drivers to slow down earlier and reach a lower minimum speed before the pedestrian crossing. This speed behavior was not fully evident when a pedestrian warning sign was added at an intersection or when perceptual lane narrowing extended to the stop line. Additionally, installing pedestrian warning signs in advance of the intersections, rather than at the intersections, was associated with higher impact speeds at pedestrian crossings.

  8. Slowing-down of non-equilibrium concentration fluctuations in confinement

    NASA Astrophysics Data System (ADS)

    Giraudet, Cédric; Bataller, Henri; Sun, Yifei; Donev, Aleksandar; María Ortiz de Zárate, José; Croccolo, Fabrizio

    2015-09-01

    Fluctuations in a fluid are strongly affected by the presence of a macroscopic gradient, which makes them long-ranged and enhances their amplitude. While small-scale fluctuations exhibit diffusive lifetimes, moderate-scale fluctuations decay faster because of gravity. In this letter we explore fluctuations of even larger size, comparable to the extent of the system in the direction of the gradient, and find experimental evidence of a dramatic slowing-down of their dynamics. We recover diffusive behavior for these strongly confined fluctuations, but with a diffusion coefficient that depends on the solutal Rayleigh number. Results from dynamic shadowgraph experiments are complemented by theoretical calculations and numerical simulations based on fluctuating hydrodynamics, and excellent agreement is found. Hence, the study of the dynamics of non-equilibrium fluctuations allows one to probe and measure the competition of physical processes such as diffusion, buoyancy and confinement, i.e. the ingredients included in the Rayleigh number, which is the control parameter of our system.

  9. Slowing Down the Presentation of Facial and Body Movements Enhances Imitation Performance in Children with Severe Autism

    ERIC Educational Resources Information Center

    Laine, France; Rauzy, Stephane; Tardif, Carole; Gepner, Bruno

    2011-01-01

    Imitation deficits observed among individuals with autism could be partly explained by the excessive speed of biological movements to be perceived and then reproduced. Along with this assumption, slowing down the speed of presentation of these movements might improve their imitative performances. To test this hypothesis, 19 children with autism,…

  10. Do calcium buffers always slow down the propagation of calcium waves?

    PubMed

    Tsai, Je-Chiang

    2013-12-01

    Calcium buffers are large proteins that act as binding sites for free cytosolic calcium. Since a large fraction of cytosolic calcium is bound to calcium buffers, calcium waves are widely observed under the condition that free cytosolic calcium is heavily buffered. In addition, all physiological buffered excitable systems contain multiple buffers with different affinities. It is thus important to understand the properties of waves in excitable systems with the inclusion of buffers. There is an ongoing controversy about whether or not the addition of calcium buffers into the system always slows down the propagation of calcium waves. To resolve this controversy, we incorporate the buffering effect into the generic excitable system, the FitzHugh-Nagumo model, to obtain the buffered FitzHugh-Nagumo model, and then study the effect of an added buffer with large diffusivity on traveling waves of such a model in one spatial dimension. We can find a critical dissociation constant K = K_a, characterized by the system excitability parameter a, such that calcium buffers can be classified into two types: weak buffers (K ∈ (K_a, ∞)) and strong buffers (K ∈ (0, K_a)). We analytically show that the addition of weak buffers, or of strong buffers with total concentration b_0^(1) below some critical total concentration b_0,c^(1), can generate a traveling wave of the resulting system which propagates faster than that of the original system, provided that the diffusivity D_1 of the added buffers is sufficiently large. Further, the magnitude of the wave speed of traveling waves of the resulting system is proportional to √D_1 as D_1 → ∞. In contrast, the addition of strong buffers with total concentration b_0^(1) > b_0,c^(1) may not support the formation of a biologically acceptable wave when the diffusivity D_1 of the added buffers is sufficiently large.
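
    The wave-speed comparison can be sketched numerically. The snippet below is a toy version of a buffered bistable front (the recovery variable of the full FitzHugh-Nagumo model is frozen for simplicity, so only the leading front is tracked); all parameter values, including the buffer kinetics k_on, k_off and total concentration b_tot, are invented for illustration rather than taken from the paper.

```python
import numpy as np

a = 0.1                                # excitability parameter in c(1-c)(c-a)
Dc, Db = 1.0, 20.0                     # calcium and buffer diffusivities
k_on, k_off, b_tot = 1.0, 0.2, 0.5     # buffer kinetics (K = k_off/k_on = 0.2)
nx, dx, dt, nt = 800, 0.5, 0.005, 40000

def lap(u):                            # 1D Laplacian with no-flux boundaries
    out = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    out[0] = 2 * (u[1] - u[0]) / dx**2
    out[-1] = 2 * (u[-2] - u[-1]) / dx**2
    return out

def front_speed(buffered):
    c = np.where(np.arange(nx) < 40, 1.0, 0.0)   # excited region on the left
    b = np.zeros(nx)                             # bound-buffer concentration
    pos = []
    for it in range(1, nt + 1):
        ex = k_on * c * (b_tot - b) - k_off * b if buffered else 0.0
        c = c + dt * (Dc * lap(c) + c * (1 - c) * (c - a) - ex)
        if buffered:
            b = b + dt * (Db * lap(b) + ex)
        if it in (nt // 2, nt):                  # front position at two times
            pos.append(dx * np.sum(c > 0.5))
    return (pos[1] - pos[0]) / (nt // 2 * dt)

print("front speed without buffer:", round(front_speed(False), 3))
print("front speed with buffer:   ", round(front_speed(True), 3))
```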

  11. Climatic Slow-down of the Pamir-Karakoram-Himalaya Glaciers Over the Last 25 Years

    NASA Astrophysics Data System (ADS)

    Dehecq, A.; Gourmelen, N.; Trouvé, E.

    2015-12-01

    Climate warming over the 20th century has caused drastic changes in mountain glaciers globally, and in the Himalayan glaciers in particular. The stakes are high; glaciers and ice caps are the largest contributor to the increase in the mass of the world's oceans, and the Himalayas play a key role in the hydrology of the region, impacting the economy, food safety and flood risk. Partial monitoring of the Himalayan glaciers has revealed a contrasting picture; while many of the Himalayan glaciers are retreating, locally stable or advancing glaciers have also been observed in this region. Several studies based on field measurements or remote sensing have shown a dominant slow-down of mountain glaciers globally in response to these changes, but they are restricted to a few glaciers or small regions, and none has analysed the dynamic response of glaciers to climate change at regional scales. Here we present a region-wide analysis of annual glacier flow velocity covering the Pamir-Karakoram-Himalaya region, obtained from the analysis of the entire archive of Landsat data. Over 90% of the ice-covered regions, as defined by the Randolph Glacier Inventory, are measured, with a precision on the retrieved velocity of the order of 4 m/yr. The change in velocities over the last 25 years will be analysed with reference to regional glacier mass balance and topographic characteristics. We show that the first-order temporal evolution of glacier flow mirrors the pattern of glacier mass balance. We observe a general decrease of ice velocity in regions of known ice mass loss, and a more complex pattern of mixed acceleration and deceleration in regions known to be affected by stable mass balance and surge-like behavior.

  13. Slowing down fat digestion and absorption by an oxadiazolone inhibitor targeting selectively gastric lipolysis.

    PubMed

    Point, Vanessa; Bénarouche, Anais; Zarrillo, Julie; Guy, Alexandre; Magnez, Romain; Fonseca, Laurence; Raux, Brigitt; Leclaire, Julien; Buono, Gérard; Fotiadu, Frédéric; Durand, Thierry; Carrière, Frédéric; Vaysse, Carole; Couëdelo, Leslie; Cavalier, Jean-François

    2016-11-10

    Based on a previous study and in silico molecular docking experiments, we have designed and synthesized a new series of ten 5-Alkoxy-N-3-(3-PhenoxyPhenyl)-1,3,4-Oxadiazol-2(3H)-one derivatives (RmPPOX). These molecules were further evaluated as selective and potent inhibitors of mammalian digestive lipases: purified dog gastric lipase (DGL) and guinea pig pancreatic lipase related protein 2 (GPLRP2), as well as porcine (PPL) and human (HPL) pancreatic lipases contained in porcine pancreatic extracts (PPE) and human pancreatic juices (HPJ), respectively. These compounds were found to strongly discriminate classical pancreatic lipases (poorly inhibited) from gastric lipase (fully inhibited). Among them, the 5-(2-(Benzyloxy)ethoxy)-3-(3-PhenoxyPhenyl)-1,3,4-Oxadiazol-2(3H)-one (BemPPOX) was identified as the most potent inhibitor of DGL, even more active than the FDA-approved drug Orlistat. BemPPOX and Orlistat were further compared in vitro in the course of test meal digestion, and in vivo with a mesenteric lymph duct cannulated rat model, to evaluate their respective impacts on fat absorption. While Orlistat inhibited both gastric and duodenal lipolysis and drastically reduced fat absorption in rats, BemPPOX showed a specific action on gastric lipolysis that slowed down the overall lipolysis process and led to a reduction of around 55% in the intestinal absorption of fatty acids compared to controls. All these data promote BemPPOX as a potent candidate to efficiently regulate gastrointestinal lipolysis, to investigate its link with satiety mechanisms, and thereby to develop new strategies to "fight against obesity".

  14. Nitric oxide acts as a slow-down and search signal in developing neurites.

    PubMed

    Trimm, Kevin R; Rehder, Vincent

    2004-02-01

    Nitric oxide (NO) has been demonstrated to act as a signaling molecule during neuronal development, but its precise function is unclear. Here we investigate whether NO might act at the neuronal growth cone to affect growth cone motility. We have previously demonstrated that growth cones of identified neurons from the snail Helisoma trivolvis show a rapid and transient increase in filopodial length in response to NO, an effect regulated by soluble guanylyl cyclase (sGC) [S. Van Wagenen and V. Rehder (1999) J. Neurobiol., 39, 168-185]. Because in vivo studies have demonstrated that growth cones have longer filopodia and advance more slowly in regions where pathfinding decisions are being made, this study aimed to establish whether NO could function as a combined 'slow-down and search signal' for growth cones by decreasing neurite outgrowth. In the presence of the NO donor NOC-7, neurite outgrowth of B5 neurons showed a concentration-dependent response, ranging from slowing at low, stopping at intermediate, and collapse at high concentrations. The effects of the NO donor were mimicked by directly activating sGC with YC-1, or by increasing its product with 8-bromo-cGMP. In addition, blocking sGC in the presence of NO with NS2028 abolished the effect of NO, suggesting that NO affected outgrowth via sGC. Ca2+ imaging of growth cones with Fura-2 indicated that [Ca2+]i increased transiently in the presence of NOC-7. These results support the hypothesis that NO can function as a potent slow/stop signal for developing neurites. When coupled with transient filopodial elongation, this phenomenon emulates growth cone searching behavior.

  15. Does time ever fly or slow down? The difficult interpretation of psychophysical data on time perception.

    PubMed

    García-Pérez, Miguel A

    2014-01-01

    Time perception is studied with subjective or semi-objective psychophysical methods. With subjective methods, observers provide quantitative estimates of duration, and the data depict the psychophysical function relating subjective duration to objective duration. With semi-objective methods, observers provide categorical or comparative judgments of duration, and the data depict the psychometric function relating the probability of a certain judgment to objective duration. Both approaches are used to study whether subjective and objective time run at the same pace, or whether time flies or slows down under certain conditions. We analyze theoretical aspects affecting the interpretation of data gathered with the most widely used semi-objective methods, including single-presentation and paired-comparison methods. For this purpose, a formal model of psychophysical performance is used in which subjective duration is represented via a psychophysical function and the scalar property. This provides the timing component of the model, which is invariant across methods. A decisional component that varies across methods reflects how observers use subjective durations to make judgments and give the responses requested under each method. Application of the model shows that psychometric functions in single-presentation methods are uninterpretable, because the various influences on observed performance are inextricably confounded in the data. In contrast, data gathered with paired-comparison methods permit separating out those influences. Prevalent approaches to fitting psychometric functions to data are also discussed and shown to be inconsistent with widely accepted principles of time perception, implicitly assuming instead that subjective time equals objective time and that observed differences across conditions reflect not differences in perceived duration but criterion shifts. These analyses prompt evidence-based recommendations for best methodological practice in studies on time perception.

  16. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.

    2011-09-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used-fuel parameter space was demonstrated. Also in FY2011, PNNL continued to develop an analytical model. These efforts included the addition of six more non-fissile absorbers to the analytical shielding function and accounting for the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (the sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as the purely empirical approach.
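
    The SVD idea can be made concrete with a schematic toy (synthetic stand-in data and response shapes; this is not PNNL's actual algorithm): time spectra from a few calibration assemblies are decomposed, the dominant singular vectors serve as empirical basis functions, and fissile mass is regressed on the projection coefficients.

```python
import numpy as np

# Synthetic stand-in for an SVD-based LSDS calibration; all shapes, masses,
# and noise levels below are invented for illustration.
rng = np.random.default_rng(1)
n_cal, n_t = 6, 400                       # calibration assemblies, time bins
t = np.linspace(0.1, 4.0, n_t)            # slowing-down time (arbitrary units)

mass = rng.uniform(1.0, 3.0, n_cal)       # "true" fissile masses (kg, invented)
shapes = np.array([np.exp(-t / w) for w in (0.5, 1.5)])  # toy response shapes
spectra = (np.outer(mass, shapes[0])          # fission response ~ mass
           + np.outer(mass**0.7, shapes[1])   # toy self-shielding distortion
           + 0.01 * rng.standard_normal((n_cal, n_t)))

U, s, Vt = np.linalg.svd(spectra, full_matrices=False)
k = 2                                      # keep the dominant basis vectors
coeff = spectra @ Vt[:k].T                 # projection onto the SVD basis
beta, *_ = np.linalg.lstsq(coeff, mass, rcond=None)   # calibration fit

test = np.outer([2.2], shapes[0]) + np.outer([2.2**0.7], shapes[1])
pred = (test @ Vt[:k].T) @ beta
print(f"predicted mass {pred[0]:.2f} kg vs true 2.20 kg")
```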

  17. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY12 Status Report

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Siciliano, Edward R.; Warren, Glen A.

    2012-09-28

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory methods. This document is a progress report for FY2012 PNNL analysis and algorithm development. Progress made by PNNL in FY2012 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel assemblies. PNNL further refined the semi-empirical model developed in FY2011, based on singular value decomposition (SVD), to numerically account for the effects of self-shielding. The average uncertainty in the Pu mass across the NGSI-64 fuel assemblies was shown to be less than 3% using only six calibration assemblies with a 2% uncertainty in the isotopic masses. When calibrated against the six NGSI-64 fuel assemblies, the algorithm was able to determine the total Pu mass to within <2% uncertainty for the 27 diversion cases also developed under NGSI. Two purely empirical algorithms were developed that do not require the use of Pu isotopic fission chambers. The semi-empirical and purely empirical algorithms were successfully tested using MCNPX simulations as well as applied to experimental data measured by RPI using their LSDS. The algorithms were able to describe the 235U masses of the RPI measurements with an average uncertainty of 2.3%. Analyses were also conducted that provided valuable insight with regard to design requirements.

  18. HF(v′ = 3) forward scattering in the F + H2 reaction: Shape resonance and slow-down mechanism

    PubMed Central

    Wang, Xingan; Dong, Wenrui; Qiu, Minghui; Ren, Zefeng; Che, Li; Dai, Dongxu; Wang, Xiuyan; Yang, Xueming; Sun, Zhigang; Fu, Bina; Lee, Soo-Y.; Xu, Xin; Zhang, Dong H.

    2008-01-01

    Crossed molecular beam experiments and accurate quantum dynamics calculations have been carried out to address the long-standing and intriguing issue of the forward scattering observed in the F + H2 → HF(v′ = 3) + H reaction. Our study reveals that forward scattering in this reaction channel is not caused by Feshbach or dynamical resonances, as it is in the F + H2 → HF(v′ = 2) + H reaction. It is caused predominantly by the slow-down mechanism over the centrifugal barrier in the exit channel, with a small contribution from the shape resonance mechanism in a very narrow collision energy regime slightly above the HF(v′ = 3) threshold. Our analysis also shows that forward scattering caused by dynamical resonances can very likely be accompanied by forward scattering in a different product vibrational state caused by a slow-down mechanism. PMID:18434547

  19. HF(v' = 3) forward scattering in the F + H2 reaction: shape resonance and slow-down mechanism.

    PubMed

    Wang, Xingan; Dong, Wenrui; Qiu, Minghui; Ren, Zefeng; Che, Li; Dai, Dongxu; Wang, Xiuyan; Yang, Xueming; Sun, Zhigang; Fu, Bina; Lee, Soo-Y; Xu, Xin; Zhang, Dong H

    2008-04-29

    Crossed molecular beam experiments and accurate quantum dynamics calculations have been carried out to address the long-standing and intriguing issue of the forward scattering observed in the F + H(2) --> HF(v' = 3) + H reaction. Our study reveals that forward scattering in this reaction channel is not caused by Feshbach or dynamical resonances, as it is in the F + H(2) --> HF(v' = 2) + H reaction. It is caused predominantly by the slow-down mechanism over the centrifugal barrier in the exit channel, with a small contribution from the shape resonance mechanism in a very narrow collision energy regime slightly above the HF(v' = 3) threshold. Our analysis also shows that forward scattering caused by dynamical resonances can very likely be accompanied by forward scattering in a different product vibrational state caused by a slow-down mechanism.

  20. Lack of Critical Slowing Down Suggests that Financial Meltdowns Are Not Critical Transitions, yet Rising Variability Could Signal Systemic Risk.

    PubMed

    Guttal, Vishwesha; Raghavendra, Srinivas; Goel, Nikunj; Hoarau, Quentin

    2016-01-01

    Complex-systems-inspired analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms.
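
    Both indicators contrasted in this study are straightforward to compute. The sketch below uses a synthetic return series with growing shock amplitude but fixed memory, so the rolling variance rises while the lag-1 autocorrelation (the canonical critical-slowing-down signal) stays flat; the window length is an arbitrary choice, not the paper's.

```python
import numpy as np

# Synthetic AR(1) return series whose shocks grow in amplitude over time:
# variability rises, but the system's "memory" (lag-1 autocorrelation,
# the critical-slowing-down indicator) stays constant.
rng = np.random.default_rng(0)
n = 2000
noise = rng.standard_normal(n) * np.linspace(1.0, 3.0, n)  # growing shocks
returns = np.zeros(n)
for i in range(1, n):
    returns[i] = 0.3 * returns[i - 1] + noise[i]           # fixed memory phi

w = 250                                  # rolling window (invented length)
var, ac1 = [], []
for i in range(n - w):
    seg = returns[i:i + w]
    var.append(seg.var())
    ac1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])

print(f"variance: start {var[0]:7.2f} -> end {var[-1]:7.2f} (rises)")
print(f"lag-1 AC: start {ac1[0]:7.2f} -> end {ac1[-1]:7.2f} (flat)")
```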

  1. Lack of Critical Slowing Down Suggests that Financial Meltdowns Are Not Critical Transitions, yet Rising Variability Could Signal Systemic Risk

    PubMed Central

    Hoarau, Quentin

    2016-01-01

    Complex-systems-inspired analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms. PMID:26761792

  2. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY12 Status Report

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, A.; Haight, R. C.; Harris, Jason; Imel, G. R.; Kulisek, Jonathan A.; O'Donnell, J. M.; Stewart, T.; Weltz, Adam

    2012-10-01

    Executive Summary The Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign is supporting a multi-institutional collaboration to study the feasibility of using Lead Slowing Down Spectroscopy (LSDS) to conduct direct, independent and accurate assay of fissile isotopes in used fuel assemblies. The collaboration consists of Pacific Northwest National Laboratory (PNNL), Los Alamos National Laboratory (LANL), Rensselaer Polytechnic Institute (RPI), Idaho State University (ISU). There are three main challenges to implementing LSDS to assay used fuel assemblies. These challenges are the development of an algorithm for interpreting the data with an acceptable accuracy for the fissile masses, the development of suitable detectors for the technique, and the experimental benchmarking of the approach. This report is a summary of the progress in these areas made by the collaboration during FY2012. Significant progress was made on the project in FY2012. Extensive characterization of a “semi-empirical” algorithm was conducted. For example, we studied the impact on the accuracy of this algorithm by the minimization of the calibration set, uncertainties in the calibration masses, and by the choice of time window. Issues such a lead size, number of required neutrons, placement of the neutron source and the impact of cadmium around the detectors were also studied. In addition, new algorithms were developed that do not require the use of plutonium fission chambers. These algorithms were applied to measurement data taken by RPI and shown to determine the 235U mass within 4%. For detectors, a new concept for a fast neutron detector involving 4He recoil from neutron scattering was investigated. The detector has the potential to provide a couple of orders of magnitude more sensitivity than 238U fission chambers. Progress was also made on the more conventional approach of using 232Th fission chambers as fast neutron detectors. For

  3. Electron slowing-down spectra in water for electron and photon sources calculated with the Geant4-DNA code.

    PubMed

    Vassiliev, Oleg N

    2012-02-21

    Recently, a very low energy extension was added to the Monte Carlo simulation toolkit Geant4. It is intended for radiobiological modeling and is referred to as Geant4-DNA. Its performance, however, has not been systematically benchmarked in terms of transport characteristics. This study reports on the electron slowing-down spectra and the mean energy per ion pair, the W-value, in water for monoenergetic electron and photon sources calculated with Geant4-DNA. These quantities depend on electron energy, but not on spatial or angular variables, which makes them a good choice for testing the model of energy transfer processes. The spectra also have scientific value for radiobiological modeling, as they describe the energy distribution of electrons entering small volumes, such as the cell nucleus. Comparisons of Geant4-DNA results with previous studies showed overall good agreement. Some differences in slowing-down spectra between Geant4-DNA and previous studies were found at 100 eV and at approximately 500 eV; these were attributed to approximations in the models of vibrational excitations and of atomic de-excitation after ionization by electron impact. We also found that the high-energy part of the Geant4-DNA spectrum for a 1 keV electron source was higher, and the asymptotic high-energy W-value lower, than previous studies reported.

  4. Slow-down of 13C spin diffusion in organic solids by fast MAS: a CODEX NMR Study.

    PubMed

    Reichert, D; Bonagamba, T J; Schmidt-Rohr, K

    2001-07-01

    One- and two-dimensional 13C exchange nuclear magnetic resonance experiments under magic-angle spinning (MAS) can provide detailed information on slow segmental reorientations and chemical exchange in organic solids, including polymers and proteins. However, observations of dynamics on the time scale of seconds or longer are hampered by the competing process of dipolar 13C spin exchange (spin diffusion). In this Communication, we show that fast MAS can significantly slow down the dipolar spin exchange effect for unprotonated carbon sites. The exchange is measured quantitatively using the centerband-only detection of exchange technique, which enables the detection of exchange at any spinning speed, even in the absence of changes of isotropic chemical shifts. For chemically equivalent unprotonated 13C sites, the dipolar spin exchange rate is found to decrease slightly less than proportionally with the sample-rotation frequency, between 8 and 28 kHz. In the same range, the dipolar spin exchange rate for a glassy polymer with an inhomogeneously broadened MAS line decreases by a factor of 10. For methylene groups, no or only a minor slow-down of the exchange rate is found.

  5. The Widom-Rowlinson mixture on a sphere: elimination of exponential slowing down at first-order phase transitions.

    PubMed

    Fischer, T; Vink, R L C

    2010-03-17

    Computer simulations of first-order phase transitions using 'standard' toroidal boundary conditions are generally hampered by exponential slowing down. This is partly due to interface formation, and partly due to shape transitions. The latter occur when droplets become large such that they self-interact through the periodic boundaries. On a spherical simulation topology, however, shape transitions are absent. We expect that by using an appropriate bias function, exponential slowing down can be largely eliminated. In this work, these ideas are applied to the two-dimensional Widom-Rowlinson mixture confined to the surface of a sphere. Indeed, on the sphere, we find that the number of Monte Carlo steps needed to sample a first-order phase transition does not increase exponentially with system size, but rather as a power law τ ∝ V^α, with α ≈ 2.5 and V the system area. This is remarkably close to a random walk, for which α_RW = 2. The benefit of this improved scaling behavior for biased sampling methods, such as the Wang-Landau algorithm, is investigated in detail.
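
    For reference, the quoted exponent is simply the slope of a log-log fit of sampling time against system area; a minimal sketch with invented (V, τ) values standing in for simulation output:

```python
import numpy as np

# Power-law fit tau ~ V**alpha on invented data generated with alpha = 2.5
# plus mild scatter, standing in for measured Monte Carlo sampling times.
V = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
tau = 3.0 * V**2.5 * np.exp(np.random.default_rng(7).normal(0, 0.05, 5))

alpha, logc = np.polyfit(np.log(V), np.log(tau), 1)   # slope = exponent
print(f"fitted exponent alpha = {alpha:.2f} (a random walk would give 2)")
```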

  6. Ultrafast Measurement of Critical Slowing Down of Hole-Spin Relaxation in Ferromagnetic GaMnAs

    NASA Astrophysics Data System (ADS)

    Patz, Aaron; Li, Tianqi; Perakis, Ilias; Liu, Xinyu; Furdyna, Jacek; Wang, Jigang

    2011-03-01

    We have studied ultrafast photoinduced hole spin relaxation in GaMnAs via degenerate ultrafast magneto-optical Kerr spectroscopy. Near-infrared pump pulses strongly excite the sample, and probe pulses at the same photon energy reveal subpicosecond demagnetization accompanied by energy and spin relaxation of holes, manifesting themselves as a fast (~200 fs) and a slow (ps) recovery of transient MOKE signals. By carefully analyzing the temporal profiles at different temperatures, we are able to isolate femtosecond hole spin relaxation processes, which are subject to a critical slowing down near the critical temperature of 77 K. These results demonstrate a new spectroscopy tool to study the highly elusive hole spin relaxation processes in heavily-doped, correlated spin systems, and have important implications for future applications of these materials in spintronics and magnetic-photonic-electronic multifunctional devices.

  7. MicroRNA-124 slows down the progression of Huntington's disease by promoting neurogenesis in the striatum.

    PubMed

    Liu, Tian; Im, Wooseok; Mook-Jung, Inhee; Kim, Manho

    2015-05-01

    MicroRNA-124 contributes to neurogenesis through regulating its targets, but its expression both in the brain of Huntington's disease mouse models and patients is decreased. However, the effects of microRNA-124 on the progression of Huntington's disease have not been reported. Results from this study showed that microRNA-124 increased the latency to fall for each R6/2 Huntington's disease transgenic mouse in the rotarod test. 5-Bromo-2'-deoxyuridine (BrdU) staining of the striatum shows an increase in neurogenesis. In addition, brain-derived neurotrophic factor and peroxisome proliferator-activated receptor gamma coactivator 1-alpha protein levels in the striatum were increased and SRY-related HMG box transcription factor 9 protein level was decreased. These findings suggest that microRNA-124 slows down the progression of Huntington's disease possibly through its important role in neuronal differentiation and survival.

  8. FOXO/DAF-16 Activation Slows Down Turnover of the Majority of Proteins in C. elegans

    SciTech Connect

    Dhondt, Ineke; Petyuk, Vladislav A.; Cai, Huaihan; Vandemeulebroucke, Lieselot; Vierstraete, Andy; Smith, Richard D.; Depuydt, Geert; Braeckman, Bart  P.

    2016-09-13

    Most aging hypotheses assume the accumulation of damage, resulting in gradual physiological decline and, ultimately, death. Avoiding protein damage accumulation by enhanced turnover should slow down the aging process and extend the lifespan. However, lowering translational efficiency extends rather than shortens the lifespan in C. elegans. We studied turnover of individual proteins in the long-lived daf-2 mutant by combining SILeNCe (stable isotope labeling by nitrogen in Caenorhabditis elegans) and mass spectrometry. Intriguingly, the majority of proteins displayed prolonged half-lives in daf-2, whereas others remained unchanged, signifying that longevity is not supported by high protein turnover. We found that this slowdown was most prominent for translation-related and mitochondrial proteins. Conversely, the high turnover of lysosomal hydrolases and the very low turnover of cytoskeletal proteins remained largely unchanged. The slowdown of protein dynamics and the decreased abundance of the translational machinery may point to the importance of anabolic attenuation in lifespan extension, as suggested by the hyperfunction theory.
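
    For context, half-lives in such pulsed isotope-labeling experiments follow from a first-order turnover model fitted to the labeled fraction over time. The sketch below uses invented numbers, not the study's data:

```python
import numpy as np

# Generic first-order turnover model for pulsed isotope labeling:
# frac_new(t) = 1 - exp(-k t), so -ln(1 - f) is linear in t with slope k.
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])             # days on labeled food
frac_new = np.array([0.0, 0.18, 0.33, 0.55, 0.80])  # labeled fraction (invented)

y = -np.log(1.0 - frac_new[1:])
k = np.sum(y * t[1:]) / np.sum(t[1:] ** 2)          # least squares through origin
print(f"turnover rate k = {k:.3f}/day, half-life = {np.log(2)/k:.1f} days")
```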

  9. Causality-driven slow-down and speed-up of diffusion in non-Markovian temporal networks.

    PubMed

    Scholtes, Ingo; Wider, Nicolas; Pfitzner, René; Garas, Antonios; Tessone, Claudio J; Schweitzer, Frank

    2014-09-24

    Recent research has highlighted limitations of studying complex systems with time-varying topologies from the perspective of static, time-aggregated networks. Non-Markovian characteristics resulting from the ordering of interactions in temporal networks were identified as one important mechanism that alters causality and affects dynamical processes. So far, an analytical explanation for this phenomenon and for the significant variations observed across different systems has been missing. Here we introduce a methodology that allows one to analytically predict causality-driven changes of diffusion speed in non-Markovian temporal networks. Validating our predictions in six data sets, we show that, compared with the time-aggregated network, non-Markovian characteristics can lead to either a slow-down or a speed-up of diffusion, which can even outweigh the decelerating effect of community structures in the static topology. Thus, non-Markovian properties of temporal networks constitute an important additional dimension of complexity in time-varying complex systems.

  10. Group-index independent coupling to band engineered SOI photonic crystal waveguide with large slow-down factor.

    PubMed

    Rahimi, Somayyeh; Hosseini, Amir; Xu, Xiaochuan; Subbaraman, Harish; Chen, Ray T

    2011-10-24

    Group-index independent coupling to a silicon-on-insulator (SOI) based band-engineered photonic crystal waveguide (PCW) is presented. A single hole size is used for designing both the PCW coupler and the band-engineered PCW, to improve fabrication yield. The efficiency of several types of PCW couplers is numerically investigated. An on-chip integrated Fourier transform spectral interferometry device is used to experimentally determine the group index while excluding the effect of the couplers. Low-loss, low-dispersion slow-light transmission over an 18 nm bandwidth under the silica light line with a group index of 26.5 is demonstrated, corresponding to a slow-down factor of 0.31, the largest ever demonstrated for a PCW with oxide bottom cladding.

  11. Rural Growth Slows Down.

    ERIC Educational Resources Information Center

    Henry, Mark; And Others

    1987-01-01

    After decade of growth, rural income, population, and overall economic activity have stalled and again lag behind urban trends. Causes include banking and transportation deregulation, international competition, agricultural finance problems. Only nonmetropolitan counties dependent on retirement, government, and trade show continuing income growth…

  12. Regulation reform slows down

    SciTech Connect

    1995-03-29

    Regulatory reformers in Congress are easing off the accelerator as they recognize that some of their more far-reaching proposals lack sufficient support to win passage. Last week the proposed one-year moratorium on new regulations was set back in the Senate by its main sponsor, Sen. Don Nickles (R., OK), who now seeks to replace it with a more moderate bill. Nickles's substitute bill would give Congress 45 days after a regulation is issued to decide whether to reject it. It also retroactively allows for review of 80 regulations issued since November 9, 1994. Asked how his new proposal is superior to a moratorium, which is sharply opposed by the Clinton Administration, Nickles says he thinks it is better because it is permanent. The Chemical Manufacturers Association (CMA) has not publicly made a regulatory moratorium a top priority, but has quietly supported it by joining with other industry groups lobbying on the issue. A moratorium would halt EPA expansion of the Toxics Release Inventory (TRI) and allow the delisting of several TRI chemicals.

  13. D-Factor: A Quantitative Model of Application Slow-Down in Multi-Resource Shared Systems

    SciTech Connect

    Lim, Seung-Hwan; Huh, Jae-Seok; Kim, Youngjae; Shipman, Galen M; Das, Chita

    2012-01-01

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, including its energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this paper, we analyze the slow-down of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, on virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We also show that the model can be integrated with an existing on-line scheduler to minimize the makespan of workloads.
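
    One plausible reading of the quadratic model (our toy illustration, not the paper's fitted D-factor) is that each job carries a loading vector over shared resources, and the slow-down of a job grows with the resource-wise overlap between its loading vector and those of its co-scheduled peers, i.e. a quadratic form through a contention-weight matrix W. All numbers below are invented:

```python
import numpy as np

# Toy dilation-factor model: job i's completion-time inflation is
# 1 + sum_j x_i^T W x_j over its co-scheduled peers j, where x's are
# loading vectors over (cpu, disk, network) and W weights contention.
W = np.diag([0.8, 1.5, 1.0])            # contention weight per resource (toy)

def dilation(x_i, others):
    """Estimated completion-time inflation of job i under co-scheduling."""
    return 1.0 + sum(x_i @ W @ x_j for x_j in others)

jobs = {
    "cpu_job": np.array([0.9, 0.1, 0.1]),   # fractions of cpu/disk/net capacity
    "io_job":  np.array([0.2, 0.8, 0.1]),
    "mixed":   np.array([0.5, 0.5, 0.3]),
}
for name, x in jobs.items():
    peers = [v for k, v in jobs.items() if k != name]
    print(f"{name:8s} dilation factor ~ {dilation(x, peers):.2f}")
```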

  14. Measurement and Analysis Plan for Investigation of Spent-Fuel Assay Using Lead Slowing-Down Spectroscopy

    SciTech Connect

    Smith, Leon E.; Haas, Derek A.; Gavron, Victor A.; Imel, G. R.; Ressler, Jennifer J.; Bowyer, Sonya M.; Danon, Y.; Beller, D.

    2009-09-25

    Under funding from the Department of Energy Office of Nuclear Energy’s Materials, Protection, Accounting, and Control for Transmutation (MPACT) program (formerly the Advanced Fuel Cycle Initiative Safeguards Campaign), Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory (LANL) are collaborating to study the viability of lead slowing-down spectroscopy (LSDS) for spent-fuel assay. Based on the results of previous simulation studies conducted by PNNL and LANL to estimate potential LSDS performance, a more comprehensive study of LSDS viability has been defined. That study includes benchmarking measurements, development and testing of key enabling instrumentation, and continued study of time-spectra analysis methods. This report satisfies the requirements for a PNNL/LANL deliverable that describes the objectives, plans and contributing organizations for a comprehensive three-year study of LSDS for spent-fuel assay. This deliverable was generated largely during the LSDS workshop held on August 25-26, 2009 at Rensselaer Polytechnic Institute (RPI). The workshop itself was a prominent milestone in the FY09 MPACT project and is also described within this report.

  15. Forward scattering due to slow-down of the intermediate in the H + HD --> D + H2 reaction

    NASA Astrophysics Data System (ADS)

    Harich, Steven A.; Dai, Dongxu; Wang, Chia C.; Yang, Xueming; Chao, Sheng Der; Skodje, Rex T.

    2002-09-01

    Quantum dynamical processes near the energy barrier that separates reactants from products influence the detailed mechanism by which elementary chemical reactions occur. In fact, these processes can change the product scattering behaviour from that expected from simple collision considerations, as seen in the two classical reactions F + H2 --> HF + H and H + H2 --> H2 + H and their isotopic variants. In the case of the F + HD reaction, the role of a quantized trapped Feshbach resonance state had been directly determined, confirming previous conclusions that Feshbach resonances cause state-specific forward scattering of product molecules. Forward scattering has also been observed in the H + D2 --> HD + D reaction and attributed to a time-delayed mechanism. But despite extensive experimental and theoretical investigations, the details of the mechanism remain unclear. Here we present crossed-beam scattering experiments and quantum calculations on the H + HD --> H2 + D reaction. We find that the motion of the system along the reaction coordinate slows down as it approaches the top of the reaction barrier, thereby allowing vibrations perpendicular to the reaction coordinate and forward scattering. The reaction thus proceeds, as previously suggested, through a well-defined `quantized bottleneck state' different from the trapped Feshbach resonance states observed before.

  16. FOXO/DAF-16 Activation Slows Down Turnover of the Majority of Proteins in C. elegans

    DOE PAGES

    Dhondt, Ineke; Petyuk, Vladislav A.; Cai, Huaihan; ...

    2016-09-13

    Most aging hypotheses assume the accumulation of damage, resulting in gradual physiological decline and, ultimately, death. Avoiding protein damage accumulation by enhanced turnover should slow down the aging process and extend the lifespan. However, lowering translational efficiency extends rather than shortens the lifespan in C. elegans. We studied turnover of individual proteins in the long-lived daf-2 mutant by combining SILeNCe (stable isotope labeling by nitrogen in Caenorhabditis elegans) and mass spectrometry. Intriguingly, the majority of proteins displayed prolonged half-lives in daf-2, whereas others remained unchanged, signifying that longevity is not supported by high protein turnover. We found that this slowdown was most prominent for translation-related and mitochondrial proteins. Conversely, the high turnover of lysosomal hydrolases and the very low turnover of cytoskeletal proteins remained largely unchanged. The slowdown of protein dynamics and the decreased abundance of the translational machinery may point to the importance of anabolic attenuation in lifespan extension, as suggested by the hyperfunction theory.

  17. Slow Down and Concentrate: Time for a Paradigm Shift in Fall Prevention among People with Parkinson's Disease?

    PubMed

    Stack, Emma L; Roberts, Helen C

    2013-01-01

    Introduction. We know little about how environmental challenges beyond the home exacerbate difficulty moving and lead to falls among people with Parkinson's (PwP). Aims. To survey falls beyond the home, identifying challenges amenable to behaviour change. Methods. We distributed 380 questionnaires to PwP in Southern England, asking participants to count and describe falls beyond the home in the previous 12 months. Results. Among 255 responses, 136 PwP (diagnosed a median of 8 years earlier) reported falling beyond the home. They described 249 falls in detail, commonly falling forward after tripping in streets. Single fallers (one fall in 12 months) commonly missed their footing while walking or changing position and recovered to standing alone or with unfamiliar help. Repeat fallers (median falls, two) commonly felt shaken or embarrassed and sought medical advice. Very frequent fallers (falling at least monthly; median falls beyond the home, six) commonly fell backward, in shops, and after collapse, but often recovered to standing alone. Conclusion. Even independently active PwP who do not fall at home may fall beyond the home, often after tripping. Falling beyond the home may result in psychological and/or physical trauma (embarrassment if observed by strangers and/or injury if falling backwards onto a hard surface). Prevention requires vigilance and preparedness: slowing down and concentrating on a single task might effectively prevent falling.

  18. Epigenomic maintenance through dietary intervention can facilitate DNA repair process to slow down the progress of premature aging.

    PubMed

    Ghosh, Shampa; Sinha, Jitendra Kumar; Raghunath, Manchala

    2016-09-01

    DNA damage caused by various sources remains one of the most researched topics in the area of aging and neurodegeneration. Increased DNA damage causes premature aging. Aging is plastic and is characterised by the decline in the ability of a cell/organism to maintain genomic stability. Lifespan can be modulated by various interventions such as calorie restriction, a balanced diet of macro- and micronutrients, or supplementation with nutrients/nutrient formulations such as Amalaki rasayana, docosahexaenoic acid, resveratrol, curcumin, etc. Increased levels of DNA damage in the form of double-stranded and single-stranded breaks are associated with decreased longevity in animal models like WNIN/Ob obese rats. Erroneous DNA repair can result in accumulation of DNA damage products, which in turn result in premature aging disorders such as Hutchinson-Gilford progeria syndrome. Epigenomic studies of the aging process have opened a completely new arena for research and development of drugs and therapeutic agents. We propose here that agents or interventions that can maintain epigenomic stability and facilitate the DNA repair process can slow down the progress of premature aging, if not completely prevent it. © 2016 IUBMB Life, 68(9):717-721, 2016.

  19. [Slowing down the rate of irreversible age-related atrophy of the thymus gland by atopic autotransplantation of its tissue, subjected to long-term cryoconservation].

    PubMed

    Kulikov, A V; Arkhipova, L V; Smirnova, G N; Novoselova, E G; Shpurova, N A; Shishova, N V; Sukhikh, G T

    2010-01-01

    An experimental procedure has been developed that makes it possible to slow down the rate of irreversible atrophy of the thymus gland. Atopic autotransplantation of thymus tissue subjected to prolonged cryoconservation makes it possible to inhibit aging of the organism with respect to several biochemical and immunological indicators.

  20. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  1. Therapeutic dosages of aspirin counteract the IL-6 induced pro-tumorigenic effects by slowing down the ribosome biogenesis rate

    PubMed Central

    Brighenti, Elisa; Giannone, Ferdinando Antonino; Fornari, Francesca; Onofrillo, Carmine; Govoni, Marzia; Montanaro, Lorenzo; Treré, Davide; Derenzini, Massimo

    2016-01-01

    Chronic inflammation is a risk factor for the onset of cancer, and the regular use of aspirin reduces the risk of cancer development. Here we showed that therapeutic dosages of aspirin counteract the pro-tumorigenic effects of the inflammatory cytokine interleukin (IL)-6 in cancer and non-cancer cell lines, and in mouse liver in vivo. We found that therapeutic dosages of aspirin prevented IL-6 from inducing the down-regulation of p53 expression and the acquisition of the epithelial-mesenchymal transition (EMT) phenotypic changes in the cell lines. This was the result of a reduction in c-Myc mRNA transcription, which was responsible for a down-regulation of ribosomal protein S6 expression which, in turn, slowed down the rRNA maturation process, thus reducing the ribosome biogenesis rate. The perturbation of ribosome biogenesis hindered the Mdm2-mediated proteasomal degradation of p53, through the ribosomal protein-Mdm2-p53 pathway. p53 stabilization hindered the IL-6 induction of the EMT changes. The same effects were observed in livers from mice stimulated with IL-6 and treated with aspirin. It is worth noting that aspirin down-regulated ribosome biogenesis, stabilized p53 and up-regulated E-cadherin expression also in unstimulated control cells. In conclusion, these data showed that therapeutic dosages of aspirin increase the p53-mediated tumor-suppressor activity of cells and may in this way reduce the risk of cancer onset, whether or not linked to chronic inflammatory processes. PMID:27557515

  2. "You can save time if…"—A qualitative study on internal factors slowing down clinical trials in Sub-Saharan Africa

    PubMed Central

    Pfeiffer, Constanze; Limacher, Manuela; Burri, Christian

    2017-01-01

    Background The costs, complexity, legal requirements and number of amendments associated with clinical trials are rising constantly, which negatively affects the efficient conduct of trials. In Sub-Saharan Africa, this situation is exacerbated by capacity and funding limitations, which further increase the workload of clinical trialists. At the same time, trials are critically important for improving public health in these settings. The aim of this study was to identify the internal factors that slow down clinical trials in Sub-Saharan Africa. Here, factors are limited to those that relate exclusively to clinical trial teams and sponsors. These factors may be influenced independently of external conditions and may significantly increase trial efficiency if addressed by the respective teams. Methods We conducted sixty key informant interviews with clinical trial staff working in different positions in two clinical research centres in Kenya, Ghana, Burkina Faso and Senegal. The study covered English- and French-speaking, and Eastern and Western, parts of Sub-Saharan Africa. We performed thematic analysis of the interview transcripts. Results We found various internal factors associated with slowing down clinical trials; these were summarised into two broad themes, "planning" and "site organisation". These themes were consistently mentioned across positions and countries. "Planning" factors related to budget feasibility, clear project ideas, realistic deadlines, understanding of trial processes, adaptation to the local context and involvement of site staff in planning. "Site organisation" factors covered staff turnover, employment conditions, career paths, workload, delegation and management. Conclusions We found that internal factors slowing down clinical trials are of high importance to trial staff. Our data suggest that adequate and coherent planning, careful assessment of the setting, clear task allocation and management capacity strengthening may significantly increase trial efficiency.

  3. Effect of the size of experimental channels of the lead slowing-down spectrometer SVZ-100 (Institute for Nuclear Research, Moscow) on the moderation constant

    NASA Astrophysics Data System (ADS)

    Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M.; Ilić, R. D.

    2013-04-01

    Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10^3 to 10^4 times higher than that of time-of-flight (TOF) spectrometers. The high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross sections for samples with masses of about several micrograms. These features define a niche for the application of LSD spectrometers in measuring neutron cross sections of elements hardly available in macroscopic amounts—in particular, actinides. A mathematical simulation of the parameters of the SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive in calculations to the size and shape of the detecting volumes and, hence, to the real size of the experimental channels of the LSD spectrometer.
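
    For orientation, the moderation constant ties slowing-down time to mean neutron energy through the standard LSDS relation E ≈ K/(t + t0)^2. The converter below uses order-of-magnitude values quoted in the LSDS literature for lead; they are indicative only and are not the SVZ-100 calibration.

```python
# Slowing-down time -> mean neutron energy in a lead spectrometer,
# E ~ K / (t + t0)^2. K and t0 are assumed, indicative values.

K = 165.0    # keV * us^2, moderation constant for lead (assumed)
T0 = 0.3     # us, time offset from the injection pulse (assumed)

def energy_keV(t_us: float) -> float:
    """Mean neutron energy (keV) at slowing-down time t_us (microseconds)."""
    return K / (t_us + T0) ** 2

for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} us -> E ~ {energy_keV(t):10.4f} keV")
```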

  4. Effect of the size of experimental channels of the lead slowing-down spectrometer SVZ-100 (Institute for Nuclear Research, Moscow) on the moderation constant

    SciTech Connect

    Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M.; Ilic, R. D.

    2013-04-15

    Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10^3 to 10^4 times higher than that of time-of-flight (TOF) spectrometers. The high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross sections for samples with masses of about several micrograms. These features define a niche for the application of LSD spectrometers in measuring neutron cross sections of elements hardly available in macroscopic amounts—in particular, actinides. A mathematical simulation of the parameters of the SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive in calculations to the size and shape of the detecting volumes and, hence, to the real size of the experimental channels of the LSD spectrometer.

  5. Critical slowing down and elastic anomaly of uniaxial ferroelectric Ca0.28Ba0.72Nb2O6 crystals with tungsten bronze structure

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Matsumoto, K.; Dec, J.; Łukasiewicz, T.; Kleemann, W.; Kojima, S.

    2014-08-01

    The ferroelectric phase transition of uniaxial Ca0.28Ba0.72Nb2O6 single crystals with a moderate effective charge disorder was investigated by Brillouin scattering to clarify the dynamic properties. In the tetragonal paraelectric phase, a remarkable softening of the sound velocity of the longitudinal acoustic mode and a significant increase in the sound attenuation were observed close to the Curie temperature T_C = 527 K. The intermediate temperature T* ≈ 640 K and the Burns temperature T_B ≈ 790 K were determined from the temperature variation in the sound attenuation. The intense broad central peak (CP) caused by polarization and strain fluctuations due to polar nanoregions was clearly observed in the vicinity of T_C. The relaxation time determined by the CP width clearly shows critical slowing down towards T_C, reflecting a weakly first-order phase transition under weak random fields.
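
    For a Debye relaxation, a Lorentzian central peak of full width Gamma maps to a relaxation time tau = 1/(pi * Gamma), which is presumably the kind of conversion behind the slowing-down curve above; a small sketch with invented numbers:

        import numpy as np

        def relaxation_time_ps(fwhm_ghz):
            # tau = 1 / (pi * FWHM); the 1e3 factor converts 1/GHz to ps.
            return 1.0e3 / (np.pi * fwhm_ghz)

        for fwhm in (200.0, 50.0, 10.0):   # CP narrows on approaching T_C
            print(f"FWHM = {fwhm:6.1f} GHz -> tau = {relaxation_time_ps(fwhm):6.2f} ps")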

  6. Slow-down or speed-up of inter- and intra-cluster diffusion of controversial knowledge in stubborn communities based on a small world network

    NASA Astrophysics Data System (ADS)

    Ausloos, Marcel

    2015-06-01

    Diffusion of knowledge is expected to be huge when agents are open minded. The report concerns a more difficult diffusion case, when communities are made of stubborn agents. Communities having markedly different opinions are, for example, the Neocreationist and Intelligent Design Proponents (IDP), on one hand, and the Darwinian Evolution Defenders (DED), on the other hand. The case of knowledge diffusion within such communities is studied here on a network based on an adjacency matrix built from time-ordered selected quotations of agents, covering both inter- and intra-community links. The network is intrinsically directed and not necessarily reciprocal. Thus, the adjacency matrices have complex eigenvalues and the eigenvectors present complex components. A quantification of the slow-down or speed-up effects of information diffusion in such temporal networks, with non-Markovian contact sequences, can be made by comparing the real time-dependent (directed) network to its counterpart, the time-aggregated (undirected) network, which has real eigenvalues. In order to do so, small-world networks which contain an odd number of nodes are studied and compared to similar networks with an even number of nodes. It is found that (i) the diffusion of knowledge is more difficult on the largest networks; (ii) the network size influences the slowing-down or speeding-up of the diffusion process. Interestingly, it is observed that (iii) the diffusion of knowledge is slower in IDP and faster in DED communities. It is suggested that the finding can be "rationalized" if some "scientific quality" and "publication habit" is attributed to the agents, as common sense would guess. This finding opens some discussion toward tying scientific knowledge to belief.
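
    The directed-versus-aggregated comparison can be illustrated in a few lines of Python. The "diffusion speed" proxy used below (the spectral gap of the random-walk matrix) is a stand-in chosen for brevity, not the paper's exact measure:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 15                                    # odd node count, as in the study
        A = (rng.random((n, n)) < 0.2).astype(float)
        np.fill_diagonal(A, 0.0)                  # directed, not necessarily reciprocal
        A_agg = np.maximum(A, A.T)                # time-aggregated, undirected version

        def spectral_gap(adj):
            """Gap between the two largest eigenvalue moduli of the walk matrix."""
            P = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-12)
            mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
            return mags[0] - mags[1]

        print("directed spectrum complex?", np.iscomplexobj(np.linalg.eigvals(A)))
        print("gap (directed)  :", spectral_gap(A))
        print("gap (aggregated):", spectral_gap(A_agg))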

  7. Slow-downs and speed-ups of India-Eurasia convergence since ˜20Ma: Data-noise, uncertainties and dynamic implications

    NASA Astrophysics Data System (ADS)

    Iaffaldano, Giampiero; Bodin, Thomas; Sambridge, Malcolm

    2013-04-01

    India-Somalia and North America-Eurasia relative motions since Early Miocene (˜20 Ma) have been recently reconstructed at unprecedented temporal resolution (<1 Myr) from magnetic surveys of the Carlsberg and northern Mid-Atlantic Ridges. These new datasets revamped interest in the convergence of India relative to Eurasia, which is obtained from the India-Somalia-Nubia-North America-Eurasia plate circuit. Unless finite rotations are arbitrarily smoothed through time, however, the reconstructed kinematics (i.e. stage Euler vectors) appear to be surprisingly unusual over the past ˜20 Myr. In fact, the Euler pole for the India-Eurasia rigid motion scattered erratically over a broad region, while the associated angular velocity underwent sudden increases and decreases. Consequently, convergence across the Himalayan front featured significant speed-ups as well as slow-downs with almost no consistent trend. Arguably, this pattern arises from the presence of data-noise, which biases kinematic reconstructions—particularly at high temporal resolution. The rapid and important India-Eurasia plate-motion changes reconstructed since Early Miocene are likely to be of apparent nature, because they cannot result even from the most optimistic estimates of torques associated, for instance, with the descent of the Indian slab into Earth's mantle. Our previous work aimed at reducing noise in finite-rotation datasets via an expanded Bayesian formulation, which offers several advantages over arbitrary smoothing methods. Here we build on this advance and revise the India-Eurasia kinematics since ˜20 Ma, accounting also for three alternative histories of rifting in Africa. We find that India-Eurasia kinematics are simpler and, most importantly, geodynamically plausible upon noise reduction. Convergence across the Himalayan front overall decreased until ˜10 Ma, but then systematically increased, albeit moderately, towards the present-day. We test this with global dynamic models of the coupled mantle/lithosphere system.

  8. Slow-downs and speed-ups of India-Eurasia convergence since ~20 Ma: Data-noise, uncertainties and dynamic implications

    NASA Astrophysics Data System (ADS)

    Iaffaldano, G.; Bodin, T.; Sambridge, M.

    2012-12-01

    India-Somalia and North America-Eurasia relative motions since Early Miocene (~20 Ma) have been recently reconstructed at unprecedented temporal resolution from magnetic surveys of the Carlsberg and northern Mid-Atlantic Ridges. These new datasets revamped interest in the convergence of India relative to Eurasia, which is obtained from the India-Somalia-Nubia-North America-Eurasia plate circuit. Unless finite rotations are arbitrarily smoothed through time, however, the reconstructed kinematics (i.e. stage Euler vectors) appear to be surprisingly unusual over the past ~20 Myr. In fact, the Euler pole for the India-Eurasia rigid motion scattered erratically over a broad region, while the associated angular velocity underwent sudden increases and decreases. As a consequence, convergence across the Himalayan front featured significant speed-ups as well as slow-downs with almost no consistent trend. Arguably, this pattern arises from the presence of data-noise that biases kinematic reconstructions, particularly at high temporal resolution. The rapid and important India-Eurasia plate-motion changes reconstructed since Early Miocene are likely to be of apparent nature, because they cannot result even from the most optimistic estimates of torques associated, for instance, with the descent of the Indian slab into Earth's mantle. Our recent work aimed at reducing noise in finite-rotation datasets via an expanded Bayesian formulation, which offers several advantages over arbitrary smoothing methods. Here we build on this advance and revise the India-Eurasia kinematics since ~20 Ma, accounting also for three alternative histories of rifting in Africa. We find that India-Eurasia kinematics are simpler and, most importantly, geodynamically plausible upon noise reduction. Convergence across the Himalayan front decreased systematically until ~10 Ma, but then increased moderately until the present-day. We test this with global dynamic models of the coupled mantle/lithosphere system.

  9. Information slows down hierarchy growth.

    PubMed

    Czaplicka, Agnieszka; Suchecki, Krzysztof; Miñano, Borja; Trias, Miquel; Hołyst, Janusz A

    2014-06-01

    We consider models of growing multilevel systems wherein the growth process is driven by rules of tournament selection. A system can be conceived as an evolving tree with a new node being attached to a contestant node at the best hierarchy level (a level nearest to the tree root). The proposed evolution reflects limited information on system properties available to new nodes. It can also be expressed in terms of population dynamics. Two models are considered: a constant tournament (CT) model wherein the number of tournament participants is constant throughout system evolution, and a proportional tournament (PT) model where this number increases proportionally to the growing size of the system itself. The results of analytical calculations based on a rate equation fit well to numerical simulations for both models. In the CT model all hierarchy levels emerge, but the birth time of each consecutive hierarchy level increases exponentially or faster. The number of nodes at the first hierarchy level grows logarithmically in time, while the size of the last, "worst" hierarchy level oscillates quasi-log-periodically. In the PT model, the occupations of the first two hierarchy levels increase linearly, but worse hierarchy levels either do not emerge at all or appear only by chance in the early stage of system evolution and then stop growing altogether. The results allow us to conclude that information available to each new node in tournament dynamics restrains the emergence of new hierarchy levels and that it is the absolute amount of information, not the relative amount, which governs such behavior.
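
    The constant-tournament growth rule is simple enough to simulate directly; a toy version follows (parameter names are ours, not the paper's):

        import random

        def grow_ct_tree(n_nodes=10000, k=3, seed=1):
            """CT model: attach each new node to the best of k random contestants."""
            random.seed(seed)
            levels = [0]                           # node 0 is the root, level 0
            for _ in range(n_nodes - 1):
                contestants = [random.randrange(len(levels)) for _ in range(k)]
                parent = min(contestants, key=lambda i: levels[i])
                levels.append(levels[parent] + 1)  # child sits one level below
            return levels

        levels = grow_ct_tree()
        print("deepest hierarchy level:", max(levels))
        print("nodes at the first level:", levels.count(1))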

  10. Information slows down hierarchy growth

    NASA Astrophysics Data System (ADS)

    Czaplicka, Agnieszka; Suchecki, Krzysztof; Miñano, Borja; Trias, Miquel; Hołyst, Janusz A.

    2014-06-01

    We consider models of growing multilevel systems wherein the growth process is driven by rules of tournament selection. A system can be conceived as an evolving tree with a new node being attached to a contestant node at the best hierarchy level (a level nearest to the tree root). The proposed evolution reflects limited information on system properties available to new nodes. It can also be expressed in terms of population dynamics. Two models are considered: a constant tournament (CT) model wherein the number of tournament participants is constant throughout system evolution, and a proportional tournament (PT) model where this number increases proportionally to the growing size of the system itself. The results of analytical calculations based on a rate equation fit well to numerical simulations for both models. In the CT model all hierarchy levels emerge, but the birth time of each consecutive hierarchy level increases exponentially or faster. The number of nodes at the first hierarchy level grows logarithmically in time, while the size of the last, "worst" hierarchy level oscillates quasi-log-periodically. In the PT model, the occupations of the first two hierarchy levels increase linearly, but worse hierarchy levels either do not emerge at all or appear only by chance in the early stage of system evolution and then stop growing altogether. The results allow us to conclude that information available to each new node in tournament dynamics restrains the emergence of new hierarchy levels and that it is the absolute amount of information, not the relative amount, which governs such behavior.

  11. Why does diversification slow down?

    PubMed

    Moen, Daniel; Morlon, Hélène

    2014-04-01

    Studies of phylogenetic diversification often show evidence for slowdowns in diversification rates over the history of clades. Recent studies seeking biological explanations for this pattern have emphasized the role of niche differentiation, as in hypotheses of adaptive radiation and ecological limits to diversity. Yet many other biological explanations might underlie diversification slowdowns. In this paper, we focus on the geographic context of diversification, environment-driven bursts of speciation, failure of clades to keep pace with a changing environment, and protracted speciation. We argue that, despite being currently underemphasized, these alternatives represent biologically plausible explanations that should be considered along with niche differentiation. Testing the importance of these alternative hypotheses might yield fundamentally different explanations for what influences species richness within clades through time.

  12. Decline of deep and bottom water ventilation and slowing down of anthropogenic carbon storage in the Weddell Sea, 1984-2011

    NASA Astrophysics Data System (ADS)

    Huhn, Oliver; Rhein, Monika; Hoppema, Mario; van Heuven, Steven

    2013-06-01

    We use a 27-year-long time series of repeated transient tracer observations to investigate the evolution of the ventilation time scales and the related content of anthropogenic carbon (C_ant) in deep and bottom water in the Weddell Sea. This time series consists of chlorofluorocarbon (CFC) observations from 1984 to 2008, together with first combined CFC and sulphur hexafluoride (SF6) measurements from 2010/2011 along the Prime Meridian in the Antarctic Ocean and across the Weddell Sea. Applying the Transit Time Distribution (TTD) method, we find that all deep water masses in the Weddell Sea have been continually growing older and getting less ventilated during the last 27 years. The decline of the ventilation rate of Weddell Sea Bottom Water (WSBW) and Weddell Sea Deep Water (WSDW) along the Prime Meridian is on the order of 15-21%; the Warm Deep Water (WDW) ventilation rate declined much faster, by 33%. About 88-94% of the age increase in WSBW near its source regions (1.8-2.4 years per year) is explained by the age increase of WDW (4.5 years per year). As a consequence of the aging, the C_ant increase in the deep and bottom water formed in the Weddell Sea slowed down by 14-21% over the period of observations.
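
    The TTD idea can be sketched numerically: an interior tracer concentration is the surface-water history convolved with a transit time distribution, here taken in the common inverse-Gaussian (Waugh-Hall) form. The "surface history" ramp below is invented for illustration:

        import numpy as np

        def inverse_gaussian_ttd(t, gamma, delta):
            """IG transit time distribution, mean age gamma, width delta (years)."""
            t = np.asarray(t, dtype=float)
            amp = np.sqrt(gamma**3 / (4.0 * np.pi * delta**2 * t**3))
            return amp * np.exp(-gamma * (t - gamma) ** 2 / (4.0 * delta**2 * t))

        years = np.arange(1.0, 200.0)             # transit times (years)
        ttd = inverse_gaussian_ttd(years, gamma=40.0, delta=40.0)
        ttd /= ttd.sum()                          # normalise the discrete TTD

        surface = np.clip((2011.0 - years) - 1950.0, 0.0, None)  # toy CFC ramp
        print("toy interior concentration:", round(float(ttd @ surface), 2))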

  13. Some new results on electron transport in the atmosphere. [Monte Carlo calculation of penetration, diffusion, and slowing down of electron beams in air

    NASA Technical Reports Server (NTRS)

    Berger, M. J.; Seltzer, S. M.; Maeda, K.

    1972-01-01

    The penetration, diffusion and slowing down of electrons in a semi-infinite air medium has been studied by the Monte Carlo method. The results are applicable to the atmosphere at altitudes up to 300 km. Most of the results pertain to monoenergetic electron beams injected into the atmosphere at a height of 300 km, either vertically downwards or with a pitch-angle distribution isotropic over the downward hemisphere. Some results were also obtained for various initial pitch angles between 0 deg and 90 deg. Information has been generated concerning the following topics: (1) the backscattering of electrons from the atmosphere, expressed in terms of backscattering coefficients, angular distributions and energy spectra of reflected electrons, for incident energies T0 between 2 keV and 2 MeV; (2) energy deposition by electrons as a function of the altitude, down to 80 km, for T0 between 2 keV and 2 MeV; (3) the corresponding energy deposition by electron-produced bremsstrahlung, down to 30 km; (4) the evolution of the electron flux spectrum as a function of the atmospheric depth, for T0 between 2 keV and 20 keV. Energy deposition results are given for incident electron beams with exponential and power-exponential spectra.

  14. Asparagine slows down the breakdown of storage lipid and degradation of autophagic bodies in sugar-starved embryo axes of germinating lupin seeds.

    PubMed

    Borek, Sławomir; Paluch-Lubawa, Ewelina; Pukacka, Stanisława; Pietrowska-Borek, Małgorzata; Ratajczak, Lech

    2017-02-01

    The research was conducted on embryo axes of yellow lupin (Lupinus luteus L.), white lupin (Lupinus albus L.) and Andean lupin (Lupinus mutabilis Sweet), which were isolated from imbibed seeds and cultured for 96 h in vitro under different conditions of carbon and nitrogen nutrition. Isolated embryo axes were fed with 60 mM sucrose or were sugar-starved. The effect of 35 mM asparagine (a central amino acid in the metabolism of germinating lupin seeds) and 35 mM nitrate (used as an inorganic nitrogen source) on growth, storage lipid breakdown and autophagy was investigated. The sugar-starved isolated embryo axes contained more total lipid than axes fed with sucrose, and the content of this storage compound was even higher in sugar-starved isolated embryo axes fed with asparagine. Ultrastructural observations showed that asparagine significantly slowed down decomposition of autophagic bodies, and this allowed detailed analysis of their content. We found peroxisomes inside autophagic bodies in cells of sugar-starved Andean lupin embryo axes fed with asparagine, which led us to conclude that peroxisomes may be degraded during autophagy in sugar-starved isolated lupin embryo axes. One reason for the slower degradation of autophagic bodies was the markedly lower lipolytic activity in axes fed with asparagine.

  15. Antihypertensive treatment with cerebral hemodynamics monitoring by ultrasonography in elderly hypertensives without a history of stroke may prevent or slow down cognitive decline. A pending issue.

    PubMed

    Hadjiev, Dimiter I; Mineva, Petya P

    2011-03-01

    The role of antihypertensive therapy in preventing cognitive disorders in elderly persons without a history of stroke is still a matter of debate. This article focuses on the pathogenesis of vascular cognitive disorders in hypertension and on the impact of antihypertensive treatment in their prevention. Cerebral white matter lesions, caused by small vessel disease and cerebral hypoperfusion, have been found in the majority of elderly hypertensives. They correlate with cognitive disorders, particularly impairments of attention and executive functions. Excessive blood pressure lowering below a certain critical level in elderly patients with long-standing hypertension may increase the risk of further cerebral hypoperfusion because of disrupted cerebral blood flow autoregulation. As a result, worsening of the cognitive functions could occur, especially in cases with additional vascular risk factors. Five randomized, placebo-controlled trials have focused on the efficacy of antihypertensive treatments in preventing cognitive impairments in elderly patients without a prior cerebrovascular disease. Four of them have not found positive effects. We suggest that repeated neuropsychological assessments and ultrasonography for evaluation of carotid atherosclerosis, as well as cerebral hemodynamics monitoring, could adjust the antihypertensive therapy with the aim of decreasing the risk of cerebral hypoperfusion and preventing or slowing down cognitive decline in elderly hypertensives. Prospective studies are needed to confirm such a treatment strategy.

  16. Fission fragment mass and energy distributions as a function of incident neutron energy measured in a lead slowing-down spectrometer

    SciTech Connect

    Romano, C.; Danon, Y.; Block, R.; Thompson, J.; Blain, E.; Bond, E.

    2010-01-15

    A new method of measuring fission fragment mass and energy distributions as a function of incident neutron energy in the range from below 0.1 eV to 1 keV has been developed. The method involves placing a double-sided Frisch-gridded fission chamber in Rensselaer Polytechnic Institute's lead slowing-down spectrometer (LSDS). The high neutron flux of the LSDS allows for the measurement of the energy-dependent, neutron-induced fission cross sections simultaneously with the mass and kinetic energy of the fission fragments of various small samples. The samples may be isotopes that are not available in large quantities (submicrograms) or with small fission cross sections (microbarns). The fission chamber consists of two anodes shielded by Frisch grids on either side of a single cathode. The sample is located in the center of the cathode and is made by depositing small amounts of actinides on very thin films. The chamber was successfully tested and calibrated using 0.41 ± 0.04 ng of ²⁵²Cf and the resulting mass distributions were compared to those of previous work. As a proof of concept, the chamber was placed in the LSDS to measure the neutron-induced fission cross section and fragment mass and energy distributions of 25.3 ± 0.5 µg of ²³⁵U. Changes in the mass distributions as a function of incident neutron energy are evident and are examined using the multimodal fission mode model.

  17. Weighted Bergman kernels and virtual Bergman kernels

    NASA Astrophysics Data System (ADS)

    Roos, Guy

    2005-12-01

    We introduce the notion of "virtual Bergman kernel" and apply it to the computation of the Bergman kernel of "domains inflated by Hermitian balls", in particular when the base domain is a bounded symmetric domain.

  18. Measurement of Neutron-Induced Fission Cross Sections of ²²⁹Th and ²³¹Pa Using Linac-Driven Lead Slowing-Down Spectrometer

    SciTech Connect

    Kobayashi, Katsuhei; Yamamoto, Shuji; Lee, Samyol; Cho, Hyun-Je; Yamana, Hajimu; Moriyama, Hirotake; Fujita, Yoshiaki; Mitsugashira, Toshiaki

    2001-11-15

    Use is made of a back-to-back type of double fission chamber and an electron linear accelerator-driven lead slowing-down spectrometer to measure the neutron-induced fission cross sections of ²²⁹Th and ²³¹Pa below 10 keV relative to that of ²³⁵U. A measurement relative to the ¹⁰B(n,α) reaction is also made using a BF₃ counter at energies below 1 keV and normalized to the absolute value obtained by using the cross section of the ²³⁵U(n,f) reaction between 200 eV and 1 keV. The experimental data of the ²²⁹Th(n,f) reaction, which was measured by Konakhovich et al., show higher cross-section values, especially at energies of 0.1 to 0.4 eV. The data by Gokhberg et al. seem to be lower than the current measurement above 6 keV. Although the evaluated data in JENDL-3.2 are in general agreement with the measurement, the evaluation is higher from 0.25 to 5 eV and lower above 10 eV. The ENDF/B-VI data evaluated above 10 eV are also lower. The current thermal neutron-induced fission cross section at 0.0253 eV is 32.4 ± 10.7 b, which is in good agreement with results of Gindler et al., Mughabghab, and JENDL-3.2. The mean value of the ²³¹Pa(n,f) cross sections between 0.37 and 0.52 eV, which were measured by Leonard and Odegaarden, is close to the current measurement. The evaluated data in ENDF/B-VI are lower below 0.15 eV and higher above ≈30 eV. The ENDF/B-VI and JEF-2.2 evaluations are much higher above 1 keV. The JENDL-3.2 data are in general agreement with the measurement, although they are lower above ≈100 eV.

  19. Semisupervised kernel matrix learning by kernel propagation.

    PubMed

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples on which just a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves some space for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML into a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the comprehensive performance in SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) through solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.

  20. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
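
    The full-kernel-matrix bottleneck that AKCL and PAKCL attack is visible in a plain KCL sketch: prototypes are weighted sums of mapped samples, and every winner search touches the n-by-n kernel matrix. This is our illustration of the baseline, not the authors' accelerated algorithms:

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
        n, n_clusters, eta = len(X), 2, 0.1

        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / 2.0)                     # full kernel matrix: the bottleneck

        A = rng.random((n_clusters, n))
        A /= A.sum(1, keepdims=True)              # prototype coefficients
        for _ in range(5):
            for i in range(n):
                # ||phi(x_i) - w_c||^2 up to the constant k(x_i, x_i)
                dist = -2.0 * A @ K[:, i] + np.einsum("ci,ij,cj->c", A, K, A)
                c = int(np.argmin(dist))          # competitive winner
                A[c] *= (1.0 - eta)               # pull winner toward phi(x_i)
                A[c, i] += eta
        quad = np.einsum("ci,ij,cj->c", A, K, A)
        labels = np.argmin(-2.0 * A @ K + quad[:, None], axis=0)
        print("cluster sizes:", np.bincount(labels))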

  1. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information in very few features (often in just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. Both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
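
    The entropy-based sorting at the heart of KECA is compact to write down; the sketch below omits OKECA's extra ICA rotation and gradient-ascent step:

        import numpy as np

        def keca_components(X, sigma=1.0, n_components=2):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-d2 / (2.0 * sigma**2))          # Gaussian kernel matrix
            lam, E = np.linalg.eigh(K)
            lam, E = lam[::-1], E[:, ::-1]              # descending eigenvalues
            lam = np.clip(lam, 0.0, None)
            # Renyi entropy contribution of each eigenpair, not its variance
            entropy = (np.sqrt(lam) * E.sum(axis=0)) ** 2
            idx = np.argsort(entropy)[::-1][:n_components]
            return E[:, idx] * np.sqrt(lam[idx])

        X = np.random.default_rng(0).normal(size=(100, 3))
        print(keca_components(X).shape)                  # (100, 2)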

  2. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: "Current status of user-level sparse BLAS"; "Current status of the sparse BLAS toolkit"; and "Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit".

  3. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  4. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  5. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, collectively named KAPA here. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
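
    KAPA generalises the kernel least-mean-square recursion, so a compact KLMS sketch conveys the flavour; step size and kernel width below are arbitrary choices:

        import numpy as np

        class KLMS:
            def __init__(self, step=0.5, sigma=1.0):
                self.step, self.sigma = step, sigma
                self.centers, self.coeffs = [], []

            def _kernel(self, x, y):
                return np.exp(-np.sum((x - y) ** 2) / (2 * self.sigma**2))

            def predict(self, x):
                return sum(a * self._kernel(c, x)
                           for a, c in zip(self.coeffs, self.centers))

            def update(self, x, d):
                err = d - self.predict(x)            # a-priori prediction error
                self.centers.append(np.asarray(x, float))
                self.coeffs.append(self.step * err)  # grow expansion by one term
                return err

        rng = np.random.default_rng(0)
        f = KLMS()
        for _ in range(200):
            x = rng.uniform(-3, 3, size=1)
            f.update(x, np.sin(x[0]))                # learn y = sin(x) online
        print(f.predict(np.array([1.0])), "vs", np.sin(1.0))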

  6. The two isomers of HDTIC compounds from Astragali Radix slow down telomere shortening rate via attenuating oxidative stress and increasing DNA repair ability in human fetal lung diploid fibroblast cells.

    PubMed

    Wang, Peichang; Zhang, Zongyu; Sun, Ying; Liu, Xinwen; Tong, Tanjun

    2010-01-01

    4-Hydroxy-5-hydroxymethyl-[1,3]dioxolan-2,6'-spirane-5',6',7',8'-tetrahydro-indolizine-3'-carbaldehyde (HDTIC)-1 and HDTIC-2 are two isomers extracted from Astragalus membranaceus (Fisch) Bunge Var. mongholicus (Bge) Hsiao. Our previous study had demonstrated that they could extend the lifespan of human fetal lung diploid fibroblasts (2BS). To investigate the mechanisms of the HDTIC-induced delay of replicative senescence, in this study we assessed the effects of these two compounds on telomere shortening rate and DNA repair ability in 2BS cells. The telomere shortening rates of the cells cultured with HDTIC-1 or HDTIC-2 were 31.5 and 41.1 bp with each division, respectively, which were much less than that of the control cells (71.1 bp/PD). We also found that 2BS cells pretreated with HDTIC-1 or HDTIC-2 had a significant reduction in DNA damage after exposure to 200 µM H₂O₂ for 5 min. Moreover, the 100 µM H₂O₂-induced DNA damage was significantly repaired after the damaged cells were continually cultured with HDTIC for 1 h. These results suggest that HDTIC compounds slow down the telomere shortening rate of 2BS cells, which is mainly due to the biological properties of the compounds, including the reduction of DNA damage and the improvement of DNA repair ability. In addition, the slow-down of the telomere shortening rate, the reduction of DNA damage, and the improvement of DNA repair ability induced by HDTIC may be responsible for their delay of replicative senescence.

  7. Multiple collaborative kernel tracking.

    PubMed

    Fan, Zhimin; Yang, Ming; Wu, Ying

    2007-07-01

    Those motion parameters that cannot be recovered from image measurements are unobservable in the visual dynamic system. This paper studies this important issue of singularity in the context of kernel-based tracking and presents a novel approach that is based on a motion field representation which employs redundant but sparsely correlated local motion parameters instead of compact but uncorrelated global ones. This approach makes it easy to design fully observable kernel-based motion estimators. This paper shows that these high-dimensional motion fields can be estimated efficiently by the collaboration among a set of simpler local kernel-based motion estimators, which makes the new approach very practical.

  8. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  9. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  10. Polygamy slows down population divergence in shorebirds

    USGS Publications Warehouse

    Jackson, Josephine D'Urban; dos Remedios, Natalie; Maher, Kathryn; Zefania, Sama; Haig, Susan M.; Oyler-McCance, Sara J.; Blomqvist, Donald; Burke, Terry; Bruford, Michael W.; Székely, Tamás; Küpper, Clemens

    2017-01-01

    Sexual selection may act as a promotor of speciation since divergent mate choice and competition for mates can rapidly lead to reproductive isolation. Alternatively, sexual selection may also retard speciation since polygamous individuals can access additional mates by increased breeding dispersal. High breeding dispersal should hence increase gene flow and reduce diversification in polygamous species. Here, we test how polygamy predicts diversification in shorebirds using genetic differentiation and subspecies richness as proxies for population divergence. Examining microsatellite data from 79 populations in 10 plover species (Genus: Charadrius) we found that polygamous species display significantly less genetic structure and weaker isolation-by-distance effects than monogamous species. Consistent with this result, a comparative analysis including 136 shorebird species showed significantly fewer subspecies for polygamous than for monogamous species. By contrast, migratory behavior neither predicted genetic differentiation nor subspecies richness. Taken together, our results suggest that dispersal associated with polygamy may facilitate gene flow and limit population divergence. Therefore, intense sexual selection, as occurs in polygamous species, may act as a brake rather than an engine of speciation in shorebirds. We discuss alternative explanations for these results and call for further studies to understand the relationships between sexual selection, dispersal, and diversification.

  11. [Demography: can growth be slowed down?].

    PubMed

    1990-01-01

    The UN Fund for Population Activities report on the status of world population in 1990 is particularly unsettling because it indicates that fertility is not declining as rapidly as had been predicted. The world population of some 5.3 billion is growing by 90-100 million per year. 6 years ago the growth rate appeared to be declining everywhere except in Africa and some regions of South Asia. Hopes that the world population would stabilize at around 10.2 billion by the end of the 21st century now appear unrealistic. Some countries such as the Philippines, India, and Morocco which had some success in slowing growth in the 1960s and 70s have seen a significant deceleration in the decline. Growth rates in several African countries are already 2.7% per year and increasing. It is projected that Africa's population will reach 1.581 billion by 2025. Already there are severe shortages of arable land in some overwhelmingly agricultural countries like Rwanda and Burundi, and malnutrition is widespread on the continent. Between 1979-81 and 1986-87, cereal production declined in 25 African countries out of 43 for which the Food and Agriculture Organization has data. The urban population of developing countries is increasing at 3.6%/year. It grew from 285 million in 1950 to 1.384 billion today and is projected at 4.050 billion in 2050. Provision of water, electricity, and sanitary services will be very difficult. From 1970-88 the number of urban households without potable water increased from 138 million to 215 million. It is not merely the quality of life that is menaced by constant population growth, but also the very future of the earth as a habitat, because of the degradation of soils and forests and resulting global warming. 6-7 million hectares of agricultural land are believed to be lost to erosion each year. Deforestation is a principal cause of soil erosion. Each year more than 11 million hectares of tropical forest and forested zones are stripped, in addition to some 4.4 million hectares selectively harvested for lumber. Deforestation contributes to global warming and to deterioration of the ozone layer. Consequences of global warming by the middle of the next century may include desertification of entire countries, raising of the level of the oceans, and submersion of certain countries. To avert demographic and ecologic disaster, the geographic and financial access of women in developing countries to contraception should be improved, and some neglected groups such as adolescents should be brought into family planning programs. The condition of women must be improved so that they have access to a source of status other than motherhood.

  12. Polygamy slows down population divergence in shorebirds.

    PubMed

    D'Urban Jackson, Josephine; Dos Remedios, Natalie; Maher, Kathryn H; Zefania, Sama; Haig, Susan; Oyler-McCance, Sara; Blomqvist, Donald; Burke, Terry; Bruford, Michael W; Székely, Tamás; Küpper, Clemens

    2017-02-24

    Sexual selection may act as a promotor of speciation since divergent mate choice and competition for mates can rapidly lead to reproductive isolation. Alternatively, sexual selection may also retard speciation since polygamous individuals can access additional mates by increased breeding dispersal. High breeding dispersal should hence increase gene flow and reduce diversification in polygamous species. Here we test how polygamy predicts diversification in shorebirds using genetic differentiation and subspecies richness as proxies for population divergence. Examining microsatellite data from 79 populations in ten plover species (Genus: Charadrius) we found that polygamous species display significantly less genetic structure and weaker isolation-by-distance effects than monogamous species. Consistent with this result, a comparative analysis including 136 shorebird species showed significantly fewer subspecies for polygamous than for monogamous species. By contrast, migratory behaviour neither predicted genetic differentiation nor subspecies richness. Taken together, our results suggest that dispersal associated with polygamy may facilitate gene flow and limit population divergence. Therefore, intense sexual selection, as occurs in polygamous species, may act as a brake rather than an engine of speciation in shorebirds. We discuss alternative explanations for these results and call for further studies to understand the relationships between sexual selection, dispersal and diversification.

  13. Time for bacteria to slow down.

    PubMed

    Armitage, Judith P; Berry, Richard M

    2010-04-02

    The speed of the bacterial flagellar motor is thought to be regulated by structural changes in the motor. Two new studies, Boehm et al. (2010) in this issue and Paul et al. (2010) in Molecular Cell, now show that cyclic di-GMP also regulates flagellar motor speed through interactions between the cyclic di-GMP binding protein YcgR and the motor proteins.

  14. Words can slow down category learning.

    PubMed

    Brojde, Chandra L; Porter, Chelsea; Colunga, Eliana

    2011-08-01

    Words have been shown to influence many cognitive tasks, including category learning. Most demonstrations of these effects have focused on instances in which words facilitate performance. One possibility is that words augment representations, predicting an across-the-board benefit of words during category learning. We propose instead that words shift attention to dimensions that have been historically predictive in similar contexts. Under this account, there should be cases in which words are detrimental to performance. The results from two experiments show that words impair learning of object categories under some conditions. Experiment 1 shows that words hurt performance when learning to categorize by texture. Experiment 2 shows that words also hurt when learning to categorize by brightness, leading learners to selectively attend to shape when both shape and hue could be used to correctly categorize stimuli. We suggest that both the positive and negative effects of words have developmental origins in the history of word usage while learning categories.

  15. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code will be discussed for accessing kernel information. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  16. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  17. Analytical continuous slowing down model for nuclear reaction cross-section measurements by exploitation of stopping for projectile energy scanning and results for 13C(3He,α)12C and 13C(3He,p)15N

    NASA Astrophysics Data System (ADS)

    Möller, S.

    2017-03-01

    Ion beam analysis is a set of precise, calibration-free and non-destructive methods for determining near-surface concentrations of potentially all elements and isotopes in a single measurement. For the determination of concentrations, the reaction cross-section of the projectile with the target has to be known, in general at the primary beam energy and all energies below it. To reduce the experimental effort of cross-section measurements, a new method is presented here. The method is based on the projectile energy reduction when passing through the matter of thick targets. The continuous slowing down approximation is used to determine cross-sections from a thick target at projectile energies below the primary energy by backward calculation of the measured product spectra. Results for 12C(3He,p)14N below 4.5 MeV are in rough agreement with literature data and reproduce the measured spectra. New data for reactions of 3He with 13C are acquired using the new technique. The applied approximations and further applications are discussed.
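
    The backward-calculation step rests on the CSDA relation dE/dx = -S(E): once the stopping power is known, depth in the thick target maps one-to-one onto projectile energy. A schematic integration with a made-up power-law stopping model (real work would use tabulated stopping data):

        import numpy as np

        def stopping_power(E_keV):
            """Toy stopping power in keV/um; a placeholder, not 3He-in-C data."""
            return 50.0 / np.sqrt(E_keV)

        def energy_vs_depth(E0_keV, dx_um=0.05, n_steps=100_000):
            """Euler-integrate dE/dx = -S(E) from the surface inward."""
            depth, energy = [0.0], [E0_keV]
            for _ in range(n_steps):
                E = energy[-1] - stopping_power(energy[-1]) * dx_um
                if E <= 0.0:
                    break                      # projectile has stopped
                depth.append(depth[-1] + dx_um)
                energy.append(E)
            return np.array(depth), np.array(energy)

        x, E = energy_vs_depth(4500.0)         # 4.5 MeV beam, as in the paper
        print(f"toy CSDA range: {x[-1]:.0f} um")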

  18. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  19. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  20. Discrete beta dose kernel matrices for nuclides applied in targeted radionuclide therapy (TRT) calculated with MCNP5

    SciTech Connect

    Reiner, Dora; Blaickner, Matthias; Rattay, Frank

    2009-11-15

    radionuclides applied in TRT. In contrast to analytical dose point kernels, the discrete kernels elude the problem of overestimation near the source and take into account energy depositions which occur beyond the range of the continuous-slowing-down approximation (CSDA range). Recalculation of the 1×1×1 mm³ kernels to other dose kernels with varying voxel dimensions, cubic or noncubic, is shown to be easily manageable and thereby provides a resolution-independent system of dose calculation.

  1. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2015-12-22

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels. After parameter learning, the kernel saliencies of the irrelevant kernels go to zero; this makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  2. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels. After parameter learning, the kernel saliencies of the irrelevant kernels go to zero; this makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  3. Kernel Optimization in Discriminant Analysis

    PubMed Central

    You, Di; Hamsici, Onur C.; Martinez, Aleix M.

    2011-01-01

    Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results using a large number of databases and classifiers demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072

  4. Kernel machine SNP-set testing under multiple candidate kernels.

    PubMed

    Wu, Michael C; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M; Harmon, Quaker E; Lin, Xinyi; Engel, Stephanie M; Molldrem, Jeffrey J; Armistead, Paul M

    2013-04-01

    Joint testing for the cumulative effect of multiple single-nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM)-testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori because this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest P-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power vs. using the best candidate kernel.
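
    The composite-kernel strategy reduces, in its simplest form, to testing with a weighted sum of candidate kernel matrices; a sketch with invented data and equal weights (the paper's perturbation-based p-value machinery is omitted):

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 200, 12
        G = rng.integers(0, 3, size=(n, p)).astype(float)  # genotypes 0/1/2
        y = 0.4 * G[:, 0] + rng.normal(size=n)             # toy continuous trait

        def linear_kernel(G):
            return G @ G.T

        def quadratic_kernel(G):
            return (1.0 + G @ G.T) ** 2

        def ibs_kernel(G):   # identity-by-state similarity in [0, 1]
            return 1.0 - np.abs(G[:, None, :] - G[None, :, :]).sum(-1) / (2.0 * p)

        candidates = [linear_kernel(G), quadratic_kernel(G), ibs_kernel(G)]
        K = sum(Kc for Kc in candidates) / len(candidates)  # composite kernel

        resid = y - y.mean()              # null-model residuals (intercept only)
        Q = float(resid @ K @ resid)      # kernel-machine score-type statistic
        print("composite-kernel statistic Q =", round(Q, 1))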

  5. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored...

  7. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel,...

  8. Kernel phase and kernel amplitude in Fizeau imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-12-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
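
    The construction behind kernel phase is linear algebra: if pupil aberrations enter the measured Fourier phases through a transfer matrix A, any matrix K spanning the left null space of A yields self-calibrating observables. The small random A below stands in for a real pupil model:

        import numpy as np

        def kernel_matrix(A, tol=1e-10):
            """Rows spanning the left null space of A, via SVD."""
            U, s, _ = np.linalg.svd(A)
            rank = int(np.sum(s > tol))
            return U[:, rank:].T          # K @ A == 0 up to round-off

        rng = np.random.default_rng(1)
        A = rng.normal(size=(10, 6))      # 10 uv-phases driven by 6 pupil modes
        K = kernel_matrix(A)

        phi_pupil = rng.normal(size=6)            # arbitrary pupil aberration
        phi_object = 0.01 * rng.normal(size=10)   # faint target signal
        phi_uv = A @ phi_pupil + phi_object
        print("pupil term cancelled:",
              np.allclose(K @ phi_uv, K @ phi_object))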

  9. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  10. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  11. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is, on one hand, a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013

  12. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  13. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  14. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle...

  15. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
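    To make the computational appeal concrete, the sketch below approximates a stationary kernel matrix on a regular 1-D grid by a circulant matrix, whose matrix-vector products cost O(n log n) via the FFT. This is a hypothetical one-level simplification (the paper uses multilevel circulant matrices, and its selection criterion is not reproduced here); the grid size and kernel parameter are made up.

```python
import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)   # regular 1-D grid
gamma = 20.0                                   # illustrative kernel width

# First column of a circulant approximation to a Gaussian kernel matrix:
# wrapping the distances makes the matrix exactly circulant.
d = np.minimum(x, 1.0 - x)
c = np.exp(-gamma * d ** 2)

def circulant_matvec(c, v):
    """O(n log n) product with the circulant matrix whose first column is c,
    using the FFT diagonalization C = F^* diag(F c) F."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))

# Dense reference matrix built from the same wrapped distances.
D = np.abs(x[:, None] - x[None, :])
D = np.minimum(D, 1.0 - D)
K = np.exp(-gamma * D ** 2)

v = np.random.default_rng(0).standard_normal(n)
print(np.allclose(K @ v, circulant_matvec(c, v)))   # True
```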

  16. Dose point kernels in liquid water: an intra-comparison between GEANT4-DNA and a variety of Monte Carlo codes.

    PubMed

    Champion, C; Incerti, S; Perrot, Y; Delorme, R; Bordage, M C; Bardiès, M; Mascialino, B; Tran, H N; Ivanchenko, V; Bernal, M; Francis, Z; Groetz, J-E; Fromm, M; Campos, L

    2014-01-01

    Modeling radio-induced effects in biological media still requires accurate physics models describing, in detail, the interactions induced by all the charged particles present in the irradiated medium. These interactions include inelastic as well as elastic processes. To check the accuracy of the very low energy models recently implemented in the GEANT4 toolkit for modeling electron slowing-down in liquid water, the simulation of electron dose point kernels remains the preferred test. In this context, we report normalized radial dose profiles, for mono-energetic point sources, computed in liquid water using the very low energy "GEANT4-DNA" physics processes available in the GEANT4 toolkit. In the present study, we report an extensive intra-comparison with profiles obtained by a large selection of existing and well-documented Monte Carlo codes, namely EGSnrc, PENELOPE, CPA100, FLUKA and MCNPX.

  17. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) in a portable electrocardiograph based on a microcontroller platform. All of the medical device's digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uC/OS-II RTOS can be embedded. The decision to use the kernel rests on its benefits: the license for educational use and its built-in time control and peripheral management. The feasibility of its use in the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and for evaluating the resources used by each process. After this feasibility analysis, the cyclic code was migrated to a structure based on separate processes, or tasks, able to synchronize events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under an RTOS.

  18. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  19. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  20. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  1. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  2. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  3. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  4. Travel-Time and Amplitude Sensitivity Kernels

    DTIC Science & Technology

    2011-09-01

    amplitude sensitivity kernels shown in the lower panels concentrate about the corresponding eigenrays. Each 3D kernel exhibits a broad negative... in 2 and 3 dimensions have similar shapes to the corresponding travel-time sensitivity kernels (TSKs), centered about the respective eigenrays

  5. Adaptive wiener image restoration kernel

    SciTech Connect

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is then restored by spatial convolution of the image with a Wiener restoration kernel.
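    A minimal sketch of the Fourier-domain arithmetic behind such a restoration kernel, assuming a known point spread function and a constant noise-to-signal ratio; the patent's adaptive estimation of these quantities is not reproduced, and the function and parameter names are illustrative.

```python
import numpy as np

def wiener_restore(image, psf, nsr=0.01):
    """Wiener restoration; psf is assumed centered and the same shape as
    the image, nsr a constant noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))    # optical transfer function
    G = np.fft.fft2(image)                    # transform of the blurred image
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener restoration kernel
    return np.real(np.fft.ifft2(W * G))       # restored spatial image
```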

  6. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  7. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks for learning the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains compared with several shallow kernels on the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
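    The sketch below illustrates the recursive construction in its simplest form, assuming two elementary kernels and hand-fixed weights; the paper learns these weights (supervised, unsupervised, or semi-supervised), which is not reproduced here.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear(X, Y):
    return X @ Y.T

def deep_kernel(X, Y, w=(0.7, 0.3)):
    """Two-layer toy deep kernel: a weighted combination of elementary
    kernels followed by an element-wise exponential activation (element-wise
    exp of a p.s.d. kernel is again p.s.d. by the Schur product theorem)."""
    K1 = w[0] * rbf(X, Y) + w[1] * linear(X, Y)   # layer 1: combine kernels
    return np.exp(K1)                             # layer 2: activation

X = np.random.default_rng(1).standard_normal((5, 3))
K = deep_kernel(X, X)
print(np.all(np.linalg.eigvalsh((K + K.T) / 2) > -1e-9))   # p.s.d. check
```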

  8. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick, in contrast to the kernel trick. With this technique, the applicability of kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
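    A minimal sketch of the trick on training data, assuming a Gaussian kernel with made-up parameters: the eigendecomposition of the kernel matrix yields explicit feature vectors whose ordinary dot products reproduce the kernel values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
gamma = 0.3
K = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))  # RBF kernel

lam, U = np.linalg.eigh(K)            # K = U diag(lam) U^T
keep = lam > 1e-10                    # effective dimensionality
Y = U[:, keep] * np.sqrt(lam[keep])   # explicit features with Y @ Y.T = K

print(np.allclose(Y @ Y.T, K))        # True: dot products = kernel values

# A new point x maps to y = diag(lam)^(-1/2) U^T k(x), where k(x) holds the
# kernel evaluations of x against the training set.
```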

  9. Diffusion Map Kernel Analysis for Target Classification

    DTIC Science & Technology

    2010-06-01

    Gaussian and polynomial kernels are most familiar from support vector machines. The Laplacian and Rayleigh kernels were introduced previously in [7]. Benchmark data sets include the Cleveland Heart Disease data set (Clev. Heart), the Wisconsin Breast Cancer Original data set (Wisc. BC), and Sonar2 from the Shallow Water Acoustic Toolset [9]. On the Wisconsin Breast Cancer data, the Rayleigh kernel captures the embedding with an average PC of 77.3% and a slightly higher PFA than the Gaussian kernel.

  10. Comparison between an event-by-event Monte Carlo code, NOREC, and ETRAN for electron scaled point kernels between 20 keV and 1 MeV.

    PubMed

    Cho, Sang Hyun; Vassiliev, Oleg N; Horton, John L

    2007-03-01

    An event-by-event Monte Carlo code called NOREC, a substantially improved version of the Oak Ridge electron transport code (OREC), was released in 2003, after a number of modifications to OREC. In spite of some earlier work, the characteristics of the code have not been clearly shown so far, especially for a wide range of electron energies. Therefore, NOREC was used in this study to generate one of the popular dosimetric quantities, the scaled point kernel, for a number of electron energies between 0.02 and 1.0 MeV. Calculated kernels were compared with the best-known published kernels, based on a condensed history Monte Carlo code, ETRAN, to show not only the general agreement between the codes for the electron energy range considered but also possible differences between an event-by-event code and a condensed history code. There was general agreement between the kernels within about 5% up to a scaled distance of 0.7 r/r_0 for 100 keV and 1 MeV electrons, where r is the radial distance from the source to the dose point and r_0 is the continuous slowing down approximation (CSDA) range of a mono-energetic electron. For the same range of scaled distances, the discrepancies for 20 and 500 keV electrons were up to 6% and 12%, respectively; the disagreement was more pronounced for 500 keV electrons than for 20 keV electrons. The degree of disagreement for 500 keV electrons decreased when NOREC results were compared with published EGS4/PRESTA results, giving agreement similar to that at the other electron energies.

  11. Molecular Hydrodynamics from Memory Kernels

    NASA Astrophysics Data System (ADS)

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius.

  12. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  13. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  14. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...

  15. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  16. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  17. Bergman Kernel from Path Integral

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Klevtsov, Semyon

    2010-01-01

    We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory, and generalize it to supersymmetric quantum mechanics. One physics interpretation of this result is as an expansion of the projector of wave functions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is relevant for the quantum Hall effect in curved space, and for its higher dimensional generalizations. Other applications include the theory of coherent states, the study of balanced metrics, noncommutative field theory, and a conjecture on metrics in black hole backgrounds discussed in [24]. We give a short overview of these various topics. From a conceptual point of view, this expansion is noteworthy as it is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey et al short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry.

  18. Kernel current source density method.

    PubMed

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.
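    A heavily simplified, schematic version of the regression step, with placeholder Gaussian kernels and a made-up 1-D electrode layout; real kCSD derives its kernel and cross-kernel from a forward model of volume conduction, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
elec = np.linspace(0.0, 1.0, 16)                               # electrode positions
V = np.sin(2 * np.pi * elec) + 0.05 * rng.standard_normal(16)  # LFP snapshot

def gauss(a, b, w=0.1):
    """Placeholder Gaussian kernel between position sets a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * w ** 2))

lam = 1e-3                                       # ridge regularizer
K = gauss(elec, elec)                            # potential-space kernel
alpha = np.linalg.solve(K + lam * np.eye(elec.size), V)

x = np.linspace(0.0, 1.0, 200)                   # estimation grid
csd_est = gauss(x, elec, w=0.05) @ alpha         # cross-kernel @ weights
print(csd_est[:3])
```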

  19. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

    The detection of high-contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure-phase-like information can be extracted from any direct image, even one taken with a redundant aperture. These new phase-noise-immune observable quantities, called kernel phases, are determined a priori from knowledge of the geometry of the pupil only. Re-analysis of archival data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method, as it clearly detects and locates, with milliarcsecond precision, a known companion to a star at an angular separation less than the diffraction limit.
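    The linear algebra behind kernel phases can be illustrated schematically: if image-plane phases depend on pupil-plane phases through a transfer matrix A (here a made-up random matrix rather than one derived from a real pupil geometry), then any basis K of the left null space of A yields observables immune to pupil phase noise.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 12, 7                     # image-plane phases x pupil subapertures
A = rng.standard_normal((m, n))  # hypothetical phase transfer matrix

U, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
K = U[:, rank:].T                # rows span the left null space of A: K A = 0

pupil_phase = rng.standard_normal(n)          # instrumental aberration
target_phase = rng.standard_normal(m)         # astrophysical signal
measured = target_phase + A @ pupil_phase     # linearized image phases

print(np.allclose(K @ measured, K @ target_phase))   # True: noise removed
```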

  20. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that the proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  1. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that the proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.
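    As a hedged sketch of the second approximation route, the snippet below builds random Fourier features for a Gaussian kernel, so that a linear ranking model trained on the features can mimic a nonlinear kernel RankSVM; the pairwise squared-hinge objective and the truncated Newton solver are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def random_fourier_features(X, D=2000, gamma=0.5, seed=0):
    """Map z(x) such that z(x) . z(y) ~= exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(1).standard_normal((100, 5))
Z = random_fourier_features(X)
K_exact = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
print(np.abs(Z @ Z.T - K_exact).max())   # small; shrinks as D grows
```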

  2. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
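    For reference, Silverman's rule of thumb has the closed form h = 0.9 * min(sd, IQR/1.34) * n^(-1/5); the snippet below is a minimal sketch of that formula alone, not of the kernel equating machinery itself, and the score data are made up.

```python
import numpy as np

def silverman_bandwidth(x):
    """h = 0.9 * min(sd, IQR / 1.34) * n**(-1/5) for a Gaussian kernel."""
    x = np.asarray(x, dtype=float)
    q75, q25 = np.percentile(x, [75, 25])
    return 0.9 * min(x.std(ddof=1), (q75 - q25) / 1.34) * x.size ** -0.2

scores = np.random.default_rng(0).normal(25.0, 5.0, size=500)
print(silverman_bandwidth(scores))
```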

  3. Does time really slow down during a frightening event?

    PubMed

    Stetson, Chess; Fiesta, Matthew P; Eagleman, David M

    2007-12-12

    Observers commonly report that time seems to have moved in slow motion during a life-threatening event. It is unknown whether this is a function of increased time resolution during the event, or instead an illusion of remembering an emotionally salient event. Using a hand-held device to measure speed of visual perception, participants experienced free fall for 31 m before landing safely in a net. We found no evidence of increased temporal resolution, in apparent conflict with the fact that participants retrospectively estimated their own fall to last 36% longer than others' falls. The duration dilation during a frightening event, and the lack of concomitant increase in temporal resolution, indicate that subjective time is not a single entity that speeds or slows, but instead is composed of separable subcomponents. Our findings suggest that time-slowing is a function of recollection, not perception: a richer encoding of memory may cause a salient event to appear, retrospectively, as though it lasted longer.

  4. Slowing Down: Age-Related Neurobiological Predictors of Processing Speed

    PubMed Central

    Eckert, Mark A.

    2011-01-01

    Processing speed, or the rate at which tasks can be performed, is a robust predictor of age-related cognitive decline and an indicator of independence among older adults. This review examines evidence for neurobiological predictors of age-related changes in processing speed, which is guided in part by our source based morphometry findings that unique patterns of frontal and cerebellar gray matter predict age-related variation in processing speed. These results, together with the extant literature on morphological predictors of age-related changes in processing speed, suggest that specific neural systems undergo declines and as a result slow processing speed. Future studies of processing speed – dependent neural systems will be important for identifying the etiologies for processing speed change and the development of interventions that mitigate gradual age-related declines in cognitive functioning and enhance healthy cognitive aging. PMID:21441995

  5. Intermittent Flow In Yield Stress Fluids Slows Down Chaotic Mixing

    NASA Astrophysics Data System (ADS)

    Boujlel, Jalila; Wendell, Dawn; Gouillart, Emmanuelle; Pigeonneau, Franck; Jop, Pierre; Laboratoire Surface du Verre et Interfaces Team

    2013-11-01

    Many mixing situations involve fluids with non-Newtonian properties: the mixing of building materials such as concrete or mortar is based on fluids with shear-thinning rheological properties. Lack of correct mixing can waste time and money, or lead to products with defects. When fluids are stirred and mixed together at low Reynolds number, the fluid particles should undergo chaotic trajectories to be well mixed by the so-called chaotic advection resulting from the flow. Previous work to characterize chaotic mixing in many different geometries has primarily focused on Newtonian fluids. Early studies of non-Newtonian chaotic advection often utilized idealized mixing geometries, such as cavity flows or journal bearing flows, for numerical studies. Here, we present experimental results on the chaotic mixing of yield-stress fluids using a rod-stirring protocol with a rotating vessel. We describe the various steps of the mixing and determine their dependence on the fluid rheology and on the speeds of rotation of the rods and the vessel. We show how the mixing of yield-stress fluids by chaotic advection is reduced compared with that of Newtonian fluids and explain our results, bringing to light the relevant mechanisms: the presence of fluid that flows only intermittently, a phenomenon enhanced by the yield stress, and the importance of the peripheral region. This result is confirmed by numerical simulations.

  6. Vitamin E slows down the progression of osteoarthritis

    PubMed Central

    LI, XI; DONG, ZHONGLI; ZHANG, FUHOU; DONG, JUNJIE; ZHANG, YUAN

    2016-01-01

    Osteoarthritis is a chronic degenerative joint disorder characterized by articular cartilage destruction, subchondral bone alterations and synovitis. Clinical signs and symptoms of osteoarthritis include pain, stiffness, restricted motion and crepitus. It is the major cause of joint dysfunction in developed nations and has enormous social and economic consequences. Current treatments focus on symptomatic relief; however, they lack efficacy in controlling the progression of this disease, which is a leading cause of disability. Vitamin E is safe to use and may delay the progression of osteoarthritis by acting on several aspects of the disease. This review explores how vitamin E may promote the maintenance of skeletal muscle and the regulation of nucleic acid metabolism to delay osteoarthritis progression. In addition, it discusses how vitamin E may maintain the function of sex organs and the stability of mast cells, thus conferring greater resistance to the underlying disease process. Finally, the protective effect of vitamin E on the subchondral vascular system, which decreases the reactive remodeling in osteoarthritis, is reviewed. PMID:27347011

  7. Hydrodynamic interactions slow down crystallization of soft colloids.

    PubMed

    Roehm, Dominic; Kesselheim, Stefan; Arnold, Axel

    2014-08-14

    Colloidal suspensions are often argued to be an ideal model for studying phase transitions such as crystallization, as they have the advantage of tunable interactions and experimentally tractable time and length scales. Because crystallization is assumed to be unaffected by details of particle transport other than the bulk diffusion coefficient, findings are frequently argued to be transferable to pure melts without solvent. In this article, we present molecular dynamics simulations of crystallization in a suspension of colloids with Yukawa interactions which challenge this assumption. In order to investigate the role of hydrodynamic interactions mediated by the solvent, we model the solvent both implicitly and explicitly, using Langevin dynamics and the fluctuating lattice Boltzmann method, respectively. Our simulations show a significant reduction of the crystal growth velocity due to hydrodynamic interactions even at moderate hydrodynamic coupling. This slowdown is accompanied by a reduction of the width of the layering region in front of the growing crystal. Thus the dynamics of a colloidal suspension differ strongly from that of a melt, making it less useful as a model for solvent-free melts than previously thought.

  8. Slow down of actin depolymerization by cross-linking molecules.

    PubMed

    Schmoller, Kurt M; Semmrich, Christine; Bausch, Andreas R

    2011-02-01

    The ability to control the assembly and disassembly dynamics of actin filaments is an essential property of the cellular cytoskeleton. While many different proteins are known to accelerate the polymerization of monomers into filaments or to promote their disintegration, much less is known about the mechanisms that guarantee the kinetic stability of cytoskeletal filaments. Previous studies indicate that cross-linking molecules might fulfill these stabilizing tasks, which would additionally facilitate their ability to regulate the organization of cytoskeletal structures in vivo. The effect of depolymerization factors on such structures, and the mechanism that finally leads to their disintegration, remain unknown. Here, we use multiple depolymerization methods to demonstrate directly that cross-linking and bundling proteins effectively suppress actin depolymerization in a concentration-dependent manner. Even the actin-depolymerizing factor cofilin is not sufficient to bring about fast disintegration of highly cross-linked actin networks unless molecular motors are used simultaneously. The drastic modification of actin kinetics by cross-linking molecules can be expected to have wide-ranging implications for our understanding of the cytoskeleton, where cross-linking molecules are omnipresent and essential.

  9. [Tripeptides slow down aging process in renal cell culture].

    PubMed

    Khavinson, V Kh; Tarnovskaia, S I; Lin'kova, N S; Poliakova, V O; Durnova, A O; Nichik, T E; Kvetnoĭ, I M; D'iakonov, M M; Iakutseni, P P

    2014-01-01

    The mechanism of the geroprotective effect of the peptides AED and EDL was studied in an ageing renal cell culture. Peptides AED and EDL increase cell proliferation, decrease the expression of the ageing markers p16, p21 and p53, and increase the expression of SIRT-6 in young and aged renal cell cultures. Reduced SIRT-6 synthesis in the cell is one of the causes of cell senescence. On the basis of the experimental data, models of the interaction of the peptides with various DNA sites were constructed. Both peptides form their most energetically favorable complexes with d(ATATATATAT)2 sequences in the minor groove of DNA. It is suggested that the interaction of the peptides AED and EDL with DNA underlies the altered expression of the genes encoding ageing markers in renal cells.

  10. Gastropods slow down succession and maintain diversity in cryptogam communities.

    PubMed

    Boch, Steffen; Prati, Daniel; Fischer, Markus

    2016-09-01

    Herbivore effects on diversity and succession have often been studied in plants, but not in cryptogams. Besides direct herbivore effects on cryptogams, we expected indirect effects through changes in competitive interactions among cryptogams. We therefore conducted a long-term gastropod exclusion experiment testing for grazing effects on epiphytic cryptogam communities. We estimated the grazing damage, cover and diversity of cryptogams before gastropods were excluded and three and six years thereafter. Gastropod herbivory markedly affected cryptogams, except for bryophytes, depending strongly on host tree species and the duration of gastropod exclusion. On control trees, gastropod grazing regulated the growth of algae and non-lichenized fungi and thereby maintained high lichen diversity and cover. On European beech, release from gastropod grazing temporarily increased lichen vitality, cover, and species richness, but later caused rapid succession in which algae and fungi overgrew lichens and thereby reduced lichen cover and diversity compared with the control. On Norway spruce, lichen richness decreased and lichen cover increased without gastropods compared with the control. Our findings highlight the importance of long-term exclusion experiments for disentangling short-term, direct effects from longer-term, indirect effects mediated by changes in competitive relationships between taxa. We further demonstrate that gastropod feeding maintains the diversity of cryptogam communities.

  11. Monitoring accelerations with GPS in football: time to slow down?

    PubMed

    Buchheit, Martin; Al Haddad, Hani; Simpson, Ben M; Palazzi, Dino; Bourdon, Pitre C; Di Salvo, Valter; Mendez-Villanueva, Alberto

    2014-05-01

    The aims of the current study were to examine the magnitude of differences between GPS models in commonly reported running-based measures in football, examine between-unit variability, and assess the effect of software updates on these measures. Fifty identical-brand GPS units (15 SPI-proX and 35 SPIproX2, 15 Hz, GPSports, Canberra, Australia) were attached to a custom-made plastic sled towed by a player performing simulated match running activities. GPS data collected during training sessions over 4 wk from 4 professional football players (N = 53 files) were also analyzed before and after 2 manufacturer-supplied software updates. There were substantial differences between the models (e.g., standardized difference for the number of accelerations >4 m/s2 = 2.1; 90% confidence limits [1.4, 2.7], with 100% chance of a true difference). Between-unit variation ranged from 1% (maximal speed) to 56% (number of decelerations >4 m/s2). Some GPS units measured 2-6 times more acceleration/deceleration occurrences than others. Software updates did not substantially affect the distance covered at different speeds or the peak speed reached, but 1 of the updates led to large and small decreases in the occurrence of accelerations (-1.24; -1.32, -1.15) and decelerations (-0.45; -0.48, -0.41), respectively. Practitioners are advised to apply care when comparing data collected with different models or units or when updating their software. The metrics of accelerations and decelerations show the most variability in GPS monitoring and must be interpreted cautiously.

  12. Can Lionel Messi's brain slow down time passing?

    PubMed

    Jafari, Sajad; Smith, Leslie Samuel

    2016-01-01

    It seems that seeing others in slow motion is not confined to movie heroes. When Lionel Messi plays football, he hardly does anything that other players cannot do; why, then, is he so hard to stop? The answer may be that opponents do not have enough time to do what they want, because in Messi's neural system time passes more slowly. In differential equations that model a single neuron, this speed can be generated by multiplying all of the equations by the same factor; alternatively, interactions between neurons and the structure of the neural network may play this role.

  13. Using Paramagnetism to Slow Down Nuclear Relaxation in Protein NMR.

    PubMed

    Orton, Henry W; Kuprov, Ilya; Loh, Choy-Theng; Otting, Gottfried

    2016-12-01

    Paramagnetic metal ions accelerate nuclear spin relaxation; this effect is widely used for distance measurement and called paramagnetic relaxation enhancement (PRE). Theoretical predictions established that, under special circumstances, it is also possible to achieve a reduction in nuclear relaxation rates (negative PRE). This situation would occur if the mechanism of nuclear relaxation in the diamagnetic state is counterbalanced by a paramagnetic relaxation mechanism caused by the metal ion. Here we report the first experimental evidence for such a cross-correlation effect. Using a uniformly (15)N-labeled mutant of calbindin D9k loaded with either Tm(3+) or Tb(3+), reduced R1 and R2 relaxation rates of backbone (15)N spins were observed compared with the diamagnetic reference (the same protein loaded with Y(3+)). The effect arises from the compensation of the chemical shift anisotropy tensor by the anisotropic dipolar shielding generated by the unpaired electron spin.

  14. Slow Down to Brake: Effects of Tapering Epinephrine on Potassium.

    PubMed

    Veerbhadran, Sivaprasad; Nayagam, Asher Ennis; Ramraj, Sandeep; Raghavan, Jaganathan

    2016-07-01

    Hyperkalemia is not an uncommon complication of cardiac surgical procedures, and intractable hyperkalemia is a difficult situation that can even lead to death. We report a postoperative case in which a sudden decrease in epinephrine led to intractable hyperkalemia and cardiac arrest. We wish to draw the reader's attention to the fact that sudden discontinuation of epinephrine can lead to dangerous hyperkalemia.

  15. Cholesterol homeostasis: a key to prevent or slow down neurodegeneration.

    PubMed

    Anchisi, Laura; Dessì, Sandra; Pani, Alessandra; Mandas, Antonella

    2012-01-01

    Neurodegeneration, a common feature of many brain disorders, has severe consequences for the mental and physical health of an individual. Typically, human neurodegenerative diseases are devastating illnesses that predominantly affect elderly people, progress slowly, and lead to disability and premature death; however, they may occur at all ages. Despite extensive research and investment, current therapeutic interventions against these disorders treat only the symptoms. Since the underlying mechanisms of damage to neurons are similar despite heterogeneous etiologies and backgrounds, it would therefore be of interest to identify possible trigger points of neurodegeneration, enabling the development of drugs and/or prevention strategies that target many disorders simultaneously. Among the factors identified so far as causes of neurodegeneration, failures in cholesterol homeostasis are indubitably the best investigated. The aim of this review is to critically discuss some of the main results reported in recent years in this field, focusing mainly on the mechanisms that, by recovering perturbations of cholesterol homeostasis in neuronal cells, may correct clinically relevant features occurring in different neurodegenerative disorders, and, in this regard, also to debate current potential therapeutic interventions.

  16. Slowing Down Surface Plasmons on a Moiré Surface

    NASA Astrophysics Data System (ADS)

    Kocabas, Askin; Senlik, S. Seckin; Aydinli, Atilla

    2009-02-01

    We have demonstrated slow propagation of surface plasmons on metallic Moiré surfaces. The phase shift at the node of the Moiré surface localizes the propagating surface plasmons, and adjacent nodes form weakly coupled plasmonic cavities. Group velocities around v_g = 0.44c at the center of the coupled-cavity band and an almost zero group velocity at the band edges are observed. A tight-binding model is used to understand the coupling behavior. Furthermore, the sinusoidally modified amplitude about the node suppresses the radiation losses and yields a relatively high quality factor (Q = 103).
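    The tight-binding picture can be made concrete with a one-line dispersion: a cosine band whose derivative (the group velocity) is maximal at the band center and vanishes at the band edges, consistent with the reported observations. The parameter values below are hypothetical, not those of the experiment.

```python
import numpy as np

Omega = 1.0   # single-cavity resonance (arbitrary units, hypothetical)
J = 0.05      # nearest-neighbor coupling strength (hypothetical)
a = 1.0       # cavity spacing (hypothetical)

k = np.linspace(0.0, np.pi / a, 501)
omega = Omega - 2.0 * J * np.cos(k * a)   # tight-binding cosine band
v_g = np.gradient(omega, k)               # group velocity d(omega)/dk

# Maximal at the band center, vanishing at the band edges:
print(v_g[len(k) // 2], v_g[0], v_g[-1])
```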

  17. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  18. Kernel method for corrections to scaling.

    PubMed

    Harada, Kenji

    2015-07-01

    Scaling analysis, in which one infers scaling exponents and a scaling function in a scaling law from given data, is a powerful tool for determining universal properties of critical phenomena in many fields of science. However, there are corrections to scaling in many cases, and then the inference problem becomes ill-posed by an uncontrollable irrelevant scaling variable. We propose a new kernel method based on Gaussian process regression to fix this problem generally. We test the performance of the new kernel method for some example cases. In all cases, when the precision of the example data increases, inference results of the new kernel method correctly converge. Because there is no limitation in the new kernel method for the scaling function even with corrections to scaling, unlike in the conventional method, the new kernel method can be widely applied to real data in critical phenomena.
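    A hedged sketch of the underlying idea, assuming the standard finite-size-scaling form and made-up synthetic data: rescaled observations from all system sizes should collapse onto one curve, which is modeled nonparametrically by Gaussian process regression, and candidate critical parameters are scored by the GP log marginal likelihood. Harada's treatment of corrections to scaling is omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def collapse_score(Ts, Ls, obs, Tc, nu, beta_over_nu):
    """GP log marginal likelihood of the scaling collapse for one candidate
    (Tc, nu, beta/nu); higher means a better collapse."""
    X = ((Ts - Tc) * Ls ** (1.0 / nu)).reshape(-1, 1)  # scaling variable
    y = obs * Ls ** beta_over_nu                       # rescaled observable
    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3))
    gp.fit(X, y)
    return gp.log_marginal_likelihood_value_

# Synthetic data obeying O = L^(-beta/nu) f((T - Tc) L^(1/nu)) plus noise.
rng = np.random.default_rng(0)
Ls = np.repeat([8.0, 16.0, 32.0], 20)
Ts = np.tile(np.linspace(0.9, 1.1, 20), 3)
obs = Ls ** -0.125 * np.tanh((Ts - 1.0) * Ls) + 0.01 * rng.standard_normal(60)

# One would maximize this score over (Tc, nu, beta/nu), e.g. with scipy.
print(collapse_score(Ts, Ls, obs, Tc=1.0, nu=1.0, beta_over_nu=0.125))
```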

  19. Dose point kernel for boron-11 decay and the cellular S values in boron neutron capture therapy.

    PubMed

    Ma, Yunzhi; Geng, JinPeng; Gao, Song; Bao, Shanglian

    2006-12-01

    The study of the radiobiology of boron neutron capture therapy is based on the cellular level dosimetry of boron-10's thermal neutron capture reaction 10B(n,alpha)7Li, in which one 1.47 MeV helium-4 ion and one 0.84 MeV lithium-7 ion are spawned. Because of the chemical preference of boron-10 carrier molecules, the dose is heterogeneously distributed in cells. In the present work, the (scaled) dose point kernel of boron-11 decay, called 11B-DPK, was calculated by GEANT4 Monte Carlo simulation code. The DPK curve drops suddenly at the radius of 4.26 microm, the continuous slowing down approximation (CSDA) range of a lithium-7 ion. Then, after a slight ascending, the curve decreases to near zero when the radius goes beyond 8.20 microm, which is the CSDA range of a 1.47 MeV helium-4 ion. With the DPK data, S values for nuclei and cells with the boron-10 on the cell surface are calculated for different combinations of cell and nucleus sizes. The S value for a cell radius of 10 microm and a nucleus radius of 5 microm is slightly larger than the value published by Tung et al. [Appl. Radiat. Isot. 61, 739-743 (2004)]. This result is potentially more accurate than the published value since it includes the contribution of a lithium-7 ion as well as the alpha particle.

  20. Einstein Critical-Slowing-Down is Siegel CyberWar Denial-of-Access Queuing/Pinning/ Jamming/Aikido Via Siegel DIGIT-Physics BEC ``Intersection''-BECOME-UNION Barabasi Network/GRAPH-Physics BEC: Strutt/Rayleigh-Siegel Percolation GLOBALITY-to-LOCALITY Phase-Transition Critical-Phenomenon

    NASA Astrophysics Data System (ADS)

    Buick, Otto; Falcon, Pat; Alexander, G.; Siegel, Edward Carl-Ludwig

    2013-03-01

    Einstein[Dover(03)] critical-slowing-down(CSD)[Pais, Subtle in The Lord; Life & Sci. of Albert Einstein(81)] is Siegel CyberWar denial-of-access(DOA) operations-research queuing theory/pinning/jamming/.../Read [Aikido, Aikibojitsu & Natural-Law(90)]/Aikido(!!!) phase-transition critical-phenomenon via Siegel DIGIT-Physics (Newcomb[Am.J.Math. 4,39(1881)]-{Planck[(1901)]-Einstein[(1905)])-Poincare[Calcul Probabilités(12)-p.313]-Weyl [Goett.Nachr.(14); Math.Ann.77,313 (16)]-{Bose[(24)-Einstein[(25)]-Fermi[(27)]-Dirac[(1927)]}-``Benford''[Proc.Am.Phil.Soc. 78,4,551 (38)]-Kac[Maths.Stat.-Reasoning(55)]-Raimi[Sci.Am. 221,109 (69)...]-Jech[preprint, PSU(95)]-Hill[Proc.AMS 123,3,887(95)]-Browne[NYT(8/98)]-Antonoff-Smith-Siegel[AMS Joint-Mtg.,S.-D.(02)] algebraic-inversion to yield ONLY BOSE-EINSTEIN QUANTUM-statistics (BEQS) with ZERO-digit Bose-Einstein CONDENSATION(BEC) ``INTERSECTION''-BECOME-UNION to Barabasi[PRL 876,5632(01); Rev.Mod.Phys.74,47(02)...] Network /Net/GRAPH(!!!)-physics BEC: Strutt/Rayleigh(1881)-Polya(21)-``Anderson''(58)-Siegel[J.Non-crystalline-Sol.40,453(80)

  1. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
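    A minimal sketch of the rounded-kernel idea for a single component, assuming thresholds a_0 = -infinity and a_j = j for j >= 1: a continuous Gaussian is binned into counts, giving a distribution whose variance can fall below its mean, which a Poisson mixture cannot represent.

```python
import numpy as np
from scipy.stats import norm

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) = Phi((j + 1 - mu)/sigma) - Phi((j - mu)/sigma) for j >= 1;
    the j = 0 bin absorbs all mass below 1."""
    j = np.asarray(j)
    upper = norm.cdf((j + 1 - mu) / sigma)
    lower = np.where(j == 0, 0.0, norm.cdf((j - mu) / sigma))
    return upper - lower

js = np.arange(0, 15)
p = rounded_gaussian_pmf(js, mu=5.0, sigma=0.6)
mean = (js * p).sum()
var = ((js - mean) ** 2 * p).sum()
print(mean, var)   # variance well below the mean (under-dispersion)
```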

  2. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  3. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive-definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library that contains templates for learning and knowledge discovery algorithms, such as different versions of EM, and numeric optimization methods, such as conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allow AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
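    A hedged sketch of the kernel construction only (without the Bayesian priors or the AUTOBAYES code generation described above): an ensemble of Gaussian mixture models is fit to the data, and the kernel value between two points is the averaged inner product of their cluster-posterior vectors, so points the ensemble tends to co-cluster score high.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(X, n_models=5, n_components=3, seed=0):
    """Average co-clustering kernel over an ensemble of GMMs."""
    K = np.zeros((len(X), len(X)))
    for m in range(n_models):
        gmm = GaussianMixture(n_components, random_state=seed + m).fit(X)
        P = gmm.predict_proba(X)      # posterior P(cluster | x)
        K += P @ P.T                  # inner products of posterior vectors
    return K / n_models

X = np.random.default_rng(0).standard_normal((40, 2))
K = mixture_density_kernel(X)
print(np.all(np.linalg.eigvalsh((K + K.T) / 2) > -1e-9))   # p.s.d. check
```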

  4. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods for real-world problems, however, some or all of the symmetry and positive-definiteness may be lost in digitalization due to small random errors that occur unpredictably during various stages of execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal polynomials of several variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that, under some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  5. Relationship between cyanogenic compounds in kernels, leaves, and roots of sweet and bitter kernelled almonds.

    PubMed

    Dicenta, F; Martínez-Gómez, P; Grané, N; Martín, M L; León, A; Cánovas, J A; Berenguer, V

    2002-03-27

    The relationship between the levels of cyanogenic compounds (amygdalin and prunasin) in kernels, leaves, and roots of 5 sweet-, 5 slightly bitter-, and 5 bitter-kernelled almond trees was determined. Variability was observed among the genotypes for these compounds. Prunasin was found only in the vegetative parts (roots and leaves) for all genotypes tested. Amygdalin was detected only in the kernels, mainly in bitter genotypes. In general, bitter-kernelled genotypes had higher levels of prunasin in their roots than nonbitter ones, but the correlation between cyanogenic compounds in the different parts of the plants was not high. While prunasin seems to be present in most almond roots (with variable concentration), only bitter-kernelled genotypes are able to transform it into amygdalin in the kernel. Breeding for prunasin-based resistance to the buprestid beetle Capnodis tenebrionis L. is discussed.

  6. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  7. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  8. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  9. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  10. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  11. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...

  12. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  13. Kernel-Based Equiprobabilistic Topographic Map Formation.

    PubMed

    Van Hulle MM

    1998-09-15

    We introduce a new unsupervised competitive learning rule, the kernel-based maximum entropy learning rule (kMER), which performs equiprobabilistic topographic map formation in regular, fixed-topology lattices, for use with nonparametric density estimation as well as nonparametric regression analysis. The receptive fields of the formal neurons are overlapping radially symmetric kernels, compatible with radial basis functions (RBFs); but unlike other learning schemes, the radii of these kernels do not have to be chosen in an ad hoc manner: the radii are adapted to the local input density, together with the weight vectors that define the kernel centers, so as to produce maps of which the neurons have an equal probability to be active (equiprobabilistic maps). Both an "online" and a "batch" version of the learning rule are introduced, which are applied to nonparametric density estimation and regression, respectively. The application envisaged is blind source separation (BSS) from nonlinear, noisy mixtures.

  14. Bergman kernel from the lowest Landau level

    NASA Astrophysics Data System (ADS)

    Klevtsov, S.

    2009-07-01

    We use a path integral representation for the density matrix, projected on the lowest Landau level, to generalize the expansion of the Bergman kernel on a Kähler manifold to the case of an arbitrary magnetic field.

  15. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
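
    As an illustration of the kernel-summation bookkeeping described above, the following sketch combines single- and double-kernel energies using the standard double-kernel KEM combination rule; the energy values are placeholders for illustration, not the output of any real quantum-chemical calculation.

      # Minimal sketch of the kernel energy method (KEM) bookkeeping. E_single[a]
      # is the ab initio energy of kernel a alone; E_pair[(a, b)] is the energy
      # of the joined double kernel (a, b). All values are made-up placeholders.
      from itertools import combinations

      def kem_total_energy(E_single, E_pair):
          """Double-kernel KEM estimate: sum of pair energies minus (n - 2)
          times the sum of single-kernel energies."""
          n = len(E_single)
          pair_sum = sum(E_pair[(a, b)] for a, b in combinations(range(n), 2))
          return pair_sum - (n - 2) * sum(E_single)

      # Toy example with three kernels (energies in hartree, invented).
      E_single = [-150.2, -210.7, -180.4]
      E_pair = {(0, 1): -361.1, (0, 2): -330.8, (1, 2): -391.3}
      print(kem_total_energy(E_single, E_pair))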

  16. KITTEN Lightweight Kernel 0.1 Beta

    SciTech Connect

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne; VanDyke, John; Hudson, Trammell

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.

  17. TICK: Transparent Incremental Checkpointing at Kernel Level

    SciTech Connect

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows the saving and restoring of user processes without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be thawed in another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in Linux version 2.6.5.

  18. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  19. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, where φ(x,y) is an almost-analytic extension of φ(x) = φ(x,x), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain an analogous asymptotic expansion for the Berezin transform and give applications to Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  20. RKF-PCA: robust kernel fuzzy PCA.

    PubMed

    Heo, Gyeongyong; Gader, Paul; Frigui, Hichem

    2009-01-01

    Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of the data while retaining most of the variation in the data. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA and the limitation to linear components originates from the fact that PCA uses an affine transform defined by eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends the kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in the RKF-PCA satisfies Mercer's condition, which means that the derivation of the K-PCA is also valid for the RKF-PCA. Formal analyses and experimental results suggest that the RKF-PCA is an efficient non-linear dimension reduction method and is more noise-robust than the original kernel PCA.
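
    For orientation, here is a minimal sketch of the plain kernel PCA that RKF-PCA builds upon (Gaussian kernel, Gram-matrix centering, projection on the top eigenvectors); the fuzzy-membership robustification proposed in the paper is not reproduced.

      # Plain kernel PCA with a Gaussian (RBF) kernel.
      import numpy as np

      def kernel_pca(X, n_components=2, gamma=1.0):
          # Gram matrix of pairwise RBF kernel evaluations.
          sq = np.sum(X**2, axis=1)
          K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
          # Double-centering corresponds to mean-centering in feature space.
          n = K.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n
          Kc = J @ K @ J
          # Nonlinear principal components from the top eigenvectors.
          vals, vecs = np.linalg.eigh(Kc)
          idx = np.argsort(vals)[::-1][:n_components]
          return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

      X = np.random.default_rng(0).normal(size=(100, 5))
      print(kernel_pca(X).shape)  # (100, 2)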

  1. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov-regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
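
    A minimal sketch of the representer-theorem recipe on a toy graph, assuming a diffusion kernel K = expm(-βL) as the graph kernel (one of several common choices; the paper's multi-kernel machinery is not shown):

      # Kernel ridge reconstruction of a graph signal from a sampled subset.
      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(1)
      n = 30
      A = (rng.random((n, n)) < 0.1).astype(float)
      A = np.triu(A, 1); A = A + A.T                 # symmetric adjacency
      L = np.diag(A.sum(1)) - A                      # combinatorial Laplacian
      K = expm(-0.5 * L)                             # diffusion kernel, beta = 0.5

      true = K @ rng.normal(size=n)                  # a smooth signal on the graph
      S = rng.choice(n, size=10, replace=False)      # observed vertices
      y = true[S] + 0.01 * rng.normal(size=10)       # noisy samples

      # Representer theorem: estimate = K[:, S] @ alpha, with
      # alpha = (K[S, S] + mu * I)^{-1} y (kernel ridge regression, mu = 1e-3).
      alpha = np.linalg.solve(K[np.ix_(S, S)] + 1e-3 * np.eye(10), y)
      est = K[:, S] @ alpha
      print(np.linalg.norm(est - true) / np.linalg.norm(true))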

  2. Oecophylla longinoda (Hymenoptera: Formicidae) Lead to Increased Cashew Kernel Size and Kernel Quality.

    PubMed

    Anato, F M; Sinzogan, A A C; Offenberg, J; Adandonon, A; Wargui, R B; Deguenon, J M; Ayelo, P M; Vayssières, J-F; Kossou, D K

    2017-03-03

    Weaver ants, Oecophylla spp., are known to positively affect cashew, Anacardium occidentale L., raw nut yield, but their effects on the kernels have not been reported. We compared nut size and the proportion of marketable kernels between raw nuts collected from trees with and without ants. Raw nuts collected from trees with weaver ants were 2.9% larger than nuts from control trees (i.e., without weaver ants), leading to a 14% higher proportion of marketable kernels. On trees with ants, the kernel:raw nut ratio from nuts damaged by formic acid was 4.8% lower compared with nondamaged nuts from the same trees. Weaver ants provided three benefits to cashew production: increasing yields, yielding larger nuts, and producing greater proportions of marketable kernel mass.

  3. A new Mercer sigmoid kernel for clinical data classification.

    PubMed

    Carrington, André M; Fieguth, Paul W; Chen, Helen H

    2014-01-01

    In classification with Support Vector Machines, only Mercer kernels, i.e. valid kernels, such as the Gaussian RBF kernel, are widely accepted and thus suitable for clinical data. Practitioners would also like to use the sigmoid kernel, a non-Mercer kernel, but its range of validity is difficult to determine, and even within range its validity is in dispute. Despite these shortcomings the sigmoid kernel is used by some, and two kernels in the literature attempt to emulate and improve upon it. We propose the first Mercer sigmoid kernel, which is therefore trustworthy for the classification of clinical data. We show the similarity between the Mercer sigmoid kernel and the sigmoid kernel and, in the process, identify a normalization technique that improves the classification accuracy of the latter. The Mercer sigmoid kernel achieves the best mean accuracy on three clinical data sets, detecting melanoma in skin lesions better than the most popular kernels; while with non-clinical data sets it has no significant difference in median accuracy as compared with the Gaussian RBF kernel. It consistently classifies some points correctly that the Gaussian RBF kernel does not and vice versa.

  4. Kernel bandwidth optimization in spike rate estimation.

    PubMed

    Shimazaki, Hideaki; Shinomoto, Shigeru

    2010-08-01

    Kernel smoother and a time-histogram are classical tools for estimating an instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time-histogram, based on the principle of minimizing the mean integrated square error (MISE) between the estimated rate and unknown underlying rate. Here we apply the same optimization principle to the kernel density estimation in selecting the width or "bandwidth" of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with data. The variable kernel has the potential to accurately grasp non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. In order to avoid possible overfitting that may take place due to excessive freedom, we introduced a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. It is revealed that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for a given set of spike data, according to the optimization methods presented here.
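
    The selection principle can be illustrated with least-squares cross-validation, a standard MISE-minimizing bandwidth selector for a Gaussian kernel; this is a stand-in for, not a reproduction of, the authors' estimator:

      # Bandwidth selection for a Gaussian KDE by least-squares cross-validation,
      # i.e. minimizing an unbiased estimate of the MISE over a grid of widths.
      import numpy as np

      def lscv_score(x, h):
          n = len(x)
          d2 = (x[:, None] - x[None, :]) ** 2
          # Integral of the squared estimate: (1/n^2) sum_ij N(xi - xj; 0, 2h^2).
          term1 = np.exp(-d2 / (4 * h * h)).sum() / (n * n * 2 * h * np.sqrt(np.pi))
          # Leave-one-out term: (2/(n(n-1))) sum_{i != j} N(xi - xj; 0, h^2).
          off = np.exp(-d2 / (2 * h * h)).sum() - n      # drop the diagonal
          term2 = 2 * off / (n * (n - 1) * h * np.sqrt(2 * np.pi))
          return term1 - term2

      rng = np.random.default_rng(0)
      x = rng.normal(size=200)
      grid = np.linspace(0.05, 1.0, 40)
      best = grid[np.argmin([lscv_score(x, h) for h in grid])]
      print("selected bandwidth:", best)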

  5. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
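
    A minimal sketch of kernel-weighted analog forecasting with delay-coordinate states, assuming a plain Gaussian similarity kernel rather than the dynamics-adapted kernels developed in the paper:

      # Kernel-weighted analog forecast: weight historical analogs of the current
      # delay-coordinate vector with a Gaussian kernel and average their futures.
      import numpy as np

      def analog_forecast(history, current, lead=5, delay=3, eps=0.5):
          # Delay-coordinate vectors built from the scalar historical record.
          T = len(history)
          idx = np.arange(delay - 1, T - lead)
          analogs = np.stack([history[i - delay + 1:i + 1] for i in idx])
          futures = history[idx + lead]
          # Gaussian similarity between the current state and each analog.
          d2 = np.sum((analogs - current) ** 2, axis=1)
          w = np.exp(-d2 / eps)
          return np.sum(w * futures) / np.sum(w)

      rng = np.random.default_rng(2)
      t = np.arange(2000)
      series = np.sin(0.1 * t) + 0.1 * rng.normal(size=2000)
      print(analog_forecast(series[:-3], series[-3:], lead=5, delay=3))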

  6. Online Sequential Extreme Learning Machine With Kernels.

    PubMed

    Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio

    2015-09-01

    The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets.
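
    For context, a bare-bones kernel recursive least-squares (the KAF building block referred to above) can be written as follows; the sparsification criteria and the ELM specifics are omitted, so this model grows with every sample:

      # Bare-bones kernel recursive least-squares (KRLS): grow the kernel model
      # one sample at a time, updating the regularized Gram inverse by block
      # inversion rather than refitting from scratch.
      import numpy as np

      def rbf(a, b, gamma=1.0):
          return np.exp(-gamma * np.sum((a - b) ** 2))

      class SimpleKRLS:
          def __init__(self, lam=1e-2, gamma=1.0):
              self.lam, self.gamma = lam, gamma
              self.X, self.y, self.Q = [], [], None  # centers, targets, inverse

          def update(self, x, y):
              if self.Q is None:
                  self.Q = np.array([[1.0 / (1.0 + self.lam)]])
              else:
                  k = np.array([rbf(x, c, self.gamma) for c in self.X])
                  Qk = self.Q @ k
                  s = 1.0 / (1.0 + self.lam - k @ Qk)  # Schur complement
                  top = np.hstack([self.Q + s * np.outer(Qk, Qk), -s * Qk[:, None]])
                  bot = np.hstack([-s * Qk, [s]])
                  self.Q = np.vstack([top, bot])
              self.X.append(x); self.y.append(y)
              self.alpha = self.Q @ np.array(self.y)   # expansion coefficients

          def predict(self, x):
              k = np.array([rbf(x, c, self.gamma) for c in self.X])
              return k @ self.alpha

      model = SimpleKRLS()
      rng = np.random.default_rng(3)
      for _ in range(100):
          x = rng.uniform(-3, 3, size=1)
          model.update(x, np.sin(x[0]) + 0.05 * rng.normal())
      print(model.predict(np.array([1.0])))  # near sin(1) ≈ 0.84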

  7. The connection between regularization operators and support vector kernels.

    PubMed

    Smola, Alex J.; Schölkopf, Bernhard; Müller, Klaus Robert

    1998-06-01

    In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's Functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels in the view of regularization theory and corresponding operators associated with the classes of both polynomial kernels and translation invariant kernels. The latter are also analyzed on periodical domains. As a by-product we show that a large number of radial basis functions, namely conditionally positive definite functions, may be used as support vector kernels.

  8. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
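
    For Gaussian kernels the quadratic (order-2) Renyi entropy has a closed form through the so-called information potential, which the following sketch implements with a fixed, hand-picked bandwidth; the optimal bandwidth and kernel selection results of the paper are not reproduced:

      # Quadratic Renyi entropy from a Gaussian KDE. For Gaussian kernels the
      # integral of the squared density has the closed form
      # V = (1/n^2) sum_ij N(x_i - x_j; 0, 2 sigma^2), and H2 = -log V.
      import numpy as np

      def quadratic_entropy(x, sigma=0.3):
          n = len(x)
          d2 = (x[:, None] - x[None, :]) ** 2
          V = np.exp(-d2 / (4 * sigma**2)).sum() / (n**2 * 2 * sigma * np.sqrt(np.pi))
          return -np.log(V)

      rng = np.random.default_rng(0)
      print(quadratic_entropy(rng.normal(size=500)))      # standard normal data
      print(quadratic_entropy(rng.uniform(-1, 1, 500)))   # uniform data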

  9. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
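
    A naive O(n²) reference sampler illustrates the model itself (the paper's contribution is precisely avoiding this quadratic cost; the kernel used below is an arbitrary illustrative choice):

      # Naive sampler for an inhomogeneous random (kernel) graph: vertices carry
      # types x_i in (0, 1] and edge ij appears with probability
      # min(1, kernel(x_i, x_j) / n).
      import numpy as np

      def sample_kernel_graph(n, kernel, rng):
          x = 1.0 - rng.random(n)                 # vertex types in (0, 1]
          edges = []
          for i in range(n):
              for j in range(i + 1, n):
                  if rng.random() < min(1.0, kernel(x[i], x[j]) / n):
                      edges.append((i, j))
          return x, edges

      # A product kernel c*(uv)^(-a), 0 < a < 1, gives power-law-like degrees.
      kern = lambda u, v: 2.0 * (u * v) ** -0.5
      x, edges = sample_kernel_graph(1000, kern, np.random.default_rng(4))
      print(len(edges), "edges")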

  11. Phenolic constituents of shea (Vitellaria paradoxa) kernels.

    PubMed

    Maranz, Steven; Wiesman, Zeev; Garti, Nissim

    2003-10-08

    Analysis of the phenolic constituents of shea (Vitellaria paradoxa) kernels by LC-MS revealed eight catechin compounds-gallic acid, catechin, epicatechin, epicatechin gallate, gallocatechin, epigallocatechin, gallocatechin gallate, and epigallocatechin gallate-as well as quercetin and trans-cinnamic acid. The mean kernel content of the eight catechin compounds was 4000 ppm (0.4% of kernel dry weight), with a 2100-9500 ppm range. Comparison of the profiles of the six major catechins from 40 Vitellaria provenances from 10 African countries showed that the relative proportions of these compounds varied from region to region. Gallic acid was the major phenolic compound, comprising an average of 27% of the measured total phenols and exceeding 70% in some populations. Colorimetric analysis (101 samples) of total polyphenols extracted from shea butter into hexane gave an average of 97 ppm, with the values for different provenances varying between 62 and 135 ppm of total polyphenols.

  12. Tile-Compressed FITS Kernel for IRAF

    NASA Astrophysics Data System (ADS)

    Seaman, R.

    2011-07-01

    The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.

  13. Fractal Weyl law for Linux Kernel architecture

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of the spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65, which corresponds to a fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of the Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with fractal dimension d < 2.
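
    The object of study can be illustrated on a small synthetic directed network; the sketch below builds the Google matrix G = αS + (1-α)/N and computes its spectrum with numpy (the Linux call graph itself is not reproduced):

      # Spectrum of the Google matrix of a small random directed network.
      import numpy as np

      rng = np.random.default_rng(5)
      N = 200
      A = (rng.random((N, N)) < 0.03).astype(float)  # directed adjacency
      np.fill_diagonal(A, 0)

      S = np.zeros((N, N))
      out = A.sum(axis=1)
      for j in range(N):    # column-stochastic transition matrix S_ij = A_ji/out_j
          S[:, j] = A[j] / out[j] if out[j] > 0 else 1.0 / N  # dangling-node fix

      alpha = 0.85
      G = alpha * S + (1 - alpha) / N                # Google matrix
      eig = np.linalg.eigvals(G)
      print("leading |eigenvalues|:", np.sort(np.abs(eig))[::-1][:5])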

  14. A kernel-based approach for biomedical named entity recognition.

    PubMed

    Patra, Rakesh; Saha, Sujan Kumar

    2013-01-01

    Support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks, including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, the string kernel, graph kernel, tree kernel and so on. So far very few efforts have been devoted to the development of an NER-specific kernel. In the literature we found that the tree kernel has been used in the NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we propose a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we have applied the kernel function to the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.

  15. Experimental study of turbulent flame kernel propagation

    SciTech Connect

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark-ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the variation of spark energy from shot to shot. Four flames have been investigated at equivalence ratios, φj, of 0.8 and 1.0 and jet velocities, Uj, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 µs and 2 ms. The data show that the flame kernel structure starts with a spherical shape and changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower jet velocity cases. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 µs, is almost three times faster than that at the later stage between 100 and 2000 µs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions.

  16. A dynamic kernel modifier for linux

    SciTech Connect

    Minnich, R. G.

    2002-09-03

    Dynamic Kernel Modifier, or DKM, is a kernel module for Linux that allows user-mode programs to modify the execution of functions in the kernel without recompiling or modifying the kernel source in any way. Functions may be traced, either function entry only or function entry and exit; nullified; or replaced with some other function. For the tracing case, function execution results in the activation of a watchpoint. When the watchpoint is activated, the address of the function is logged in a FIFO buffer that is readable by external applications. The watchpoints are time-stamped with the resolution of the processor high-resolution timers, which on most modern processors are accurate to a single processor tick. DKM is very similar to earlier systems such as the SunOS trace device or Linux TT. Unlike these two systems, and other similar systems, DKM requires no kernel modifications. DKM allows users to do initial probing of the kernel to look for performance problems, or even to resolve potential problems by turning functions off or replacing them. DKM watchpoints are not without cost: it takes about 200 nanoseconds to make a log entry on an 800 MHz Pentium III. The overhead numbers are actually competitive with other hardware-based trace systems, although DKM has less accuracy than an In-Circuit Emulator such as the American Arium. Once the user has zeroed in on a problem, other mechanisms with a higher degree of accuracy can be used.

  17. Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates

    SciTech Connect

    Hanft, J.M.; Jones, R.J.

    1986-06-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.

  18. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts Gauss elimination to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  19. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be done first without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  20. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing
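
    The delta-function basis reduces kernel fitting to linear least squares, as the following sketch shows; a plain ridge penalty stands in here for the paper's regularization scheme, and all names are illustrative:

      # Fit a PSF-matching convolution kernel on a delta-function basis: each
      # basis function is a single-pixel shift of the reference image R, and we
      # fit R (*) k ≈ T by ridge-regularized linear least squares.
      import numpy as np
      from scipy.signal import convolve2d

      def fit_matching_kernel(R, T, half=1, lam=1e-6):
          h = half
          # One design-matrix column per kernel pixel: a shifted copy of R.
          shifts = [(dy, dx) for dy in range(-h, h + 1) for dx in range(-h, h + 1)]
          A = np.stack([np.roll(np.roll(R, dy, 0), dx, 1)[h:-h, h:-h].ravel()
                        for dy, dx in shifts], axis=1)
          b = T[h:-h, h:-h].ravel()
          # Regularized normal equations: (A^T A + lam I) k = A^T b.
          k = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
          return k.reshape(2 * h + 1, 2 * h + 1)

      rng = np.random.default_rng(6)
      R = rng.normal(size=(64, 64))                       # reference image
      true_kernel = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])
      T = convolve2d(R, true_kernel, mode="same")         # target image
      print(fit_matching_kernel(R, T).round(3))           # recovers true_kernel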

  1. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function, with examples shown for simple wire structures.

  2. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    PubMed

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (µCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of µCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.
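
    The PLS calibration step can be sketched with scikit-learn on synthetic spectra (the real µCT and NIR data are obviously not reproduced here):

      # PLS regression of a trait (e.g. kernel density) on spectra, in the spirit
      # of the single-kernel NIR calibration above; all data below are synthetic.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)
      n, p = 300, 120                     # 300 kernels, 120 spectral channels
      latent = rng.normal(size=(n, 3))    # a few underlying chemical factors
      spectra = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
      density = latent @ np.array([0.5, -0.2, 0.1]) + 0.01 * rng.normal(size=n)

      Xtr, Xte, ytr, yte = train_test_split(spectra, density, random_state=0)
      pls = PLSRegression(n_components=3).fit(Xtr, ytr)
      print("R^2 on held-out kernels:", pls.score(Xte, yte))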

  3. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    SciTech Connect

    Charles Barnes; CLay Richardson; Scott Nagley; John Hunn; Eric Shaber

    2010-10-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% ²³⁵U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  4. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  5. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  6. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  7. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  8. Multiple spectral kernel learning and a gaussian complexity computation.

    PubMed

    Reyhani, Nima

    2013-07-01

    Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combination of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix is effectively of low rank. However, the low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning that efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the Gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds show otherwise.

  9. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  10. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of...

  11. Thermomechanical property of rice kernels studied by DMA

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The thermomechanical property of the rice kernels was investigated using a dynamic mechanical analyzer (DMA). The length change of rice kernel with a loaded constant force along the major axis direction was detected during temperature scanning. The thermomechanical transition occurred in rice kernel...

  12. NIRS method for precise identification of Fusarium damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development of scab resistant wheat varieties may be enhanced by non-destructive evaluation of kernels for Fusarium damaged kernels (FDKs) and deoxynivalenol (DON) levels. Fusarium infection generally affects kernel appearance, but insect damage and other fungi can cause similar symptoms. Also, some...

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  15. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  16. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  17. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... shall mean the actual gross weight of any lot of almonds: Less weight of containers; less moisture of... material, 350 grams, and moisture content of kernels, seven percent. Excess moisture is two percent. The...: Edible kernels, 840 grams; inedible kernels, 120 grams; foreign material, 40 grams; and moisture...

  18. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  19. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  20. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  1. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  2. Protein Structure Prediction Using String Kernels

    DTIC Science & Technology

    2006-03-03

    The dataset consists of 4352 sequences from SCOP version 1.53, extracted from the Astral database and grouped into families and superfamilies.

  3. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
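
    A minimal kernel TD(0) value estimator in the kernel-adaptive-filtering style conveys the flavor of the approach: each observed transition adds one kernel unit scaled by the TD error. Eligibility traces, i.e. the (λ) part, and the decoding application are omitted:

      # Minimal kernel TD(0): the value function is a kernel expansion, and each
      # transition appends one Gaussian unit weighted by eta times the TD error.
      import numpy as np

      class KernelTD:
          def __init__(self, eta=0.2, gamma_rl=0.9, width=0.02):
              self.eta, self.g, self.w = eta, gamma_rl, width
              self.centers, self.coef = [], []

          def value(self, s):
              if not self.centers:
                  return 0.0
              c = np.array(self.centers)
              k = np.exp(-np.sum((c - s) ** 2, axis=1) / self.w)
              return float(np.array(self.coef) @ k)

          def update(self, s, r, s_next):
              delta = r + self.g * self.value(s_next) - self.value(s)  # TD error
              self.centers.append(np.asarray(s, float))
              self.coef.append(self.eta * delta)

      # Toy chain: states 0..4, reward 1 only on reaching state 4.
      td = KernelTD()
      rng = np.random.default_rng(8)
      for _ in range(300):
          s = int(rng.integers(0, 4))
          td.update(np.array([s / 4]), float(s + 1 == 4), np.array([(s + 1) / 4]))
      print([round(td.value(np.array([s / 4])), 2) for s in range(5)])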

  4. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
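
    The core construction can be sketched in a few lines of numpy: build the matching kernel in Fourier space with a Wiener-style regularisation term µ. This is a simplified stand-in; the authors' pypher package implements the full method:

      # Fourier-space PSF homogenisation kernel with Wiener-style regularisation:
      # K = F^-1[ F(psf_target) * conj(F(psf_source)) / (|F(psf_source)|^2 + mu) ]
      import numpy as np

      def matching_kernel(psf_source, psf_target, mu=1e-4):
          Fs = np.fft.fft2(np.fft.ifftshift(psf_source))
          Ft = np.fft.fft2(np.fft.ifftshift(psf_target))
          Fk = Ft * np.conj(Fs) / (np.abs(Fs) ** 2 + mu)
          return np.real(np.fft.fftshift(np.fft.ifft2(Fk)))

      def gaussian_psf(n, sigma):
          y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
          g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
          return g / g.sum()

      # Convolving the narrow PSF with k should reproduce the broad one.
      k = matching_kernel(gaussian_psf(65, 2.0), gaussian_psf(65, 4.0))
      print(k.sum())   # close to 1, i.e. flux-conserving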

  5. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel function that provides a quality gain of about 5% in comparison with the widely used kernel introduced by Floyd and Steinberg. Other kernels obtained allow the computational complexity of the halftoning process to be reduced significantly without loss of quality.
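
    For reference, the classical Floyd-Steinberg kernel that serves as the baseline in such studies diffuses the quantization error at each pixel to four unprocessed neighbours with weights 7/16, 3/16, 5/16 and 1/16:

      # Reference Floyd-Steinberg error diffusion on a greyscale image in [0, 1].
      import numpy as np

      def error_diffusion(img):
          f = img.astype(float).copy()
          h, w = f.shape
          out = np.zeros_like(f)
          for y in range(h):
              for x in range(w):
                  out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
                  err = f[y, x] - out[y, x]
                  if x + 1 < w:               f[y, x + 1]     += err * 7 / 16
                  if y + 1 < h and x > 0:     f[y + 1, x - 1] += err * 3 / 16
                  if y + 1 < h:               f[y + 1, x]     += err * 5 / 16
                  if y + 1 < h and x + 1 < w: f[y + 1, x + 1] += err * 1 / 16
          return out

      ramp = np.tile(np.linspace(0, 1, 64), (16, 1))   # grey ramp test image
      print(error_diffusion(ramp).mean())              # mean grey level preserved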

  6. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  7. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.

  8. Efficient χ² Kernel Linearization via Random Feature Maps.

    PubMed

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping could pose computational challenges in high-dimensional settings as it expands the original features to a higher dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of feature maps while preserving their approximation capability to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of the χ² kernel SVMs at almost no cost of testing accuracy.
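
    The linearize-then-train workflow can be illustrated with scikit-learn's AdditiveChi2Sampler, a deterministic explicit map for the additive χ² kernel; the paper's sparse random projection step is not part of scikit-learn and is not shown:

      # Linearizing the additive chi-squared kernel, then training a linear SVM.
      import numpy as np
      from sklearn.kernel_approximation import AdditiveChi2Sampler
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(9)
      # Chi-squared kernels expect non-negative features, e.g. histograms.
      X = rng.random((400, 50)); X /= X.sum(axis=1, keepdims=True)
      y = (X[:, :25].sum(axis=1) > 0.5).astype(int)

      clf = make_pipeline(AdditiveChi2Sampler(sample_steps=2),
                          LinearSVC(max_iter=10000))
      clf.fit(X[:300], y[:300])
      print("test accuracy:", clf.score(X[300:], y[300:]))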

  9. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    Data from the real world usually have a nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. Detecting this nonlinear geometric structure is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernel. Intuitively, a good geometry-aware kernel should utilize additional information beyond the geometric information. In many applications, kernel values for out-of-sample data must be computed directly. However, most geometry-aware kernel methods are restricted to the data given beforehand, with no straightforward extension to out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. We then show theoretically how the learned kernel matrices are extended to the corresponding kernel functions, so that out-of-sample data can be handled directly. Under our framework, a novel family of geometry-aware kernels is developed; in particular, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve performance.

  10. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    PubMed

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as standard support vector machine and support vector regression training take O(N³) time and O(N²) space in their naïve implementations, where N is the training set size. Applying them to large data sets is thus computationally infeasible, and a replacement for the naïve method of finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked to kernel density estimation (KDE), which can be efficiently implemented by approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion, so that FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance degrades only slightly. It has a time complexity of O(m³), where m is the number of data points sampled from the training set. Experiments on different benchmark data sets demonstrate that the proposed method has performance comparable with the state-of-the-art method and is effective for a wide range of kernel methods in achieving fast learning on large data sets.
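
    At the level of detail in the abstract, the sampling strategy can be caricatured in a few lines (our toy version with scikit-learn, not the authors' implementation): train the kernel machine on m ≪ N sampled points so the QP cost drops from O(N³) to O(m³).

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=1000, replace=False)  # m = 1000 samples

        svm = SVC(kernel="rbf", gamma="scale").fit(X[idx], y[idx])
        print("accuracy on the full set:", svm.score(X, y))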

  11. Wilson Dslash Kernel From Lattice QCD Optimization

    SciTech Connect

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work on optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we show, the techniques give excellent performance on the regular Xeon architecture as well.

  12. Bergman kernel and complex singularity exponent

    NASA Astrophysics Data System (ADS)

    Chen, Boyong; Lee, Hanjin

    2009-12-01

    We give a precise estimate of the Bergman kernel for the model domain defined by $\Omega_F=\{(z,w)\in \mathbb{C}^{n+1}:{\rm Im}\,w-|F(z)|^2>0\}$, where $F=(f_1,\dots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  13. Advanced Development of Certified OS Kernels

    DTIC Science & Technology

    2015-06-01

    This report describes work on building certified OS kernels using certified compilers and certified abstraction layers with deep specifications, developed with Coq and its Ltac libraries. Verifying a module should only need to be done once, to show that it implements its deep functional specification; global properties are then derived from the layered structure. A certified layer is a new language-based module construct that consists of a triple (L1, M, L2). Subject terms: Certified Software; Certified OS Kernels; Certified Compilers; Abstraction Layers; Modularity; Deep Specifications.

  14. The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz

    2016-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.

  15. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least-Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant savings in training cost can be attained, especially on big data sets. RKELM is established on the basis of a rigorous proof of universal learning involving the reduced kernel-based SLFN; in particular, we prove that RKELM can approximate any nonlinear function accurately provided there are sufficiently many support vectors. Experimental results on a wide variety of real-world applications of both small and large instance size, covering binary classification, multi-class problems, and regression, show that RKELM performs at a level of generalization competitive with SVM/LS-SVM at only a fraction of the computational effort.
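
    A minimal sketch of the idea as we read it (names and parameters ours): draw a random subset as mapping samples, form the rectangular kernel matrix, and solve one regularized least-squares problem, with no iterative QP.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        def rkelm_fit(X, y, m=100, lam=1e-3, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=m, replace=False)]
            K = rbf_kernel(X, centers)               # N x m, one shot, no QP
            beta = np.linalg.solve(K.T @ K + lam * np.eye(m), K.T @ y)
            return centers, beta

        def rkelm_predict(Xq, centers, beta):
            return rbf_kernel(Xq, centers) @ beta    # f(x) = k(x, centers) @ beta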

  16. Kernel Non-Rigid Structure from Motion

    PubMed Central

    Gotardo, Paulo F. U.; Martinez, Aleix M.

    2013-01-01

    Non-rigid structure from motion (NRSFM) is a difficult, underconstrained problem in computer vision. The standard approach in NRSFM constrains 3D shape deformation using a linear combination of K basis shapes; the solution is then obtained as the low-rank factorization of an input observation matrix. An important but overlooked problem with this approach is that non-linear deformations are often observed; these deformations lead to a weakened low-rank constraint due to the need to use additional basis shapes to linearly model points that move along curves. Here, we demonstrate how the kernel trick can be applied in standard NRSFM. As a result, we model complex, deformable 3D shapes as the outputs of a non-linear mapping whose inputs are points within a low-dimensional shape space. This approach is flexible and can use different kernels to build different non-linear models. Using the kernel trick, our model complements the low-rank constraint by capturing non-linear relationships in the shape coefficients of the linear model. The net effect can be seen as using non-linear dimensionality reduction to further compress the (shape) space of possible solutions. PMID:24002226

  17. Balancing continuous covariates based on Kernel densities.

    PubMed

    Ma, Zhenjun; Hu, Feifang

    2013-03-01

    The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme, but breaking continuous covariates into subcategories often changes their nature and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on kernel density estimation, which preserves the continuity of the covariates. Simulation studies show that the proposed kernel-minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group sizes well balanced. It is also shown that kernel-minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can generally be avoided through the implementation of the new method.
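
    Schematically (our sketch, not the authors' exact allocation rule), each incoming patient can be assigned to the arm that minimizes the distance between the two arms' covariate kernel density estimates:

        import numpy as np
        from scipy.stats import gaussian_kde

        def assign_arm(x_new, arms, grid):
            # arms: {0: [...], 1: [...]} observed covariate values per arm;
            # pick the arm whose hypothetical inclusion of x_new minimizes
            # the L1 distance between the two arms' KDEs.
            scores = {}
            for a in (0, 1):
                trial = {k: list(v) for k, v in arms.items()}
                trial[a].append(x_new)
                if min(len(trial[0]), len(trial[1])) < 2:
                    return a                      # need >= 2 points per KDE
                d0 = gaussian_kde(trial[0])(grid)
                d1 = gaussian_kde(trial[1])(grid)
                scores[a] = np.trapz(np.abs(d0 - d1), grid)
            return min(scores, key=scores.get)

        arms = {0: [1.2, 3.4, 2.2], 1: [2.0, 2.9]}
        grid = np.linspace(-2, 8, 200)
        arms[assign_arm(2.6, arms, grid)].append(2.6)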

  18. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) over an artificial dataset of simulated inflorescences with different types of flower distribution, each coded as a sequence of flower-number per node along a shoot. The ability to discriminate the different inflorescence types by SVM and KPCA is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a 98 F1 hybrid mapping population. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL that was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structure, graphs) phenotypic traits.
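
    The pipeline is easy to mimic on toy data (our sketch; the kernel, its parameters, and the coding of shoots are illustrative): each shoot is coded as a flower-count-per-node sequence and KPCA extracts continuous traits for QTL mapping.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        # Shoots coded as flower-number-per-node sequences, zero-padded
        # to a common length.
        shoots = np.array([[0, 1, 3, 5, 3, 1],
                           [0, 0, 1, 2, 1, 0],
                           [1, 4, 6, 4, 1, 0]], dtype=float)

        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1)
        traits = kpca.fit_transform(shoots)   # kernel principal components:
        print(traits)                         # candidate traits for QTL mapping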

  19. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
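
    The coordinate update can be sketched as an incremental Cholesky step (our reading, assuming a positive-definite, uncentered kernel matrix): old coordinates Y with Y Yᵀ = K stay fixed, and the new sample gets one extra orthogonal dimension.

        import numpy as np

        def npt_coordinates(K):
            # Explicit coordinates Y with Y @ Y.T == K (K positive definite).
            return np.linalg.cholesky(K)

        def inpt_add(Y, k_new, k_self):
            # k_new: kernel values between the new sample and the old data;
            # k_self: k(x_new, x_new). Old rows of Y are left untouched.
            v = np.linalg.solve(Y, k_new)          # match old inner products
            r = np.sqrt(max(k_self - v @ v, 0.0))  # orthogonal residual
            top = np.hstack([Y, np.zeros((len(Y), 1))])
            return np.vstack([top, np.concatenate([v, [r]])])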

  20. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  1. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  2. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. Many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernels.

  3. Geometric tree kernels: classification of COPD from airway tree geometry.

    PubMed

    Feragen, Aasa; Petersen, Jens; Grimm, Dominik; Dirksen, Asger; Pedersen, Jesper Holst; Borgwardt, Karsten; de Bruijne, Marleen

    2013-01-01

    Methodological contributions: This paper introduces a family of kernels for analyzing (anatomical) trees endowed with vector-valued measurements made along the tree. While state-of-the-art graph and tree kernels use combinatorial tree/graph structure with discrete node and edge labels, the kernels presented in this paper can include geometric information such as branch shape, branch radius or other vector-valued properties. In addition to being flexible in their ability to model different types of attributes, the presented kernels are computationally efficient, and some of them can easily be computed for large data sets (N ≈ 10,000) of trees with 30-600 branches. Combining the kernels with standard machine learning tools enables us to analyze the relation between disease and anatomical tree structure and geometry. Experimental results: The kernels are used to compare airway trees segmented from low-dose CT, endowed with branch shape descriptors and airway wall area percentage measurements made along the tree. Using kernelized hypothesis testing we show that the geometric airway trees are significantly differently distributed in patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy individuals. The geometric tree kernels also give a significant increase in the classification accuracy of COPD from geometric tree structure endowed with airway wall thickness measurements in comparison with state-of-the-art methods, giving further insight into the relationship between airway wall thickness and COPD. Software: Software for computing kernels and statistical tests is available at http://image.diku.dk/aasa/software.php.

  4. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
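
    As a concrete member of such a family (our reading and notation), the von Neumann kernel on a citation graph interpolates between co-citation relatedness and HITS-style importance as its diffusion parameter grows:

        import numpy as np

        def neumann_kernel(A, gamma):
            # A[i, j] = 1 if document i cites document j.
            # K = B @ inv(I - gamma*B), with B = A.T @ A (co-citation matrix);
            # needs gamma < 1 / spectral_radius(B) for the series to converge.
            B = A.T @ A
            return B @ np.linalg.inv(np.eye(len(B)) - gamma * B)

        A = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [0, 0, 0]], dtype=float)
        rho = max(abs(np.linalg.eigvals(A.T @ A)))
        print(neumann_kernel(A, 0.1 / rho))   # small gamma ~ relatedness
        print(neumann_kernel(A, 0.9 / rho))   # large gamma ~ importance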

  5. Model-based online learning with kernels.

    PubMed

    Li, Guoqi; Wen, Changyun; Li, Zheng Guo; Zhang, Aimin; Yang, Feng; Mao, Kezhi

    2013-03-01

    New optimization models and algorithms for online learning with kernels (OLK) in classification, regression, and novelty detection are proposed in a reproducing kernel Hilbert space. Unlike the stochastic gradient descent algorithm known as the naive online regularized risk minimization algorithm (NORMA), the OLK algorithms are obtained by solving a constrained optimization problem based on the proposed models. By exploiting the techniques of the Lagrange dual problem, as in Vapnik's support vector machine (SVM), the solution of the optimization problem can be obtained iteratively, and the iteration process is similar to that of NORMA. This further strengthens the foundation of OLK and enriches the research area of SVM. We also apply the OLK algorithms to problems in classification, regression, and novelty detection, including real-time background subtraction, to show their effectiveness. The experimental results for both classification and regression illustrate that the accuracy of the OLK algorithms is comparable with traditional SVM-based algorithms, such as SVM and least-squares SVM (LS-SVM), and with state-of-the-art algorithms such as the kernel recursive least squares (KRLS) method and the projectron method, while it is slightly higher than that of NORMA. On the other hand, the computational cost of the OLK algorithms is comparable with or slightly lower than that of existing online methods such as NORMA, KRLS, and the projectron, but much lower than that of SVM-based algorithms. In addition, unlike SVM and LS-SVM, OLK algorithms can be applied to non-stationary problems. The applicability of OLK to novelty detection is also illustrated by simulation results.

  6. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient: the training set typically does not include enough samples to capture the varieties of high-dimensional face images caused by illumination, facial expression, and posture. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. The virtual training set conveys some reasonable and possible variations of the original training samples; hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. Notably, any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples, since the noise can be approximately viewed as a reflection of the varieties of illumination, facial expression, and posture; imposing Gaussian noise (and other types of noise) on the original training samples is a simple and feasible way to obtain possible variations of those samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.

  7. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Corn hardness is an important property for dry- and wet-millers, food processors, and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...

  8. Neutron scattering kernel for solid deuterium

    NASA Astrophysics Data System (ADS)

    Granada, J. R.

    2009-06-01

    A new scattering kernel to describe the interaction of slow neutrons with solid deuterium was developed. The main characteristics of that system are contained in the formalism, including the lattice's density of states, the Young-Koppel quantum treatment of the rotations, and the internal molecular vibrations. The elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects. The results from the new model are compared with the best available experimental data, showing very good agreement.

  9. Oil point pressure of Indian almond kernels

    NASA Astrophysics Data System (ADS)

    Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.

    2012-07-01

    The effect of preprocessing conditions such as moisture content, heating temperature, heating time, and particle size on the oil point pressure of Indian almond kernels was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while oil point pressure decreased with increasing moisture content for fine particles.

  10. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V. )

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts: the first part is concerned with program behavior within a single chare instance, whereas the second part captures inter-chare interaction.

  11. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.

  12. Analysis of maize (Zea mays) kernel density and volume using micro-computed tomography and single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...

  13. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971

  14. Bergman kernel, balanced metrics and black holes

    NASA Astrophysics Data System (ADS)

    Klevtsov, Semyon

    In this thesis we explore the connections between the Kahler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kahler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kahler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, the balanced metrics and Kahler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.

  15. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  16. Generalized Langevin equation with tempered memory kernel

    NASA Astrophysics Data System (ADS)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

    We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that, in the presence of truncation, the particle turns from subdiffusive behavior in the short-time limit to normal diffusion in the long-time limit. The case of the harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By considering an external time-dependent periodic force we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle's movement; additionally, the double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter has a strong influence on the behavior of these quantities, and it is shown how the truncation parameter shifts the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. This kind of truncated Langevin equation can be highly relevant for describing the lateral diffusion of lipids and proteins in cell membranes.
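
    For orientation, the class of equations described has the schematic form below (standard GLE notation; our transcription, with prefactors omitted):

        m\,\dot{v}(t) = -\int_0^t \gamma(t-t')\, v(t')\, \mathrm{d}t' + \xi(t),
        \qquad
        \gamma(t) \propto e^{-b t}\, t^{-\alpha}, \quad 0 < \alpha < 1,\ b > 0,

    with ⟨ξ(t)ξ(t')⟩ ∝ γ(|t - t'|) by the fluctuation-dissipation relation. The tempering factor e^{-bt} cuts off the power-law memory, which is why subdiffusion at t ≪ 1/b crosses over to normal diffusion at t ≫ 1/b.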

  17. Transcriptome analysis of Ginkgo biloba kernels

    PubMed Central

    He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an

    2015-01-01

    Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08-Gb clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein database. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data can remarkably expand the existing transcriptome resources of Ginkgo, and provide a valuable platform to reveal more on developmental and metabolic mechanisms of this species. PMID:26500663

  18. Aligning Biomolecular Networks Using Modular Graph Kernels

    NASA Astrophysics Data System (ADS)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offer a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  19. Sugar uptake into kernels of tunicate tassel-seed maize

    SciTech Connect

    Thomas, P.A.; Felker, F.C.; Crawford, C.G. )

    1990-05-01

    A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel-base tissue damage than with kernels excised from the cob. (¹⁴C)Fructose incorporation into soluble and insoluble fractions of the endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.

  20. Integral Transform Methods: A Critical Review of Various Kernels

    NASA Astrophysics Data System (ADS)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.

  1. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  2. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  3. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel, derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities; the characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be carried out over the source wire. Since the vector bases for the current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  4. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  5. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  6. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  7. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  8. High speed sorting of Fusarium-damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, there is not yet available a cost effective method to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  9. End-use quality of soft kernel durum wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat is known for its very hard texture, which influences how it is milled and for what products it is well suited. We developed soft kernel durum wheat lines via Ph1b-mediated homoeologous recombination with Dr. Leonard Joppa...

  10. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  11. Parametric kernel-driven active contours for image segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Qiongzhi; Fang, Jiangxiong

    2012-10-01

    We investigate a parametric kernel-driven active contour (PKAC) model, which implicitly applies kernel mapping and a piecewise-constant model to the image data via a kernel function. The proposed model consists of a curve evolution functional with three terms: a global kernel-driven term and a local kernel-driven term, which evaluate the deviation of the mapped image data within each region from the piecewise-constant model, and a regularization term expressed as the length of the evolution curves. Through the local kernel-driven term, the proposed model can effectively segment images with intensity inhomogeneity by incorporating local image information. By balancing the weight between the global and local kernel-driven terms, the proposed model can segment images with either intensity homogeneity or intensity inhomogeneity. To ensure the smoothness of the level-set function and reduce the computational cost, a distance-regularizing term is applied to penalize the deviation of the level-set function and eliminate the requirement of re-initialization. Compared with the local image fitting model and the local binary fitting model, experimental results show the advantages of the proposed method in terms of computational efficiency and accuracy.

  12. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  13. Computing the roots of complex orthogonal and kernel polynomials

    SciTech Connect

    Saylor, P.E.; Smolarski, D.C.

    1988-01-01

    A method is presented to compute the roots of complex orthogonal and kernel polynomials. An important application of complex kernel polynomials is the acceleration of iterative methods for the solution of nonsymmetric linear equations. In the real case, the roots of orthogonal polynomials coincide with the eigenvalues of the Jacobi matrix, a symmetric tridiagonal matrix obtained from the defining three-term recurrence relationship for the orthogonal polynomials; in the real case, kernel polynomials are orthogonal. The Stieltjes procedure is an algorithm to compute the roots of orthogonal and kernel polynomials based on these facts. In the complex case, the Jacobi matrix generalizes to a Hessenberg matrix, the eigenvalues of which are roots of either orthogonal or kernel polynomials. The resulting algorithm generalizes the Stieltjes procedure. It may not be defined in the case of kernel polynomials, a consequence of the fact that they are orthogonal with respect to a nonpositive bilinear form. (Another consequence is that kernel polynomials need not be of exact degree.) A second algorithm that is always defined is presented for kernel polynomials. Numerical examples are described.
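
    A compact sketch of the complex-case machinery (our implementation, assuming no Arnoldi breakdown): build the Hessenberg matrix by the Arnoldi process; the eigenvalues of its leading k × k block are the roots (Ritz values) of the degree-k orthogonal polynomial.

        import numpy as np

        def orthogonal_poly_roots(A, b, k):
            # Arnoldi: A @ Q[:, :k] = Q[:, :k+1] @ H; eig(H[:k, :k]) gives
            # the roots of the degree-k orthogonal (residual) polynomial.
            n = len(b)
            Q = np.zeros((n, k + 1), dtype=complex)
            H = np.zeros((k + 1, k), dtype=complex)
            Q[:, 0] = b / np.linalg.norm(b)
            for j in range(k):
                w = A @ Q[:, j]
                for i in range(j + 1):
                    H[i, j] = np.vdot(Q[:, i], w)
                    w = w - H[i, j] * Q[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                Q[:, j + 1] = w / H[j + 1, j]
            return np.linalg.eigvals(H[:k, :k])

        A = np.array([[2, 1, 0], [0, 3, 1], [1, 0, 4]], dtype=complex)
        print(orthogonal_poly_roots(A, np.ones(3, dtype=complex), 2))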

  14. OSKI: A Library of Automatically Tuned Sparse Matrix Kernels

    SciTech Connect

    Vuduc, R; Demmel, J W; Yelick, K A

    2005-07-19

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.

  15. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to a theoretical damping kernel but not to a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  16. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction on a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region-of-interest quantification. Additionally, the kernel method is applied to a 3D patient data set, where it yields reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
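
    The core of the kernel method can be sketched in a few lines (our toy version: a generic system matrix, no randoms/scatter terms): the image is parameterized as x = Kα, with K built from anatomical-prior features, and ML-EM is run on the coefficients α.

        import numpy as np

        def kernel_em(y, A, K, n_iter=50):
            # y: measured counts; A: system matrix; K: kernel matrix built
            # from anatomical features (nonnegative, rows ~ normalized).
            alpha = np.ones(K.shape[1])
            sens = K.T @ (A.T @ np.ones(len(y)))          # sensitivity
            for _ in range(n_iter):
                proj = A @ (K @ alpha)
                ratio = y / np.maximum(proj, 1e-12)
                alpha *= (K.T @ (A.T @ ratio)) / np.maximum(sens, 1e-12)
            return K @ alpha                              # image x = K @ alpha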

  17. A novel extended kernel recursive least squares algorithm.

    PubMed

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.

  18. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties regionally according to the depicted organ structure, especially for the lung. We therefore compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our ethics committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with the hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures, rating the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel therefore cannot fully substitute for the standard CT reconstructions; hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  19. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, complex marker main effects and marker-specific interaction effects.
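
    A minimal sketch of the Gaussian-kernel ingredient (our parameterization; the bandwidths are illustrative): build a scale-free distance kernel from the marker matrix and, for kernel averaging, mix several bandwidths before plugging K into the usual mixed model.

        import numpy as np

        def averaged_gaussian_kernel(M, bandwidths=(0.2, 1.0, 5.0)):
            # M: n x p centered marker matrix.
            sq = ((M[:, None, :] - M[None, :, :]) ** 2).sum(-1)
            d2 = sq / np.median(sq[sq > 0])       # scale-free squared distances
            return sum(np.exp(-d2 / h) for h in bandwidths) / len(bandwidths)

        # K then replaces the linear GBLUP kernel M @ M.T / p, e.g. in
        # kernel ridge form: y_hat = K @ np.linalg.solve(K + lam * I, y).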

  20. A visualization tool for the kernel-driven model with improved ability in data analysis and kernel assessment

    NASA Astrophysics Data System (ADS)

    Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan

    2016-10-01

    The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used for many aspects of remote sensing. With the development of the kernel-driven model, there is a need to further assess the performance of newly developed kernels. The use of visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not contain a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. The visualization of inputted multi-angle measurements enhances understanding of multi-angle measurements and allows the choice of measurements with good representativeness. The visualization of modeling results facilitates the assessment of newly developed kernels. The study shows that the visualization tool MaKeMAT can promote the widespread application of the kernel-driven model.

  1. On the Kernelization Complexity of Colorful Motifs

    NASA Astrophysics Data System (ADS)

    Ambalath, Abhimanyu M.; Balasundaram, Radheshyam; Rao H., Chintan; Koppula, Venkata; Misra, Neeldhara; Philip, Geevarghese; Ramanujan, M. S.

    The Colorful Motif problem asks if, given a vertex-colored graph G, there exists a subset S of vertices of G such that the graph induced by G on S is connected and contains every color in the graph exactly once. The problem is motivated by applications in computational biology and is also well-studied from the theoretical point of view. In particular, it is known to be NP-complete even on trees of maximum degree three [Fellows et al, ICALP 2007]. In their pioneering paper that introduced the color-coding technique, Alon et al. [STOC 1995] show, inter alia, that the problem is FPT on general graphs. More recently, Cygan et al. [WG 2010] showed that Colorful Motif is NP-complete on comb graphs, a special subclass of the set of trees of maximum degree three. They also showed that the problem is not likely to admit polynomial kernels on forests.

  2. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are performed for particles having a bivariate normal distribution by assigning the calculations for equally spaced node points to individual scalar processors in the GPU. The numbers of particles, blocks, and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2, and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
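
    A CPU sketch of the node-wise computation the paper parallelizes on a GPU: a Gaussian kernel density estimate for 2-D particles, evaluated at equally spaced grid nodes (one node per GPU thread in the original). The particle count, grid size, and bandwidth are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    particles = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=2000)

    nx = ny = 50
    gx, gy = np.meshgrid(np.linspace(-4, 4, nx), np.linspace(-4, 4, ny))
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    h = 0.2  # bandwidth (assumed, not from the paper)

    # density at each node: mean of Gaussian kernels centered on the particles
    d2 = ((nodes[:, None, :] - particles[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * h * h)).mean(1) / (2 * np.pi * h * h)
    print("peak density:", density.reshape(ny, nx).max())
    ```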

  3. Privacy preserving RBF kernel support vector machine.

    PubMed

    Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they did not consider the characteristics of biomedical data or make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data.

  4. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
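
    The GRAPE model itself is not spelled out in the abstract; as a hedged stand-in for its similarity measure, the sketch below implements the standard geometric random-walk kernel between two adjacency matrices, summing over simultaneous walks on the product graph. The decay parameter lam and the toy graphs are assumptions (lam must keep the geometric series convergent).

    ```python
    import numpy as np

    def random_walk_kernel(A, B, lam=0.05):
        # product-graph adjacency: simultaneous walks on both graphs
        W = np.kron(A, B)
        n = W.shape[0]
        # sum over walks of all lengths: 1^T (I - lam*W)^{-1} 1
        x = np.linalg.solve(np.eye(n) - lam * W, np.ones(n))
        return x.sum()

    # toy local atomic environments: a triangle and a path graph
    A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
    B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    print(random_walk_kernel(A, A), random_walk_kernel(A, B))
    ```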

  5. The flare kernel in the impulsive phase

    NASA Technical Reports Server (NTRS)

    Dejager, C.

    1986-01-01

    The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec), and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by the escape of energetic electrons. The single flux tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.

  6. Labeled Graph Kernel for Behavior Analysis.

    PubMed

    Zhao, Ruiqi; Martinez, Aleix M

    2016-08-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.

  7. Labeled Graph Kernel for Behavior Analysis

    PubMed Central

    Zhao, Ruiqi; Martinez, Aleix M.

    2016-01-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data. PMID:26415154

  8. Spectrum-based kernel length estimation for Gaussian process classification.

    PubMed

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectrums are first calculated analytically from the kernel function itself using the autocorrelation theorem, as well as estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., setting the kernel function spectrum equal to the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and can suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.
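
    A toy 1-D illustration of the spectrum-matching idea only, not the authors' estimator: the analytic spectrum of a squared-exponential kernel, S(w) = l*sqrt(2*pi)*exp(-l^2 w^2 / 2), is compared against a periodogram of sampled data, and the length scale minimizing the mismatch is selected by grid search. All sizes and parameters are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, dx, ell_true = 512, 0.1, 0.8
    x = np.arange(n) * dx
    # sample a smooth signal whose correlation length is ell_true
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell_true**2))
    y = rng.multivariate_normal(np.zeros(n), K + 1e-8 * np.eye(n))

    freqs = np.fft.rfftfreq(n, d=dx) * 2 * np.pi   # angular frequencies
    data_spec = np.abs(np.fft.rfft(y)) ** 2
    data_spec /= data_spec.sum()

    def kernel_spec(ell):
        # analytic spectrum of the squared-exponential kernel, normalized
        s = ell * np.sqrt(2 * np.pi) * np.exp(-(ell * freqs) ** 2 / 2)
        return s / s.sum()

    grid = np.linspace(0.1, 2.0, 100)
    ell_hat = grid[np.argmin([((kernel_spec(l) - data_spec) ** 2).sum() for l in grid])]
    print("true:", ell_true, "estimated:", ell_hat)
    ```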

  9. Training Lp norm multiple kernel learning in the primal.

    PubMed

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by the alternating optimization method, where one alternately solves SVMs in the dual and updates kernel weights. Since dual and primal optimization achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity, finding that optimizing the empirical Rademacher complexity yields a particular type of kernel weights. Experiments on several datasets demonstrate the feasibility and effectiveness of the proposed method.
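
    A sketch of the alternating-optimization skeleton for Lp-norm MKL, using the closed-form weight update that is standard in the Lp-MKL literature: beta_m proportional to ||w_m||^(2/(p+1)), renormalized to unit Lp norm. Treating this as the paper's exact update, and the choice of base kernels, C, and p, are assumptions; the SVM subproblem is solved here with scikit-learn's dual solver rather than the paper's primal conjugate-gradient method.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])

    # two base kernels: linear and RBF (assumed choices for illustration)
    K_list = [X @ X.T, np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))]
    p = 2.0
    beta = np.ones(len(K_list)) / len(K_list) ** (1 / p)   # ||beta||_p = 1

    for it in range(10):
        K = sum(b * Km for b, Km in zip(beta, K_list))
        svm = SVC(kernel="precomputed", C=1.0).fit(K, y)
        a = np.zeros(len(y))
        a[svm.support_] = svm.dual_coef_.ravel()           # alpha_i * y_i
        # ||w_m||^2 = beta_m^2 * a^T K_m a in the combined-kernel parametrization
        norms = np.array([(b**2 * a) @ Km @ a for b, Km in zip(beta, K_list)])
        beta = norms ** (1 / (p + 1))
        beta /= (beta**p).sum() ** (1 / p)                 # renormalize
    print("kernel weights:", beta)
    ```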

  10. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.

  11. Relaxation and diffusion models with non-singular kernels

    NASA Astrophysics Data System (ADS)

    Sun, HongGuang; Hao, Xiaoxiao; Zhang, Yong; Baleanu, Dumitru

    2017-02-01

    Anomalous relaxation and diffusion processes have been widely quantified by fractional derivative models, where the definition of the fractional-order derivative remains a historical debate due to its limitation in describing different kinds of non-exponential decays (e.g. stretched exponential decay). Meanwhile, many efforts by mathematicians and engineers have been made to overcome the singularity of power function kernel in its definition. This study first explores physical properties of relaxation and diffusion models where the temporal derivative was defined recently using an exponential kernel. Analytical analysis shows that the Caputo type derivative model with an exponential kernel cannot characterize non-exponential dynamics well-documented in anomalous relaxation and diffusion. A legitimate extension of the previous derivative is then proposed by replacing the exponential kernel with a stretched exponential kernel. Numerical tests show that the Caputo type derivative model with the stretched exponential kernel can describe a much wider range of anomalous diffusion than the exponential kernel, implying the potential applicability of the new derivative in quantifying real-world, anomalous relaxation and diffusion processes.

  12. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    PubMed

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) using alternating optimization between standard SVM solvers on the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either by linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee that they generalize to the test data, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  13. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
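
    A sketch of a fixed-kernel 95% home-range estimate: scipy's gaussian_kde (Silverman bandwidth here; the paper recommends least-squares cross-validation, which scipy does not provide) is evaluated on a grid, and the smallest density threshold enclosing 95% of the mass defines the home-range area. The animal locations and grid are synthetic assumptions.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(5)
    locs = rng.multivariate_normal([0, 0], [[4, 1], [1, 2]], size=50).T  # 50 fixes

    kde = gaussian_kde(locs, bw_method="silverman")
    xs, ys = np.meshgrid(np.linspace(-8, 8, 200), np.linspace(-8, 8, 200))
    dens = kde(np.vstack([xs.ravel(), ys.ravel()]))

    cell = (16 / 199) ** 2                       # grid-cell area
    order = np.sort(dens)[::-1]                  # densities, highest first
    mass = np.cumsum(order) * cell               # cumulative probability mass
    thresh = order[np.searchsorted(mass, 0.95)]  # 95% isopleth density level
    area95 = (dens >= thresh).sum() * cell
    print("95% home-range area:", area95)
    ```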

  14. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses a little power compared to the optimal kernel for a particular scenario, but has much greater power than poor kernel choices.

  15. Bilinear analysis for kernel selection and nonlinear feature extraction.

    PubMed

    Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou

    2007-09-01

    This paper presents a unified criterion, Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces, and then, fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm Fisher + kernel analysis (FKA), which utilizes the bilinear analysis, to optimize the new criterion. This FKA algorithm can alleviate the ill-posed problem existed in traditional kernel discriminant analysis (KDA), and usually, has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases.

  16. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)

  17. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
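
    A sketch of the classification pipeline described above: 17-dimensional feature vectors (7 color + 10 morphological) reduced by linear discriminant analysis and fed to a small backpropagation network, with the stated 70/20/10 split. The feature values are synthetic placeholders; the network size is an assumption standing in for the paper's four-layer architecture.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    n_class, per_class = 7, 400                  # 7 grain varieties, 2800 images
    X = np.vstack([rng.normal(c, 1.0, size=(per_class, 17)) for c in range(n_class)])
    y = np.repeat(np.arange(n_class), per_class)

    # 70% train, 20% validation, 10% test
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, train_size=0.7, stratify=y)
    X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=1/3,
                                                stratify=y_tmp)

    lda = LinearDiscriminantAnalysis(n_components=n_class - 1).fit(X_tr, y_tr)
    # two hidden layers, i.e., four layers counting input and output
    net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000)
    net.fit(lda.transform(X_tr), y_tr)
    print("test accuracy:", net.score(lda.transform(X_te), y_te))
    ```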

  18. Nonlinear hyperspectral unmixing based on constrained multiple kernel NMF

    NASA Astrophysics Data System (ADS)

    Cui, Jiantao; Li, Xiaorun; Zhao, Liaoying

    2014-05-01

    Nonlinear spectral unmixing constitutes an important field of research for hyperspectral imagery. An unsupervised nonlinear spectral unmixing algorithm, namely multiple kernel constrained nonnegative matrix factorization (MKCNMF) is proposed by coupling multiple-kernel selection with kernel NMF. Additionally, a minimum endmemberwise distance constraint and an abundance smoothness constraint are introduced to alleviate the uniqueness problem of NMF in the algorithm. In the MKCNMF, two problems of optimizing matrices and selecting the proper kernel are jointly solved. The performance of the proposed unmixing algorithm is evaluated via experiments based on synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed method outperforms some existing unmixing algorithms in terms of spectral angle distance (SAD) and abundance fractions.

  19. Hash subgraph pairwise kernel for protein-protein interaction extraction.

    PubMed

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Li, Yanpeng

    2012-01-01

    Extracting protein-protein interaction (PPI) from biomedical literature is an important task in biomedical text mining (BioTM). In this paper, we propose a hash subgraph pairwise (HSP) kernel-based approach for this task. The key to the novel kernel is to use hierarchical hash labels to express the structural information of subgraphs in linear time. We apply the graph kernel to dependency graphs representing sentence structure for the protein-protein interaction extraction task, which can efficiently make use of full graph structural information and, in particular, capture the contiguous topological and label information ignored by previous approaches. We evaluate the proposed approach on five publicly available PPI corpora. The experimental results show that our approach significantly outperforms the all-path kernel approach on all five corpora and achieves state-of-the-art performance.

  20. On the asymptotic expansion of the Bergman kernel

    NASA Astrophysics Data System (ADS)

    Seto, Shoo

    Let (L, h) → (M, ω) be a polarized Kähler manifold. We define the Bergman kernel for H0(M, Lk), the space of holomorphic sections of high tensor powers of the line bundle L. In this thesis, we study the asymptotic expansion of the Bergman kernel in the on-diagonal, near-diagonal, and far off-diagonal regimes, using L2 estimates to show the existence of the asymptotic expansion and to compute the coefficients in the on- and near-diagonal cases, and a heat kernel approach to show the exponential decay of the off-diagonal Bergman kernel for noncompact manifolds, assuming only a lower bound on Ricci curvature and C2 regularity of the metric.

  1. Kernel-based Linux emulation for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  2. Resummed memory kernels in generalized system-bath master equations.

    PubMed

    Mavros, Michael G; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
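
    A generic illustration of why resummation matters, not the paper's spin-boson kernels: a fourth-order Taylor series for exp(-t) degrades badly away from the origin, while its [2/2] Pade approximant (via scipy.interpolate.pade) stays close to the exact decay. The function and evaluation point are assumptions chosen purely to demonstrate the resummation step.

    ```python
    import numpy as np
    from scipy.interpolate import pade

    coeffs = [1.0, -1.0, 0.5, -1.0 / 6.0, 1.0 / 24.0]  # Taylor coefficients of exp(-t)
    p, q = pade(coeffs, 2)                             # [2/2] Pade approximant

    t = 3.0
    taylor = sum(c * t**k for k, c in enumerate(coeffs))
    print("exact:          ", np.exp(-t))              # ~0.0498
    print("truncated series:", taylor)                 # ~1.375, badly off
    print("Pade resummation:", p(t) / q(t))            # ~0.077, far closer
    ```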

  3. Landslide: Systematic Dynamic Race Detection in Kernel Space

    DTIC Science & Technology

    2012-05-01

    We discuss the general challenges of kernel-level concurrency and evaluate Landslide's effectiveness and usability as a debugging aid. We show that our techniques make systematic testing in kernel space feasible and that Landslide is a useful tool.

  4. Nonlinear stochastic system identification of skin using volterra kernels.

    PubMed

    Chen, Yi; Hunter, Ian W

    2013-04-01

    Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criteria (AICc) indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy.
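
    A sketch of first- plus second-order Volterra kernel identification by ordinary least squares on discrete input/output data, in the spirit of the fast least-squares procedure mentioned above. The memory length M, the toy system, and the white (rather than non-white) input are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N, M = 2000, 8
    u = rng.normal(size=N)

    # toy nonlinear system: linear filter plus a quadratic term
    y = np.convolve(u, np.exp(-np.arange(M) / 2), mode="full")[:N]
    y = y + 0.3 * y**2 + 0.01 * rng.normal(size=N)

    # regressor matrix: lagged inputs (first kernel) and their
    # upper-triangular pairwise products (second kernel)
    lags = np.column_stack([np.r_[np.zeros(i), u[: N - i]] for i in range(M)])
    quad = np.column_stack([lags[:, i] * lags[:, j]
                            for i in range(M) for j in range(i, M)])
    Phi = np.hstack([np.ones((N, 1)), lags, quad])

    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    y_hat = Phi @ theta
    vaf = 100 * (1 - np.var(y - y_hat) / np.var(y))    # variance accounted for
    print(f"VAF: {vaf:.1f}%")
    ```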

  5. The Weighted Super Bergman Kernels Over the Supermatrix Spaces

    NASA Astrophysics Data System (ADS)

    Feng, Zhiming

    2015-12-01

    The purpose of this paper is threefold. Firstly, using Howe duality for , we obtain integral formulas of the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions of the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations of integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.

  6. Kernel generalized neighbor discriminant embedding for SAR automatic target recognition

    NASA Astrophysics Data System (ADS)

    Huang, Yulin; Pei, Jifang; Yang, Jianyu; Wang, Tao; Yang, Haiguang; Wang, Bing

    2014-12-01

    In this paper, we propose a new supervised feature extraction algorithm in synthetic aperture radar automatic target recognition (SAR ATR), called generalized neighbor discriminant embedding (GNDE). Based on manifold learning, GNDE integrates class and neighborhood information to enhance discriminative power of extracted feature. Besides, the kernelized counterpart of this algorithm is also proposed, called kernel-GNDE (KGNDE). The experiment in this paper shows that the proposed algorithms have better recognition performance than PCA and KPCA.

  7. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    DTIC Science & Technology

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel framework for modeling sparse events (and, subsequently, their likelihood of occurrence) based on historical evidence of the counts of previous event occurrences.

  8. Resummed memory kernels in generalized system-bath master equations

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  9. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  10. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before the evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning in the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has the properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory (B3LYP/3-21G*), implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome, namely (i) energetic structural stability and (ii) energetically stable attachment to substrates, are both well satisfied.

  11. Enhanced FMAM based on empirical kernel map.

    PubMed

    Wang, Min; Chen, Songcan

    2005-05-01

    The existing morphological auto-associative memory models based on morphological operations, typically the morphological auto-associative memories (auto-MAM) proposed by Ritter et al. and our fuzzy morphological auto-associative memories (auto-FMAM), have many attractive advantages such as unlimited storage capacity, one-shot recall speed, and good tolerance to single erosive or dilative noise. However, they suffer from extreme vulnerability to noise that mixes erosion and dilation, resulting in great degradation in recall performance. To overcome this shortcoming, we focus on FMAM and propose an enhanced FMAM (EFMAM) based on the empirical kernel map. Although simple, EFMAM significantly improves on auto-FMAM with respect to recognition accuracy under hybrid noise and computational effort. Experiments conducted on the thumbnail-sized faces (28 x 23 and 14 x 11) scaled from the ORL database show average accuracies of 92%, 90%, and 88% with 40 classes under 10%, 20%, and 30% randomly generated hybrid noise, respectively, which are far higher than those of auto-FMAM (67%, 46%, 31%) under the same noise levels.
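
    A minimal sketch of the empirical kernel map itself: each pattern is re-represented by its kernel evaluations against the training set, the device EFMAM applies before the morphological memory stage (which is not reproduced here). The RBF kernel and gamma are assumed choices.

    ```python
    import numpy as np

    def empirical_kernel_map(X, Z, gamma=0.5):
        # phi(z) = (k(x_1, z), ..., k(x_n, z)) with an RBF kernel (assumed choice)
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2).T           # one row per pattern in Z

    X_train = np.random.default_rng(8).normal(size=(40, 10))
    phi = empirical_kernel_map(X_train, X_train)
    print(phi.shape)                           # (40, 40): n-dimensional features
    ```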

  12. Generalized Bergman kernels and geometric quantization

    NASA Astrophysics Data System (ADS)

    Tuynman, G. M.

    1987-03-01

    In geometric quantization it is well known that, if f is an observable and F a polarization on a symplectic manifold (M,ω), then the condition "Xf leaves F invariant" (where Xf denotes the Hamiltonian vector field associated to f) is sufficient to guarantee that one does not have to compute the BKS kernel explicitly in order to know the corresponding quantum operator. It is shown in this paper that this condition on f can be weakened to "Xf leaves F+F° invariant," and the corresponding quantum operator is then given implicitly by formula (4.8); in particular, when F is a (positive) Kähler polarization, all observables can be quantized "directly" and, moreover, an "explicit" formula for the corresponding quantum operator is derived (Theorem 5.8). Applying this to the phase space R2n one obtains a quantization prescription which resembles the normal ordering of operators in quantum field theory. When we translate this prescription to the usual position representation of quantum mechanics, the result is (among other things) that the operator associated to a classical potential is multiplication by a function which is essentially the convolution of the potential function with a Gaussian function of width ℏ, instead of multiplication by the potential itself.

  13. Local Kernel for Brains Classification in Schizophrenia

    NASA Astrophysics Data System (ADS)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Matching is then obtained by introducing a local kernel for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROI) were manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising: a successful classification rate of up to 75% has been obtained with this technique, improving to 85% when the subjects are stratified by sex.

  14. The Dynamic Kernel Scheduler-Part 1

    NASA Astrophysics Data System (ADS)

    Adelmann, Andreas; Locans, Uldis; Suter, Andreas

    2016-10-01

    Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However developing software that uses these hardware accelerators introduces additional challenges for the developer. These challenges may include exposing increased parallelism, handling different hardware designs, and using multiple development frameworks in order to utilise devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL, and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated into OPAL (Object-oriented Parallel Accelerator Library) in order to speed up a parallel FFT based Poisson solver and Monte Carlo simulations for particle-matter interaction used for proton therapy degrader modelling. DKS was also used together with Minuit2 for parameter fitting, where χ2 and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of the DKS, first results, and plans for the future will be shown in this paper.

  15. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization and describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  16. Searching for efficient Markov chain Monte Carlo proposal kernels.

    PubMed

    Yang, Ziheng; Rodríguez, Carlos E

    2013-11-26

    Markov chain Monte Carlo (MCMC) or the Metropolis-Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis-Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals.
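
    A sketch of a Metropolis sampler with a Bactrian-style proposal: a two-humped mixture of normals centered at x ± m·s with standard deviation s·sqrt(1 - m²), which avoids proposing values very close to the current state. Treating this as the paper's exact parametrization is an assumption; the proposal is symmetric, so the acceptance ratio needs no Hastings correction.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def bactrian_step(x, s=1.0, m=0.95):
        # pick one of the two humps, then add Gaussian noise of reduced width
        sign = rng.choice([-1.0, 1.0])
        return x + sign * m * s + rng.normal(0.0, s * np.sqrt(1 - m * m))

    log_target = lambda x: -0.5 * x * x        # standard normal target

    x, chain = 0.0, []
    for _ in range(50_000):
        prop = bactrian_step(x)
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    print(np.mean(chain), np.var(chain))       # ~0, ~1
    ```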

  17. Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.

    PubMed

    Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K

    2016-03-01

    Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
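
    A sketch of learning from a kernelized dictionary: each input is represented by the dictionary features k(x_i, ·) and an elastic net is fit on the coefficients, mirroring the KENReg construction in which the kernel need not be a Mercer kernel. The RBF dictionary, penalties, and data are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(10)
    X = rng.uniform(-3, 3, size=(150, 1))
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=150)

    # kernelized dictionary: column i holds k(x_i, .) evaluated on the sample
    K = np.exp(-0.5 * (X - X.T) ** 2)
    model = ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=10_000).fit(K, y)
    print("nonzero coefficients:", np.sum(model.coef_ != 0), "of", K.shape[1])
    ```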

  18. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly dimensional feature space. we describe a new method called Mixture Density Mercer Kernels to learn kernel function directly from data, rather than using pre-defined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  19. Multiple Kernel Learning for Visual Object Recognition: A Review.

    PubMed

    Bucak, Serhat S; Rong Jin; Jain, Anil K

    2014-07-01

    Multiple kernel learning (MKL) is a principled approach for selecting and combining kernels for a given recognition task. A number of studies have shown that MKL is a useful tool for object recognition, where each image is represented by multiple sets of features and MKL is applied to combine different feature sets. We review the state-of-the-art for MKL, including different formulations and algorithms for solving the related optimization problems, with the focus on their applications to object recognition. One dilemma faced by practitioners interested in using MKL for object recognition is that different studies often provide conflicting results about the effectiveness and efficiency of MKL. To resolve this, we conduct extensive experiments on standard datasets to evaluate various approaches to MKL for object recognition. We argue that the seemingly contradictory conclusions offered by studies are due to different experimental setups. The conclusions of our study are: (i) given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination (e.g., choosing the best performing kernel or average of kernels); and (ii) among the various approaches proposed for MKL, the sequential minimal optimization, semi-infinite programming, and level method based ones are computationally most efficient.

  20. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
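
    The paper's own extension works by regression in a hyper-RKHS; as a simpler, plainly substituted stand-in, the sketch below shows a related regression recipe: a new point's row of a learned kernel matrix is predicted by ridge regression from a base kernel between the new point and the training set. The base kernel, the "learned" matrix, and the ridge parameter are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    X = rng.normal(size=(60, 4))
    x_new = rng.normal(size=(1, 4))

    def rbf(A, B, g=0.5):
        return np.exp(-g * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

    K_learn = rbf(X, X) ** 2         # stand-in for a learned nonparametric kernel
    K_base = rbf(X, X)

    # ridge regression from base-kernel rows to learned-kernel rows
    W = np.linalg.solve(K_base + 1e-3 * np.eye(60), K_learn)
    k_new = rbf(x_new, X) @ W        # extended kernel row for the new point
    print(k_new.shape)               # (1, 60)
    ```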

  1. A Gabor-block-based kernel discriminative common vector approach using cosine kernels for human face recognition.

    PubMed

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include the cosine kernel function in the discriminating method. The KDCV with the cosine kernels is then applied on the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and the FERET database demonstrate the effectiveness of this new approach.

  2. Cold-moderator scattering kernel methods

    SciTech Connect

    MacFarlane, R. E.

    1998-01-01

    An accurate representation of the scattering of neutrons by the materials used to build cold sources at neutron scattering facilities is important for the initial design and optimization of a cold source, and for the analysis of experimental results obtained using the cold source. In practice, this requires a good representation of the physics of scattering from the material, a method to convert this into observable quantities (such as scattering cross sections), and a method to use the results in a neutron transport code (such as the MCNP Monte Carlo code). At Los Alamos, the authors have been developing these capabilities over the last ten years. The final set of cold-moderator evaluations, together with evaluations for conventional moderator materials, was released in 1994. These materials have been processed into MCNP data files using the NJOY Nuclear Data Processing System. Over the course of this work, they were able to develop a new module for NJOY called LEAPR based on the LEAP + ADDELT code from the UK as modified by D.J. Picton for cold-moderator calculations. Much of the physics for methane came from Picton's work. The liquid hydrogen work was originally based on a code using the Young-Koppel approach that went through a number of hands in Europe (including Rolf Neef and Guy Robert). It was generalized and extended for LEAPR, and depends strongly on work by Keinert and Sax of the University of Stuttgart. Thus, their collection of cold-moderator scattering kernels is truly an international effort, and they are glad to be able to return the enhanced evaluations and processing techniques to the international community. In this paper, they give sections on the major cold moderator materials (namely, solid methane, liquid methane, and liquid hydrogen) using each section to introduce the relevant physics for that material and to show typical results.

  3. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS can be produced during both the enzymatic and the non-enzymatic browning process. Rhein and emodin were the main components of AQS in the browned kernels.

  4. Mean kernels to improve gravimetric geoid determination based on modified Stokes's integration

    NASA Astrophysics Data System (ADS)

    Hirt, C.

    2011-11-01

    Gravimetric geoid computation is often based on modified Stokes's integration, where Stokes's integral is evaluated with some stochastic or deterministic kernel modification. Accurate numerical evaluation of Stokes's integral requires the modified kernel to be integrated across the area of each discretised grid cell (mean kernel). Evaluating the modified kernel at the center of the cell (point kernel) is an approximation, which may result in larger numerical integration errors near the computation point, where the modified kernel exhibits a strongly nonlinear behavior. The present study deals with the computation of whole-of-the-cell mean values of modified kernels, exemplified here with the Featherstone-Evans-Olliver (1998) kernel modification [Featherstone, W.E., Evans, J.D., Olliver, J.G., 1998. A Meissl-modified Vaníček and Kleusberg kernel to reduce the truncation error in gravimetric geoid computations. Journal of Geodesy 72(3), 154-160]. We investigate two approaches (analytical and numerical integration), which are capable of providing accurate mean kernels. The analytical integration approach is based on kernel weighting factors which are used for the conversion of point to mean kernels. For the efficient numerical integration, Gauss-Legendre quadrature is applied. The comparison of mean kernels from both approaches shows a satisfactory mutual agreement at the level of 10^-4 and better, which is considered to be sufficient for practical geoid computation requirements. Closed-loop tests based on the EGM2008 geopotential model demonstrate that using mean instead of point kernels reduces numerical integration errors by ~65%. The use of mean kernels is recommended in remove-compute-restore geoid determination with the Featherstone-Evans-Olliver (1998) kernel or any other kernel modification under the condition that the kernel changes rapidly across the cells in the neighborhood of the computation point.
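
    A schematic of the numerical-integration approach, assuming a hypothetical sharply peaked kernel in place of the Featherstone-Evans-Olliver modification: 2-D Gauss-Legendre quadrature gives the whole-of-the-cell mean, which can differ noticeably from the point (cell-centre) value near the computation point.

```python
# Point vs. mean kernel value over one grid cell via Gauss-Legendre quadrature.
import numpy as np

def kernel(phi, lam):
    # hypothetical stand-in kernel, sharply peaked like a modified Stokes kernel
    return 1.0 / np.hypot(phi, lam)

def mean_kernel(phi0, phi1, lam0, lam1, n=8):
    x, w = np.polynomial.legendre.leggauss(n)
    # map nodes from [-1, 1] to the cell in each direction
    phi = 0.5 * (phi1 - phi0) * x + 0.5 * (phi1 + phi0)
    lam = 0.5 * (lam1 - lam0) * x + 0.5 * (lam1 + lam0)
    P, L = np.meshgrid(phi, lam, indexing="ij")
    # weights on [-1, 1]^2 sum to 4, so dividing by 4 yields the cell mean
    return np.sum(np.outer(w, w) * kernel(P, L)) / 4.0

phi0, phi1, lam0, lam1 = 0.01, 0.02, 0.01, 0.02  # a cell near the computation point
print(kernel(0.015, 0.015))                      # point (cell-centre) kernel
print(mean_kernel(phi0, phi1, lam0, lam1))       # whole-of-the-cell mean kernel
```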

  5. A one-class kernel fisher criterion for outlier detection.

    PubMed

    Dufrenois, Franck

    2015-05-01

    Recently, Dufrenois and Noyer proposed a one-class Fisher's linear discriminant to isolate normal data from outliers. In this paper, a kernelized version of their criterion is presented. Their criterion was originally optimized by an iterative process alternating between subspace selection and clustering; I show here that it admits an upper bound which makes these two problems independent. In particular, the estimation of the label vector is formulated as an unconstrained binary linear problem (UBLP) which can be solved using an iterative perturbation method. Once the label vector is estimated, an optimal projection subspace is obtained by solving a generalized eigenvalue problem. Like many other kernel methods, the performance of the proposed approach depends on the choice of the kernel. Constructed with a Gaussian kernel, the proposed contrast measure is shown to be an efficient indicator for selecting an optimal kernel width. This property simplifies the model selection problem, which is typically solved by costly (generalized) cross-validation procedures. Initialization, convergence analysis, and computational complexity are also discussed. Lastly, the proposed algorithm is compared with recent novelty detectors on synthetic and real data sets.

  6. On flame kernel formation and propagation in premixed gases

    SciTech Connect

    Eisazadeh-Far, Kian; Metghalchi, Hameed; Parsinejad, Farzan; Keck, James C.

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail. The experiments have been extended to flame kernel formation and propagation of methane/air mixtures. The effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The resulting laminar burning speeds have been compared with previously published results and are in good agreement. (author)

  7. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    NASA Astrophysics Data System (ADS)

    Frontiere, Nicholas; Raskin, Cody D.; Owen, J. Michael

    2017-03-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that utilizes a first-order consistent reproducing kernel, a smoothing function that exactly interpolates linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact the RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, linear momentum, and energy are all rigorously conserved without any assumption about kernel symmetries, while additionally maintaining approximate angular momentum conservation. Our approach starts from a rigorously consistent interpolation theory, where we derive the evolution equations to enforce the appropriate conservation properties, at the sacrifice of full consistency in the momentum equation. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods (such as preserving Galilean invariance and manifest conservation of mass, momentum, and energy) while improving on many of the shortcomings of SPH, particularly the overly aggressive artificial viscosity and zeroth-order inaccuracy. We compare CRKSPH to two different modern SPH formulations (pressure based SPH and compatibly differenced SPH), demonstrating the advantages of our new formulation when modeling fluid mixing, strong shock, and adiabatic phenomena.
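
    A one-dimensional sketch of the first-order reproducing kernel correction that CRKSPH builds on (a toy illustration, not the CRKSPH scheme itself): the base kernel is multiplied by a linear polynomial whose coefficients are chosen per evaluation point so that constant and linear fields are reproduced exactly, even on irregular particle distributions.

```python
# First-order reproducing kernel (RK) interpolation in 1-D.
import numpy as np

rng = np.random.default_rng(1)
xi = np.sort(rng.uniform(0.0, 1.0, 200))       # irregular particle positions
Vi = np.gradient(xi)                           # crude particle volumes
f = 2.0 * xi + 1.0                             # a linear test field

h = 0.05
W = lambda r: np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))  # base kernel

def rk_interpolate(x):
    w = W(x - xi)
    m0 = np.sum(Vi * w)                        # kernel moments at x
    m1 = np.sum(Vi * (x - xi) * w)
    m2 = np.sum(Vi * (x - xi) ** 2 * w)
    det = m0 * m2 - m1 ** 2
    A, B = m2 / det, -m1 / det                 # enforce 0th and 1st moment conditions
    wR = (A + B * (x - xi)) * w                # corrected (reproducing) kernel
    return np.sum(Vi * f * wR)

x = 0.5
# zeroth-order (Shepard-normalized) SPH estimate, for comparison
sph = np.sum(Vi * f * W(x - xi)) / np.sum(Vi * W(x - xi))
print(rk_interpolate(x), sph, 2 * x + 1)       # RK reproduces the linear field exactly
```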

  8. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    PubMed

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function.

  9. Coupled kernel embedding for low resolution face image recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Yan, Hong

    2012-08-01

    Practical video scene and face recognition systems are sometimes confronted with low-resolution (LR) images. The faces may be very small even if the video is clear, so it is difficult to directly measure the similarity between the faces and the high-resolution (HR) training samples. Traditional face recognition methods based on super-resolution (SR) usually have limited performance because the target of SR may not be consistent with that of classification, and time-consuming SR algorithms are not suitable for real-time applications. In this paper, a new feature extraction method called Coupled Kernel Embedding (CKE) is proposed for LR face recognition without any SR preprocessing. In this method, the final kernel matrix is constructed by concatenating two individual kernel matrices in the diagonal direction, and positive (semi-)definiteness is preserved for optimization. CKE addresses the problem of comparing multi-modal data, which is difficult for conventional methods in practice due to the lack of an efficient similarity measure. In particular, different kernel types (e.g., linear, Gaussian, polynomial) can be integrated into a unified optimization objective, which cannot be achieved by simple linear methods. CKE solves this problem by minimizing the dissimilarities captured by the kernel Gram matrices of the low- and high-resolution spaces. In the implementation, the nonlinear objective function is minimized by a generalized eigenvalue decomposition. Experiments on benchmark and real databases show that our CKE method indeed improves the recognition performance.
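
    A minimal sketch of the diagonal concatenation step described above (the surrounding CKE optimization is omitted): two Gram matrices, possibly built from different kernel types, are stacked along the diagonal, which preserves positive semi-definiteness. The feature dimensions below are placeholders.

```python
# Block-diagonal concatenation of two kernel Gram matrices.
import numpy as np
from scipy.linalg import block_diag
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

rng = np.random.default_rng(0)
X_lr = rng.normal(size=(50, 64))                # hypothetical LR features
X_hr = rng.normal(size=(50, 1024))              # hypothetical HR features

K_lr = rbf_kernel(X_lr, X_lr, gamma=1e-2)       # different kernel types can
K_hr = polynomial_kernel(X_hr, X_hr, degree=2)  # be mixed across modalities
K = block_diag(K_lr, K_hr)                      # (100, 100) combined kernel

print(np.all(np.linalg.eigvalsh(K) > -1e-8))    # PSD is preserved
```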

  10. Optimizing spatial filters with kernel methods for BCI applications

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Tang, Jianjun; Yao, Li

    2007-11-01

    Brain Computer Interface (BCI) is a communication or control system in which the user's messages or commands do not depend on the brain's normal output channels. The key step of BCI technology is to find a reliable method to detect particular brain signals, such as the alpha, beta and mu components in EEG/ECoG trials, and then translate them into usable control signals. In this paper, our objective is to introduce a novel approach that is able to extract discriminative patterns from non-stationary EEG signals based on common spatial patterns (CSP) analysis combined with kernel methods. The basic idea of our Kernel CSP method is to perform a nonlinear form of CSP by means of kernel methods that can efficiently compute the common and distinct components in high dimensional feature spaces related to the input space by some nonlinear map. The algorithm described here is tested off-line with dataset I from the BCI Competition 2005. Our experiments show that the spatial filters employed with Kernel CSP can effectively extract discriminatory information from single-trial ECoG recorded during imagined movements. The high linear discrimination rates and the computational simplicity of the "kernel trick" make it a promising method for BCI systems.
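
    For orientation, the sketch below implements ordinary linear CSP, the method the paper kernelizes (the kernel variant itself is not reproduced here): the spatial filters are the extreme generalized eigenvectors of the two class-conditional covariance matrices. The EEG trials are synthetic placeholders.

```python
# Linear common spatial patterns (CSP) via a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=3):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov = lambda T: np.mean([X @ X.T / np.trace(X @ X.T) for X in T], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # solve Ca w = lambda (Ca + Cb) w; extreme eigenvalues give the filters
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, picks]                       # columns are spatial filters

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 16, 250))              # hypothetical class-1 trials
B = rng.normal(size=(30, 16, 250))              # hypothetical class-2 trials
print(csp_filters(A, B).shape)                  # (16, 6)
```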

  11. Travel-time sensitivity kernels in long-range propagation.

    PubMed

    Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A

    2009-11-01

    Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.

  12. Face detection based on multiple kernel learning algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun

    2016-09-01

    Face detection is important for face localization in face or facial expression recognition, etc. The basic idea is to determine whether there is a face in an image or not, and if so, its location and size. It can be seen as a binary classification problem, which can be well solved by the support vector machine (SVM). Though SVM has strong model generalization ability, it has some limitations, which are analyzed in depth in this paper. To address them, we study the principle and characteristics of Multiple Kernel Learning (MKL) and propose an MKL-based face detection algorithm. In the paper, we describe the proposed algorithm from the interdisciplinary research perspective of machine learning and image processing. After analyzing the limitations of describing a face with a single feature, we apply several features. To fuse them well, we try different kernel functions on different features. The weight of each kernel function is determined by the MKL method. Thus, we obtain the face detection model, which is the core of the proposed method. Experiments on a public data set and real-life face images are performed. We compare the performance of the proposed algorithm with a single kernel-single feature based algorithm and a multiple kernels-single feature based algorithm. The effectiveness of the proposed algorithm is illustrated. Keywords: face detection, feature fusion, SVM, MKL

  13. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  14. Spine labeling in axial magnetic resonance imaging via integral kernels.

    PubMed

    Miles, Brandon; Ben Ayed, Ismail; Hojjat, Seyed-Parsa; Wang, Michael H; Li, Shuo; Fenster, Aaron; Garvin, Gregory J

    2016-12-01

    This study investigates a fast integral-kernel algorithm for classifying (labeling) the vertebra and disc structures in axial magnetic resonance images (MRI). The method is based on a hierarchy of feature levels, where pixel classifications via non-linear probability product kernels (PPKs) are followed by classifications of 2D slices, individual 3D structures and groups of 3D structures. The algorithm further embeds geometric priors based on anatomical measurements of the spine. Our classifier requires evaluations of computationally expensive integrals at each pixel, and direct evaluations of such integrals would be prohibitively time consuming. We propose an efficient computation of kernel density estimates and PPK evaluations for large images and arbitrary local window sizes via integral kernels. Our method requires a single user click for a whole 3D MRI volume, runs nearly in real-time, and does not require an intensive external training. Comprehensive evaluations over T1-weighted axial lumbar spine data sets from 32 patients demonstrate a competitive structure classification accuracy of 99%, along with a 2D slice classification accuracy of 88%. To the best of our knowledge, such a structure classification accuracy has not been reached by the existing spine labeling algorithms. Furthermore, we believe our work is the first to use integral kernels in the context of medical images.
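
    A sketch of the integral-image idea that makes the per-pixel window integrals affordable (a generic illustration, not the paper's PPK pipeline): after one cumulative-sum pass, any rectangular window sum costs four array lookups regardless of window size.

```python
# O(1) rectangular window sums via an integral image.
import numpy as np

def integral_image(img):
    S = img.cumsum(axis=0).cumsum(axis=1)
    return np.pad(S, ((1, 0), (1, 0)))          # zero row/column eases indexing

def window_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the padded integral image S."""
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

img = np.arange(20.0).reshape(4, 5)
S = integral_image(img)
print(window_sum(S, 1, 1, 3, 4), img[1:3, 1:4].sum())  # identical results
```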

  15. Kernel Manifold Alignment for Domain Adaptation.

    PubMed

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensors characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of a same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors' knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational efficiency.

  16. Kernel Manifold Alignment for Domain Adaptation

    PubMed Central

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensors characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of a same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors’ knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational efficiency.

  17. Lifting kernel-based sprite codec

    NASA Astrophysics Data System (ADS)

    Dasu, Aravind R.; Panchanathan, Sethuraman

    2000-12-01

    The International Standards Organization (ISO) has proposed a family of standards for compression of image and video sequences, including the JPEG, MPEG-1 and MPEG-2. The latest MPEG-4 standard has many new dimensions to coding and manipulation of visual content. A video sequence usually contains a background object and many foreground objects. Portions of this background may not be visible in certain frames due to the occlusion of the foreground objects or camera motion. MPEG-4 introduces the novel concepts of Video Object Planes (VOPs) and Sprites. A VOP is a visual representation of real world objects with shapes that need not be rectangular. Sprite is a large image composed of pixels belonging to a video object visible throughout a video segment. Since a sprite contains all parts of the background that were at least visible once, it can be used for direct reconstruction of the background Video Object Plane (VOP). Sprite reconstruction is dependent on the mode in which it is transmitted. In the Static sprite mode, the entire sprite is decoded as an Intra VOP before decoding the individual VOPs. Since sprites consist of the information needed to display multiple frames of a video sequence, they are typically much larger than a single frame of video. Therefore a static sprite can be considered as a large static image. In this paper, a novel solution to address the problem of spatial scalability has been proposed, where the sprite is encoded in Discrete Wavelet Transform (DWT). A lifting kernel method of DWT implementation has been used for encoding and decoding sprites. Modifying the existing lifting scheme while maintaining it to be shape adaptive results in a reduced complexity. The proposed scheme has the advantages of (1) avoiding the need for any extensions to image or tile border pixels and is hence superior to the DCT based low latency scheme (used in the current MPEG-4 verification model), (2) mapping the in place computed wavelet coefficients into a zero
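
    A minimal lifting-scheme sketch with a single Haar-like predict/update stage (the paper's shape-adaptive lifting kernel is more elaborate): odd samples are predicted from even ones, and the even samples are then updated, giving a perfectly invertible transform computed in place.

```python
# One Haar-like lifting stage: split, predict, update (and its inverse).
import numpy as np

def lift_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict step: detail coefficients
    s = even + d / 2.0        # update step: approximation coefficients
    return s, d

def lift_inverse(s, d):
    even = s - d / 2.0
    odd = d + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([3, 1, 4, 1, 5, 9, 2, 6])          # even-length input
s, d = lift_forward(x)
print(np.allclose(lift_inverse(s, d), x))       # perfect reconstruction
```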

  18. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  19. Semisupervised Kernel Marginal Fisher Analysis for Face Recognition

    PubMed Central

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm. PMID:24163638

  20. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.

  1. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption may be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  2. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expressions is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good correct-classification rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to the state-of-the-art methods. PMID:27807443

  3. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption may be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  4. Compression loading behaviour of sunflower seeds and kernels

    NASA Astrophysics Data System (ADS)

    Selvam, Thasaiya A.; Manikantan, Musuvadi R.; Chand, Tarsem; Sharma, Rajiv; Seerangurayar, Thirupathi

    2014-10-01

    The present study was carried out to investigate the compression loading behaviour of five Indian sunflower varieties (NIRMAL-196, NIRMAL-303, CO-2, KBSH-41, and PSH-996) at four different moisture levels (6-18% d.b.). The initial cracking force, mean rupture force, and rupture energy were measured as functions of moisture content. The observed results showed that the initial cracking force decreased linearly with an increase in moisture content for all varieties. The mean rupture force also decreased linearly with an increase in moisture content. However, the rupture energy was found to increase linearly with moisture content for both seed and kernel. NIRMAL-196 and PSH-996 had the maximum and minimum values, respectively, of all the attributes studied for both seed and kernel. The values of all the studied attributes were higher for seed than for kernel in all the varieties at all moisture levels. There was a significant effect of moisture and variety on compression loading behaviour.

  5. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    PubMed

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expressions is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good correct-classification rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to the state-of-the-art methods.
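
    A minimal sketch of a weighted-feature Gaussian kernel (the per-feature weights, derived in the paper from subregion recognition rates, are random placeholders here), passed to an SVM as a callable kernel; the data are synthetic stand-ins.

```python
# Weighted-feature Gaussian (RBF) kernel used as a custom SVM kernel.
import numpy as np
from sklearn.svm import SVC

def weighted_rbf(weights, gamma=1.0):
    w = np.asarray(weights, dtype=float)
    def kernel(X, Z):
        # weights rescale each feature's contribution to the squared distance
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2 * w).sum(axis=-1)
        return np.exp(-gamma * d2)
    return kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))                   # stand-in expression features
y = rng.integers(0, 2, size=120)
w = rng.uniform(0.2, 1.0, size=8)               # hypothetical subregion weights

clf = SVC(kernel=weighted_rbf(w)).fit(X, y)     # scikit-learn accepts callables
print(clf.score(X, y))
```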

  6. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with the Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the numbers of support vectors and the CPU times of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
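
    A toy sketch of the recommendation step under stated assumptions: meta-features describing each data set are mapped to a binary applicable-kernel vector with an off-the-shelf multi-label classifier; the meta-features, labels, and classifier choice below are all placeholders, not the paper's pipeline.

```python
# Multi-label kernel recommendation from data-set meta-features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
meta = rng.normal(size=(132, 5))                 # synthetic data-set characteristics
applicable = rng.integers(0, 2, size=(132, 11))  # one flag per candidate kernel

model = MultiOutputClassifier(RandomForestClassifier(n_estimators=50))
model.fit(meta, applicable)

new_meta = rng.normal(size=(1, 5))               # characteristics of a new data set
print(model.predict(new_meta))                   # recommended kernel set (0/1 flags)
```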

  7. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with the Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the numbers of support vectors and the CPU times of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  8. The Effects of Kernel Feeding by Halyomorpha halys (Hemiptera: Pentatomidae) on Commercial Hazelnuts.

    PubMed

    Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M

    2014-10-01

    Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development.

  9. Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.

    2010-04-01

    Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on its spectral signature. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field inoculated corn kernels were used in the study. Contaminated and control kernels under long wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had a slightly better performance with accuracy equal to 87% or 88% when 20 ppb or 100 ppb was used as the classification threshold, respectively.
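
    A simplified stand-in for the Binary Encoding classifier mentioned above: each spectrum is encoded as a bit vector (above/below its own mean) and assigned to the class whose reference code is nearest in Hamming distance. The spectra below are synthetic placeholders, not hyperspectral measurements.

```python
# Binary-encoding classification of spectra by Hamming distance.
import numpy as np

def encode(spectra):
    # one bit per band: above (1) or below (0) the spectrum's own mean
    return spectra > spectra.mean(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
bands = np.linspace(0, 3 * np.pi, 64)
control = rng.normal(size=(40, 64))                      # flat stand-in spectra
contaminated = rng.normal(size=(40, 64)) + np.sin(bands)  # shape difference

codes = np.stack([encode(control).mean(0) > 0.5,         # class reference codes
                  encode(contaminated).mean(0) > 0.5])

def classify(spectrum):
    b = encode(spectrum[None, :])[0]
    hamming = (b != codes).sum(axis=1)
    return ["control", "contaminated"][hamming.argmin()]

print(classify(contaminated[0]))
```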

  10. Aflatoxin detection in whole corn kernels using hyperspectral methods

    NASA Astrophysics Data System (ADS)

    Casasent, David; Chen, Xue-Wen

    2004-03-01

    Hyperspectral (HS) data for the inspection of whole corn kernels for aflatoxin is considered. The high-dimensionality of HS data requires feature extraction or selection for good classifier generalization. For fast and inexpensive data collection, only several features (λ responses) can be used. These are obtained by feature selection from the full HS response. A new high dimensionality branch and bound (HDBB) feature selection algorithm is used; it is found to be optimum, fast and very efficient. Initial results indicate that HS data is very promising for aflatoxin detection in whole kernel corn.

  11. Characterizations of linear Volterra integral equations with nonnegative kernels

    NASA Astrophysics Data System (ADS)

    Naito, Toshiki; Shin, Jong Son; Murakami, Satoru; Ngoc, Pham Huu Anh

    2007-11-01

    We first introduce the notion of positive linear Volterra integral equations. Then, we offer a criterion for positive equations in terms of the resolvent. In particular, equations with nonnegative kernels are positive. Next, we obtain a variant of the Paley-Wiener theorem for equations of this class and its extension to perturbed equations. Furthermore, we get a Perron-Frobenius type theorem for linear Volterra integral equations with nonnegative kernels. Finally, we give a criterion for positivity of the initial function semigroup of linear Volterra integral equations and provide a necessary and sufficient condition for exponential stability of the semigroups.
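
    Schematically, and only as a restatement of the setting (not a result from the paper), the equations considered are linear Volterra integral equations with nonnegative kernels:

```latex
\[
  x(t) = f(t) + \int_0^t K(t,s)\, x(s)\, \mathrm{d}s ,
  \qquad K(t,s) \ge 0 \ \text{for } 0 \le s \le t .
\]
% The equation is called positive when a nonnegative forcing term yields a
% nonnegative solution: f(t) >= 0 for all t implies x(t) >= 0 for all t.
```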

  12. Source identity and kernel functions for Inozemtsev-type systems

    NASA Astrophysics Data System (ADS)

    Langmann, Edwin; Takemura, Kouichi

    2012-08-01

    The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BCN trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.

  13. FUV Continuum in Flare Kernels Observed by IRIS

    NASA Astrophysics Data System (ADS)

    Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna

    2016-05-01

    Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross-sectional area are provided by contemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.

  14. Research on classifying performance of SVMs with basic kernel in HCCR

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Gai, Zhaoxin

    2006-02-01

    Putting handwritten Chinese character recognition (HCCR) into practical use remains a difficult task. An efficient classifier occupies a very important position in increasing the offline HCCR rate. SVMs offer a theoretically well-founded approach to automated learning of pattern classifiers from labeled data sets. As is well known, the performance of an SVM largely depends on the kernel function. In this paper, we investigated the classification performance of SVMs with various common kernels in HCCR. We found that, except for the sigmoid kernel, SVMs with the polynomial kernel, linear kernel, RBF kernel and multiquadric kernel are all efficient classifiers for HCCR; their behavior differs only slightly, and, taking one with another, the SVM with the multiquadric kernel is the best.
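
    A quick sketch in the spirit of this comparison, using scikit-learn's built-in kernels on a small digit data set as a stand-in for HCCR data; the multiquadric kernel is not built in, so it is supplied as a callable.

```python
# Comparing SVM kernels with cross-validation on a stand-in data set.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]                          # keep the callable kernel cheap

def multiquadric(X1, X2, c=1.0):
    # note: the multiquadric is not positive definite, so SVM behaviour
    # with it is not guaranteed; libsvm will still run
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + c ** 2)

for kernel in ["linear", "poly", "rbf", "sigmoid", multiquadric]:
    name = kernel if isinstance(kernel, str) else "multiquadric"
    acc = cross_val_score(SVC(kernel=kernel, gamma="scale"), X, y, cv=3).mean()
    print(name, round(acc, 3))
```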

  15. A multiple-kernel fuzzy C-means algorithm for image segmentation.

    PubMed

    Chen, Long; Chen, C L Philip; Lu, Mingzhu

    2011-10-01

    In this paper, a generalized multiple-kernel fuzzy C-means (FCM) (MKFCM) methodology is introduced as a framework for image-segmentation problems. In the framework, aside from the fact that the composite kernels are used in the kernel FCM (KFCM), a linear combination of multiple kernels is proposed and the updating rules for the linear coefficients of the composite kernel are derived as well. The proposed MKFCM algorithm provides us a new flexible vehicle to fuse different pixel information in image-segmentation problems. That is, different pixel information represented by different kernels is combined in the kernel space to produce a new kernel. It is shown that two successful enhanced KFCM-based image-segmentation algorithms are special cases of MKFCM. Several new segmentation algorithms are also derived from the proposed MKFCM framework. Simulations on the segmentation of synthetic and medical images demonstrate the flexibility and advantages of MKFCM-based approaches.

  16. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
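
    A one-dimensional sketch of the two discretization methods compared above: evaluating a Gaussian kernel at cell centres versus integrating it across each cell (exact for the Gaussian via the error function). For small σ the point kernel badly violates mass conservation; renormalizing it would still distort its shape.

```python
# Cell-center vs. cell-integrated discretization of a Gaussian dispersal kernel.
import numpy as np
from scipy.special import erf

sigma = 0.2                                    # kernel width in cell units
edges = np.arange(-6, 8) - 0.5                 # cell boundaries: -6.5 .. 6.5
centers = 0.5 * (edges[:-1] + edges[1:])

# point (cell-center) evaluation of the Gaussian density
point = np.exp(-centers**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# exact mass in each cell via the Gaussian CDF
cdf = 0.5 * (1 + erf(edges / (sigma * np.sqrt(2))))
integrated = np.diff(cdf)

# a probability kernel should sum to 1; the point kernel does not for small sigma
print(point.sum(), integrated.sum())
```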

  17. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images.

  18. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  19. An Implementation of Multiprogramming and Process Management for a Security Kernel Operating System.

    DTIC Science & Technology

    1980-06-01

    This implementation employs a processor multiplexing technique for a distributed kernel and presents a virtual interrupt mechanism which coordinates the asynchronous interaction of system processes. Its structure is loop-free to permit future expansion.

  20. An Adaptive Genetic Association Test Using Double Kernel Machines.

    PubMed

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select the important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway and test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.

  1. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  2. Predicting disease trait with genomic data: a composite kernel approach.

    PubMed

    Yang, Haitao; Li, Shaoyu; Cao, Hongyan; Zhang, Chichen; Cui, Yuehua

    2016-06-02

    With the advancement of biotechnology, vast amounts of genomic data are being generated. Predicting a disease trait based on these data offers a cost-effective and time-efficient way for early disease screening. Here we propose a composite kernel partial least squares (CKPLS) regression model for quantitative disease trait prediction focusing on genomic data. It can efficiently capture nonlinear relationships among features compared with linear learning algorithms such as the Least Absolute Shrinkage and Selection Operator (LASSO) or ridge regression. We propose to optimize the kernel parameters and kernel weights with a genetic algorithm (GA). In addition to improved performance for parameter optimization, the proposed GA-CKPLS approach also has better learning capacity and generalization ability than the single kernel-based KPLS method as well as other nonlinear prediction models such as support vector regression. Extensive simulation studies demonstrated that GA-CKPLS had better prediction performance than its counterparts under different scenarios. The utility of the method was further demonstrated through two case studies. Our method provides an efficient quantitative platform for disease trait prediction based on the increasing volume of omics data.

  3. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  4. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  5. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  6. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  7. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    SciTech Connect

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
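
    The search loop at the heart of such an auto-tuner is simple to illustrate. The toy Python sketch below times parameterized variants of a 1-D 3-point stencil and keeps the fastest; the kernel, the candidate block sizes, and the timing scheme are illustrative assumptions, not the paper's code generator.

```python
import time
import numpy as np

def stencil_blocked(u, block):
    """out[i] = 0.25*u[i-1] + 0.5*u[i] + 0.25*u[i+1], computed block-wise."""
    out = np.empty_like(u)
    out[0], out[-1] = u[0], u[-1]
    for start in range(1, len(u) - 1, block):
        stop = min(start + block, len(u) - 1)
        out[start:stop] = (0.25 * u[start - 1:stop - 1]
                           + 0.5 * u[start:stop]
                           + 0.25 * u[start + 1:stop + 1])
    return out

def autotune(n=1_000_000, candidates=(1024, 8192, 65536, 524288)):
    """Time each variant on the same input and keep the fastest."""
    u = np.random.rand(n)
    best = None
    for block in candidates:
        t0 = time.perf_counter()
        for _ in range(5):
            stencil_blocked(u, block)
        dt = time.perf_counter() - t0
        if best is None or dt < best[1]:
            best = (block, dt)
    return best  # (fastest block size, elapsed time)
```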

  8. Notes on a storage manager for the Clouds kernel

    NASA Technical Reports Server (NTRS)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
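
    The crash-survivability requirement is commonly met with write-ahead logging. The sketch below is a generic illustration of that idea, not the Clouds storage manager's actual design: each update is appended to a log and fsync'ed before the in-memory state changes, so a restart can replay the log.

```python
import json
import os

class TinyStore:
    """A toy key-value store whose writes survive a crash."""

    def __init__(self, log_path="store.log"):
        self.log_path = log_path
        self.data = {}
        self._replay()  # recover state left by a previous run or crash

    def _replay(self):
        if not os.path.exists(self.log_path):
            return
        with open(self.log_path) as f:
            for line in f:
                try:
                    rec = json.loads(line)
                except json.JSONDecodeError:
                    break  # torn final write: ignore the partial record
                self.data[rec["key"]] = rec["value"]

    def put(self, key, value):
        rec = json.dumps({"key": key, "value": value})
        with open(self.log_path, "a") as f:
            f.write(rec + "\n")
            f.flush()
            os.fsync(f.fileno())  # durable before we acknowledge
        self.data[key] = value
```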

  9. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    DTIC Science & Technology

    2006-01-01

    successfully used by the machine learning community for pattern recognition and image denoising [14]. A Gaussian kernel was used by Cremers et al. [8] for...matrix M, where φ_i ∈ R^{Nd}. Using Singular Value Decomposition (SVD), the covariance matrix (1/n)MM^T is decomposed as UΣU^T = (1/n)MM^T (Eq. 1), where U is a...
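
    The decomposition in Eq. (1) is easy to verify numerically; in the sketch below, the matrix sizes are arbitrary illustrative choices.

```python
import numpy as np

d, n = 8, 50                   # feature dimension, number of samples
M = np.random.rand(d, n)       # data matrix, one sample per column
C = (M @ M.T) / n              # covariance matrix, d x d, symmetric PSD
U, S, Vt = np.linalg.svd(C)
assert np.allclose(U @ np.diag(S) @ Vt, C)
assert np.allclose(Vt, U.T)    # left/right singular vectors coincide
                               # because C is symmetric PSD (full rank here)
```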

  10. Classification of Microarray Data Using Kernel Fuzzy Inference System.

    PubMed

    Kumar, Mukesh; Kumar Rath, Santanu

    2014-01-01

    The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to obscure useful information. Feature selection therefore aims to retain the features with the highest relevance to the classes, since these determine how samples are classified into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia) using the t-test as a feature selection method. Kernel functions are used to map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance measures available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are used to analyze the efficiency of the classification models. The K-FIS model obtains results similar to those of the SVM model, which indicates that the proposed approach relies on the kernel function.
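
    The kernel trick mentioned above can be demonstrated in a few lines: a degree-2 polynomial kernel evaluates an inner product in an explicit higher-dimensional feature space without ever constructing that space. The 2-D example below is an illustrative assumption.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 kernel on R^2:
    phi(x).phi(z) == (x.z)**2."""
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, np.sqrt(2.0) * x1 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
assert np.isclose(np.dot(x, z) ** 2, np.dot(phi(x), phi(z)))
```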

  11. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  12. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  13. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  14. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  15. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  16. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  17. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  18. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND...

  19. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity for measuring the similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets, in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly.
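
    The multiple-kernel idea behind OMKS can be sketched generically: the overall similarity is a weighted sum of base kernel similarities, with weights updated online from relevance feedback. The Hedge-style multiplicative update, the value of beta, and the base kernels below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Base kernels: each scores the similarity of two feature vectors.
kernels = [
    lambda x, z: float(np.dot(x, z)),                   # linear
    lambda x, z: float(np.exp(-np.sum((x - z) ** 2))),  # RBF
    lambda x, z: float((np.dot(x, z) + 1.0) ** 2),      # polynomial
]
w = np.ones(len(kernels)) / len(kernels)  # kernel weights, kept normalized

def similarity(x, z):
    """Overall similarity: weighted sum of base kernel similarities."""
    return sum(wi * k(x, z) for wi, k in zip(w, kernels))

def update(query, relevant, irrelevant, beta=0.8):
    """Down-weight every kernel that rates the irrelevant image at
    least as high as the relevant one for this feedback triplet."""
    global w
    for m, k in enumerate(kernels):
        if k(query, relevant) <= k(query, irrelevant):
            w[m] *= beta
    w /= w.sum()
```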

  20. Online Multiple Kernel Similarity Learning for Visual Search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2013-08-13

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in Content-Based Image Retrieval (CBIR). Despite their popularity and success, most existing methods on distance metric learning are limited in two aspects. First, they typically assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multi-modal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel ranking framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel Online Multiple Kernel Ranking (OMKR) method, which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets, in which encouraging results show that OMKR outperforms the state-of-the-art techniques significantly.

  1. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
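
    For reference, the theorem in its standard form reads as follows (the report's frequency-domain derivation is not reproduced here).

```latex
% Standard statement of the Peano Kernel Theorem. If a linear
% functional L annihilates every polynomial of degree <= n, then for
% f in C^{n+1}[a, b]:
\[
  L[f] = \int_a^b K(t)\, f^{(n+1)}(t)\, dt,
  \qquad
  K(t) = \frac{1}{n!}\, L_x\!\left[ (x - t)_+^{\,n} \right],
\]
% where (x - t)_+^n equals (x - t)^n for x >= t and 0 otherwise, and
% the subscript x indicates that L acts on x with t held fixed.
```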

  2. Microwave moisture meter for in-shell peanut kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A microwave moisture meter built with off-the-shelf components was developed, calibrated and tested in the laboratory and in the field for nondestructive and instantaneous in-shell peanut kernel moisture content determination from dielectric measurements on unshelled peanut pod samples. The meter ...

  3. Music emotion detection using hierarchical sparse kernel machines.

    PubMed

    Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching

    2014-01-01

    For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether a music clip possesses the happiness emotion or not. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being a significance value for an emotion. The significance values of eight main emotional classes are utilized in this paper. To calculate the significance value of an emotion, we construct its 2-class SVM with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated, and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we construct a single 2-class relevance vector machine (RVM) with happiness as the target side and the other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is set on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system performs well in verifying whether a music clip conveys the happiness emotion.
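
    The probability product kernel used in the first-level SVMs has a standard general form, shown below; the abstract does not specify the exponent, so the named special cases are for orientation only.

```latex
% Probability product kernel in its standard general form: for
% probability distributions p and q fitted to the acoustical features
% of two music clips,
\[
  K_\rho(p, q) = \int p(x)^{\rho}\, q(x)^{\rho}\, dx ,
\]
% where rho = 1/2 gives the Bhattacharyya kernel and rho = 1 the
% expected-likelihood kernel.
```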

  4. Matrix kernels for MEG and EEG source localization and imaging

    SciTech Connect

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1994-12-31

    The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.
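
    The partition described above can be written schematically as follows; the notation is illustrative, not the paper's.

```latex
% Schematic of the partition: the incremental field at sensor i due to
% a current dipole q at location r_q factors as
\[
  b_i = \mathbf{c}_i^{\mathsf{T}}\, \mathbf{K}(\mathbf{r}_i, \mathbf{r}_q)\, \mathbf{q},
\]
% where the vector c_i encodes sensor specifics (orientation,
% gradiometer configuration) and the matrix kernel K depends only on
% the head-modeling assumptions.
```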

  5. Quality Characteristics of Soft Kernel Durum -- A New Cereal Crop

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Production of crops is in part limited by consumer demand and utilization. In this regard, world production of durum wheat (Triticum turgidum subsp. durum) is limited by its culinary uses. The leading constraint is its very hard kernels. Puroindolines, which act to soften the endosperm, are completel...

  6. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) FOOD FOR HUMAN CONSUMPTION (CONTINUED) INDIRECT FOOD ADDITIVES: PAPER AND PAPERBOARD COMPONENTS Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind..., manufacturing, packing, processing, preparing, treating, packaging, transporting, or holding food, subject...

  7. Popping the Kernel Modeling the States of Matter

    ERIC Educational Resources Information Center

    Hitt, Austin; White, Orvil; Hanson, Debbie

    2005-01-01

    This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…

  8. Distortion-invariant kernel correlation filters for general object recognition

    NASA Astrophysics Data System (ADS)

    Patnaik, Rohit

    General object recognition is a specific application of pattern recognition, in which an object in a background must be classified in the presence of several distortions such as aspect-view differences, scale differences, and depression-angle differences. Since the object can be present at different locations in the test input, a classification algorithm must be applied to all possible object locations in the test input. We emphasize one type of classifier, the distortion-invariant filter (DIF), for fast object recognition, since it can be applied to all possible object locations using a fast Fourier transform (FFT) correlation. We refer to distortion-invariant correlation filters simply as DIFs. DIFs all use a combination of training-set images that are representative of the expected distortions in the test set. In this dissertation, we consider a new approach that combines DIFs and the higher-order kernel technique; these form what we refer to as "kernel DIFs." Our objective is to develop higher-order classifiers that can be applied (efficiently and fast) to all possible locations of the object in the test input. All prior kernel DIFs ignored the issue of efficient filter shifts. We detail which kernel DIF formulations are computationally realistic to use and why. We discuss the proper way to synthesize DIFs and kernel DIFs for the wide area search case (i.e., when a small filter must be applied to a much larger test input) and the preferable way to perform wide area search with these filters; this is new. We use computer-aided design (CAD) simulated infrared (IR) object imagery and real IR clutter imagery to obtain test results. Our test results on IR data show that a particular kernel DIF, the kernel SDF filter and its new "preprocessed" version, is promising, in terms of both test-set performance and on-line calculations, and is emphasized in this dissertation. We examine the recognition of object variants. We also quantify the effect of different constant...
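
    The FFT correlation that makes DIFs fast to apply at every candidate location is easy to sketch. In the toy Python example below, the image size, the planted target, and the circular (periodic) correlation are illustrative assumptions.

```python
import numpy as np

def fft_correlate(image, filt):
    """Circular cross-correlation of filt with image at every shift,
    computed via the convolution theorem."""
    H, W = image.shape
    F_img = np.fft.fft2(image)
    F_fil = np.fft.fft2(filt, s=(H, W))  # zero-pad filter to image size
    return np.real(np.fft.ifft2(F_img * np.conj(F_fil)))

image = np.random.rand(256, 256)
filt = image[100:132, 60:92].copy()      # plant a known 32x32 target
corr = fft_correlate(image, filt)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)                              # the peak lands at (100, 60)
```

    Planting the filter as a patch of the image makes the correlation peak land at the patch's top-left corner, here (100, 60), for a single FFT-sized cost rather than one correlation per location.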

  9. Slow Down or Speed Up? Lowering Periapsis versus Escaping from a Circular Orbit

    ERIC Educational Resources Information Center

    Blanco, Philip

    2017-01-01

    Paul Hewitt's "Figuring Physics" in the Feb. 2016 issue asked whether it would take a larger velocity change to stop a satellite in a circular orbit or to cause it to escape. An extension of this problem asks: What "minimum" velocity change is required to crash a satellite into the planet, and how does that compare with the…

  10. Carnitines slow down tumor development of colon cancer in the DMH-chemical carcinogenesis mouse model.

    PubMed

    Roscilli, Giuseppe; Marra, Emanuele; Mori, Federica; Di Napoli, Arianna; Mancini, Rita; Serlupi-Crescenzi, Ottaviano; Virmani, Ashraf; Aurisicchio, Luigi; Ciliberto, Gennaro

    2013-07-01

    Dietary agents are receiving much attention for the chemoprevention of cancer. While curcumin is known to influence several pathways and to affect tumor growth in vivo, carnitine and its congeners perform a variety of important metabolic functions: they are involved in the oxidation of long-chain fatty acids, regulate acyl-CoA levels, and influence protein activity and stability by modifying the extent of protein acetylation. In this study we evaluated the efficacy of carnitines in the prevention of cancer development using the 1,2-dimethylhydrazine (DMH)-induced colon carcinogenesis model. We also assessed whether their combination was able to give rise to increased protection from cancer development. Mice treated with DMH were dosed orally with curcumin and/or carnitine and acylcarnitines for 20 weeks. At the end of the treatment, colon samples were collected and scored for multiple ACF and adenomas. We observed that carnitine and acylcarnitines had the same, if not higher, efficacy as curcumin alone in inhibiting the formation of neoplastic lesions induced by DMH treatment. Interestingly, the combination of curcumin and acetyl-L-carnitine was able to fully inhibit the development of advanced adenoma lesions. Our data unveil the antitumor effects of carnitines and warrant additional studies to further support the adoption of carnitines as cancer chemopreventive agents.

  11. Slowing Down, Talking Back, and Moving Forward: Some Reflections on Digital Storytelling in the Humanities Curriculum

    ERIC Educational Resources Information Center

    Leon, Sharon M.

    2008-01-01

    Humanities teachers in higher education strive to locate and implement pedagogical approaches that allow our students to deepen their inquiry, to make significant intellectual connections, and to carry those questions and insights across the curriculum. Digital storytelling is one of those pedagogical approaches. Digital storytelling can create an…

  12. Motor Fatigue Measurement by Distance-Induced Slow Down of Walking Speed in Multiple Sclerosis

    PubMed Central

    Phan-Ba, Rémy; Calay, Philippe; Grodent, Patrick; Delrue, Gael; Lommers, Emilie; Delvaux, Valérie; Moonen, Gustave; Belachew, Shibeshih

    2012-01-01

    Background and rationale: Motor fatigue and ambulation impairment are prominent clinical features of people with multiple sclerosis (pMS). We hypothesized that a multimodal and comparative assessment of walking speed (WS) over short and long distances would allow a better delineation and quantification of gait fatigability in pMS. Our objectives were to compare 4 walking paradigms: the timed 25-foot walk (T25FW), a corrected version of the T25FW with a dynamic start (T25FW+), the timed 100-meter walk (T100MW) and the timed 500-meter walk (T500MW). Methods: Thirty controls and 81 pMS performed the 4 walking tests in a single study visit. Results: The 4 walking tests were performed with a slower WS in pMS compared to controls, even in subgroups with minimal disability. The finishing speed of the last 100 meters of the T500MW was the slowest measurable WS, whereas the T25FW+ provided the fastest measurable WS. The ratio between such slowest and fastest WS (Deceleration Index, DI) was significantly lower only in pMS with EDSS 4.0–6.0, a pyramidal or cerebellar functional system score reaching 3, or a maximum reported walking distance ≤4000 m. Conclusion: The motor fatigue which triggers gait deceleration over a sustained effort in pMS can be measured by the WS ratio between the performance over a very short distance and the finishing pace of a longer, more demanding task. The absolute walking speed is abnormal early in MS, whatever the distance of effort, even when patients are unaware of ambulation impairment. In contrast, the DI-measured ambulation fatigability appears to take place later in the disease course. PMID:22514661
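
    Written out as a formula, the Deceleration Index reads:

```latex
% The Deceleration Index (DI) defined above:
\[
  \mathrm{DI} =
  \frac{\text{WS over the final 100 m of the T500MW (slowest measurable WS)}}
       {\text{WS on the T25FW+ (fastest measurable WS)}},
\]
% so lower DI values indicate stronger distance-induced slowing.
```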

  13. Small Crowders Slow Down Kinesin-1 Stepping by Hindering Motor Domain Diffusion.

    PubMed

    Sozański, Krzysztof; Ruhnow, Felix; Wiśniewska, Agnieszka; Tabaka, Marcin; Diez, Stefan; Hołyst, Robert

    2015-11-20

    The dimeric motor protein kinesin-1 moves processively along microtubules against forces of up to 7 pN. However, the mechanism of force generation is still debated. Here, we point to the crucial importance of diffusion of the tethered motor domain for the stepping of kinesin-1: small crowders stop the motor at a viscosity of 5 mPa·s, corresponding to a hydrodynamic load in the sub-fN (~10⁻⁴ pN) range, whereas large crowders have no impact even at viscosities above 100 mPa·s. This indicates that the scale-dependent, effective viscosity experienced by the tethered motor domain is a key factor determining kinesin's functionality. Our results emphasize the role of diffusion in the kinesin-1 stepping mechanism and the general importance of the viscosity scaling paradigm in nanomechanics.

  14. Relativistic and Slowing Down: The Flow in the Hotspots of Powerful Radio Galaxies and Quasars

    NASA Technical Reports Server (NTRS)

    Kazanas, D.

    2003-01-01

    The 'hotspots' of powerful radio galaxies (the compact, high brightness regions where the jet flow collides with the intergalactic medium (IGM)) have been imaged in radio, optical and, recently, X-ray frequencies. We propose a scheme that unifies their, at first sight, disparate broad-band (radio to X-ray) spectral properties. This scheme involves a relativistic flow upstream of the hotspot that decelerates to the sub-relativistic speed of its inferred advance through the IGM and is viewed at different angles to its direction of motion, as suggested by two independent orientation estimators (the presence or not of broad emission lines in the optical spectra and the core-to-extended radio luminosity). Besides providing an account of the hotspot spectral properties as a function of jet orientation, this scheme also suggests that the large-scale jets remain relativistic all the way to the hotspots.

  15. Slowing Down Differentiation of Engrafted Human Myoblasts Into Immunodeficient Mice Correlates With Increased Proliferation and Migration

    PubMed Central

    Riederer, Ingo; Negroni, Elisa; Bencze, Maximilien; Wolff, Annie; Aamiri, Ahmed; Di Santo, James P; Silva-Barbosa, Suse D.; Butler-Browne, Gillian; Savino, Wilson; Mouly, Vincent

    2012-01-01

    We have used a model of xenotransplantation in which human myoblasts were transplanted intramuscularly into immunodeficient Rag2-/-γC-/- mice, in order to investigate the kinetics of proliferation and differentiation of the transplanted cells. After injection, most of the human myoblasts had already differentiated by day 5. This differentiation correlated with reduced proliferation and limited migration of the donor cells within the regenerating muscle. These results suggest that the precocious differentiation, already detected at 3 days postinjection, is a limiting factor for both the migration from the injection site and the participation of the donor cells in muscle regeneration. When we stimulated in vivo proliferation of human myoblasts by transplanting them in a serum-containing medium, we observed, 5 days post-transplantation, a delay of myogenic differentiation and an increase in cell numbers, which colonized a much larger area within the recipient's muscle. Importantly, these myoblasts maintained their ability to differentiate, since we found higher numbers of myofibers 1 month postengraftment, as compared to controls. Conceptually, these data suggest that in experimental myoblast transplantation, any intervention upon the donor cells and/or the recipient's microenvironment aimed at enhancing proliferation and migration should be done before differentiation of the implanted cells, e.g., day 3 postengraftment. PMID:21934656

  16. Slowing down differentiation of engrafted human myoblasts into immunodeficient mice correlates with increased proliferation and migration.

    PubMed

    Riederer, Ingo; Negroni, Elisa; Bencze, Maximilien; Wolff, Annie; Aamiri, Ahmed; Di Santo, James P; Silva-Barbosa, Suse D; Butler-Browne, Gillian; Savino, Wilson; Mouly, Vincent

    2012-01-01

    We have used a model of xenotransplantation in which human myoblasts were transplanted intramuscularly into immunodeficient Rag2(-/-)γC(-/-) mice, in order to investigate the kinetics of proliferation and differentiation of the transplanted cells. After injection, most of the human myoblasts had already differentiated by day 5. This differentiation correlated with reduced proliferation and limited migration of the donor cells within the regenerating muscle. These results suggest that the precocious differentiation, already detected at 3 days postinjection, is a limiting factor for both the migration from the injection site and the participation of the donor cells in muscle regeneration. When we stimulated in vivo proliferation of human myoblasts by transplanting them in a serum-containing medium, we observed, 5 days post-transplantation, a delay of myogenic differentiation and an increase in cell numbers, which colonized a much larger area within the recipient's muscle. Importantly, these myoblasts maintained their ability to differentiate, since we found higher numbers of myofibers 1 month postengraftment, as compared to controls. Conceptually, these data suggest that in experimental myoblast transplantation, any intervention upon the donor cells and/or the recipient's microenvironment aimed at enhancing proliferation and migration should be done before differentiation of the implanted cells, e.g., day 3 postengraftment.

  17. Experimental Therapies and Ongoing Clinical Trials to Slow Down Progression of ADPKD

    PubMed Central

    Irazabal, Maria V.; Torres, Vicente E.

    2014-01-01

    The improvement of imaging techniques over the years has contributed to the understanding of the natural history of autosomal dominant polycystic kidney disease (ADPKD) and facilitated the observation of its structural progression. Advances in molecular biology and genetics have made possible a greater understanding of the genetic, molecular, and cellular pathophysiologic mechanisms responsible for its development and have laid the foundation for the development of potential new therapies. Therapies targeting genetic mechanisms in ADPKD have inherent limitations. As a result, most experimental therapies at the present time are aimed at delaying the growth of the cysts and the associated interstitial inflammation and fibrosis by targeting tubular epithelial cell proliferation and fluid secretion by the cystic epithelium. Several interventions affecting many of the signaling pathways disrupted in ADPKD have been effective in animal models, and some are currently being tested in clinical trials. PMID:23971644

  18. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit said... driven upon or over such crossing until due caution has been taken to ascertain that the course is...

  19. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit said... driven upon or over such crossing until due caution has been taken to ascertain that the course is clear....

  20. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit said... driven upon or over such crossing until due caution has been taken to ascertain that the course is clear....

  1. Cluster Concept Dynamics Leading to Creative Ideas Without Critical Slowing Down

    NASA Astrophysics Data System (ADS)

    Goldenberg, Y.; Solomon, S.; Mazursky, D.

    We present algorithmic procedures for systematically generating ideas and solutions to problems that are perceived as creative. Our method consists of identifying and characterizing the most creative ideas among a vast pool. We show that they fall within a few large classes (archetypes) which share the same conceptual structure (Macros). We prescribe well-defined abstract algorithms which can act deterministically on arbitrary given objects. Each algorithm generates ideas with the same conceptual structure characteristic of one of the Macros. The resulting new ideas turn out to be perceived as highly creative. We support our claims by experiments in which senior advertising professionals graded advertisement ideas produced by our method according to their creativity. The marks (grade 4.6±0.2 on a 1-7 scale) obtained by laymen applying our algorithms (after being instructed for only two hours) were significantly better than the marks obtained by advertising professionals using standard methods (grade 3.6±0.2). The method, which is currently taught in the USA, Europe, and Israel and used by advertising agencies in Britain and Israel, has received formal international recognition.

  2. Health benefits in 2005: premium increases slow down, coverage continues to erode.

    PubMed

    Gabel, Jon; Claxton, Gary; Gil, Isadora; Pickreign, Jeremy; Whitmore, Heidi; Finder, Benjamin; Hawkins, Samantha; Rowland, Diane

    2005-01-01

    This paper reports findings on the state of job-based health insurance in spring 2005 and how it has changed during recent years. Premiums rose 9.2 percent, the first year of single-digit increases since 2000. The percentage of firms offering health benefits has fallen from 69 percent in 2000 to 60 percent in 2005. Cost sharing did not grow appreciably in the past year. Enrollment in preferred provider organizations (PPOs) grew from 55 percent in 2004 to 61 percent in 2005, while enrollment in health maintenance organizations (HMOs) fell from 25 percent to 21 percent of the total.

  3. Application of calcium carbonate slows down organic amendments mineralization in reclaimed soils

    NASA Astrophysics Data System (ADS)

    Zornoza, Raúl; Faz, Ángel; Acosta, José A.; Martínez-Martínez, Silvia; Ángeles Muñoz, M.

    2014-05-01

    A field experiment was set up in the Cartagena-La Unión Mining District, SE Spain, aimed at evaluating the short-term effects of pig slurry (PS) amendment, alone and together with marble waste (MW), on organic matter mineralization, microbial activity and stabilization of heavy metals in two tailing ponds. These structures pose an environmental risk owing to their high metal contents, low organic matter and nutrient levels, and absence of vegetation. Carbon mineralization, exchangeable metals and microbiological properties were monitored during 67 days. The application of amendments led to a rapid decrease in exchangeable metal concentrations, except for Cu, with decreases of up to 98%, 75% and 97% for Cd, Pb and Zn, respectively. The combined addition of MW+PS was the treatment with the greatest reduction in metal concentrations. The addition of PS caused a significant increase in respiration rates, although in MW+PS plots respiration was lower than in PS plots. The mineralised C from the pig slurry was low, approximately 25-30% and 4-12% for the PS and MW+PS treatments, respectively. Soluble carbon (Csol), microbial biomass carbon (MBC) and β-galactosidase and β-glucosidase activities increased after the application of the organic amendment. However, after 3 days these parameters started a decreasing trend, reaching values similar to the control from approximately day 25 for Csol and MBC. The PS treatment promoted the highest values in enzyme activities, which remained high over time. Arylesterase activity increased in the MW+PS treatment. Thus, the remediation techniques used improved the soil microbiological status and reduced metal availability. The combined application of PS+MW reduced the degradability of the organic compounds. Keywords: organic wastes, mine soils stabilization, carbon mineralization, microbial activity.

  4. Critical Slowing Down in Time-to-Extinction: An Example of Critical Phenomena in Ecology

    NASA Technical Reports Server (NTRS)

    Gandhi, Amar; Levin, Simon; Orszag, Steven

    1998-01-01

    We study a model for two competing species that explicitly accounts for effects due to discreteness, stochasticity and spatial extension of populations. The two species are equally preferred by the environment and do better when surrounded by others of the same species. We observe that the final outcome depends on the initial densities (uniformly distributed in space) of the two species. The observed phase transition is a continuous one, and key macroscopic quantities like the correlation length of clusters and the time-to-extinction diverge at a critical point. Away from the critical point, the dynamics can be described by a mean-field approximation. Close to the critical point, however, there is a crossover to power-law behavior because of the gross mismatch between the largest and smallest scales in the system. We have developed a theory based on surface effects, which is in good agreement with the observed behavior. The coarse-grained reaction-diffusion system obtained from the mean-field dynamics agrees well with the particle system.
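
    The critical slowing down named in the title can be illustrated with a generic one-variable normal form (an illustration of the phenomenon, not the paper's two-species model): near the critical point the linear relaxation rate vanishes, so the relaxation time diverges.

```python
def relaxation_time(r, x0=0.1, dt=1e-3, tol=0.01):
    """Integrate dx/dt = r*x - x**3 (Euler) until |x| < tol * x0."""
    x, t = x0, 0.0
    while abs(x) > tol * x0:
        x += dt * (r * x - x ** 3)
        t += dt
    return t

for r in (-1.0, -0.3, -0.1, -0.03):
    print(f"r = {r:6.2f}   relaxation time = {relaxation_time(r):8.1f}")
    # the time grows roughly like 1/|r| as the critical point r = 0
    # is approached -- the signature of critical slowing down
```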

  5. Exercise: the lifelong supplement for healthy ageing and slowing down the onset of frailty.

    PubMed

    Viña, Jose; Rodriguez-Mañas, Leocadio; Salvador-Pascual, Andrea; Tarazona-Santabalbina, Francisco José; Gomez-Cabrera, Mari Carmen

    2016-04-15

    The beneficial effects of exercise have been well recognized for over half a century. Dr Jeremy Morris's pioneering studies in the fifties showed a striking difference in cardiovascular disease between the drivers and conductors on the double-decker buses of London. These studies sparked off a vast amount of research on the effects of exercise on health, and the general consensus is that exercise contributes to improved outcomes and treatment for several diseases including osteoporosis, diabetes, depression and atherosclerosis. Evidence of the beneficial effects of exercise is reviewed here. One way of highlighting the impact of exercise on disease is to consider it from the perspective of good practice. However, the intensity, duration, frequency (dosage) and contraindications of the exercise should be taken into consideration to individually tailor the exercise programme. An important case of the beneficial effect of exercise is that of ageing. Ageing is characterized by a loss of homeostatic mechanisms, on many occasions leading to the development of frailty, one of the major geriatric syndromes; exercise is very useful to mitigate, or at least delay, it. Since exercise is so effective in reducing frailty, we would like to propose that exercise be considered as a supplement to other treatments. People all over the world have been taking nutritional supplements in the hope of improving their health. We would like to think of exercise as a physiological supplement, not only for treating diseases, but also for improving healthy ageing.

  6. Reducing CTGF/CCN2 slows down mdx muscle dystrophy and improves cell therapy.

    PubMed

    Morales, Maria Gabriela; Gutierrez, Jaime; Cabello-Verrugio, Claudio; Cabrera, Daniel; Lipson, Kenneth E; Goldschmeding, Roel; Brandan, Enrique

    2013-12-15

    In Duchenne muscular dystrophy (DMD) and the mdx mouse model, the absence of the cytoskeletal protein dystrophin causes defective anchoring of myofibres to the basal lamina. The resultant myofibre degeneration and necrosis lead to a progressive loss of muscle mass, increased fibrosis and ultimately fatal weakness. Connective tissue growth factor (CTGF/CCN-2) is critically involved in several chronic fibro-degenerative diseases. In DMD, the role of CTGF might extend well beyond replacement fibrosis secondary to loss of muscle fibres, since its overexpression in skeletal muscle could by itself induce a dystrophic phenotype. Using two independent approaches, we here show that mdx mice with reduced CTGF availability do indeed have less severe muscular dystrophy. Mdx mice with hemizygous CTGF deletion (mdx-Ctgf+/-), and mdx mice treated with a neutralizing anti-CTGF monoclonal antibody (FG-3019), performed better in an exercise endurance test, had better muscle strength in isolated muscles and reduced skeletal muscle impairment, apoptotic damage and fibrosis. Transforming growth factor type-β (TGF-β), pERK1/2 and p38 signalling remained unaffected during CTGF suppression. Moreover, both mdx-Ctgf+/- and FG-3019 treated mdx mice had improved grafting upon intramuscular injection of dystrophin-positive satellite cells. These findings reveal the potential of targeting CTGF to reduce disease progression and to improve cell therapy in DMD.

  7. Driving through the Great Recession: Why does motor vehicle fatality decrease when the economy slows down?

    PubMed Central

    He, Monica M.

    2016-01-01

    The relationship between short-term macroeconomic growth and temporary mortality increases remains strongest for motor vehicle (MV) crashes. In this paper, I investigate the mechanisms that explain falling MV fatality rates during the recent Great Recession. Using U.S. state-level panel data from 2003–2013, I first estimate the relationship between unemployment and MV fatality rate and then decompose it into risk and exposure factors for different types of MV crashes. Results reveal a significant 2.9 percent decrease in MV fatality rate for each percentage point increase in unemployment rate. This relationship is almost entirely explained by changes in the risk of driving rather than exposure to the amount of driving and is particularly robust for crashes involving large commercial trucks, multiple vehicles, and speeding cars. These findings provide evidence suggesting traffic patterns directly related to economic activity lead to higher risk of MV fatality rates when the economy improves. PMID:26967529
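
    One conventional way to write the risk/exposure decomposition is shown below; the paper's exact notation may differ.

```latex
% Per-capita fatalities factor into a risk term and an exposure term:
\[
  \frac{\text{fatalities}}{\text{population}}
  =
  \underbrace{\frac{\text{fatalities}}{\text{vehicle miles traveled}}}_{\text{risk}}
  \times
  \underbrace{\frac{\text{vehicle miles traveled}}{\text{population}}}_{\text{exposure}} .
\]
```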

  8. Criticality in the slowed-down boiling crisis at zero gravity.

    PubMed

    Charignon, T; Lloveras, P; Chatain, D; Truskinovsky, L; Vives, E; Beysens, D; Nikolayev, V S

    2015-05-01

    Boiling crisis is a transition between nucleate and film boiling. It occurs at a threshold value of the heat flux from the heater called the CHF (critical heat flux). Usually, boiling crisis studies are hindered by the high CHF and short transition duration (below 1 ms). Here we report on experiments in hydrogen near its liquid-vapor critical point, in which the CHF is low and the dynamics slow enough to be resolved. As the surface tension is very small under such conditions, the experiments are carried out in reduced gravity to preserve the conventional bubble geometry. Weightlessness is created artificially in two-phase hydrogen by compensating gravity with magnetic forces. We were able to reveal the fractal structure of the contour of the percolating cluster of dry areas at the heater that precedes the boiling crisis. We provide a direct statistical analysis of dry spot areas that confirms the boiling crisis at zero gravity as a scale-free phenomenon. We observed that, in agreement with theoretical predictions, the saturated boiling CHF tends to zero (within the precision of our thermal control system) in zero gravity, which suggests that the boiling crisis may be observed at any heat flux provided the experiment lasts long enough.

  9. The human tongue slows down to speak: muscle fibers of the human tongue.

    PubMed

    Sanders, Ira; Mu, Liancai; Amirali, Asif; Su, Hungxi; Sobotka, Stanislaw

    2013-10-01

    Little is known about the specializations of human tongue muscles. In this study, myofibrillar adenosine triphosphatase (mATPase) histochemical staining was used to study the percentage and distribution of slow twitch muscle fibers (slow MFs) within the tongue muscles of four neurologically normal human adults and in specimens from a 2-year-old human, a newborn human, an adult with idiopathic Parkinson's disease (IPD), and a macaque monkey. The average percentage of slow MFs in the adult and 2-year-old muscle specimens was 54%, and in the IPD specimen 45%, while the neonatal human (32%) and the macaque monkey (28%) had markedly fewer slow MFs. In contrast, the tongue muscles of the rat and cat have been reported to have no slow MFs. There was a marked spatial gradient in the distribution of slow MFs, with the highest percentages found medially and posteriorly. Normal adult tongue muscles were found to have a variety of uniquely specialized features, including MF-type grouping (usually found in neuromuscular disorders), large amounts of loose connective tissue, and short branching MFs. In summary, normal adult human tongue muscles have by far the highest proportion of slow MFs of any mammalian tongue studied to date. Moreover, adult human tongue muscles have multiple unique anatomic features. As the tongue shape changes seen during speech articulation are unique to humans, we hypothesize that the large proportion of slow MFs and the anatomical specializations observed in the adult human tongue have evolved to perform these movements.

  10. Moving Clocks Do Not Always Appear to Slow down: Don't Neglect the Doppler Effect

    ERIC Educational Resources Information Center

    Wang, Frank

    2013-01-01

    In popular accounts of the time dilation effect in Einstein's special relativity, one often encounters the statement that moving clocks run slow. For instance, in the acclaimed PBS program "NOVA," Professor Brian Greene says, "[I]f I walk toward that guy... he'll perceive my watch ticking slower." Also in his earlier piece for The New York Times,…

  11. Slow down and remember to remember! A delay theory of prospective memory costs.

    PubMed

    Heathcote, Andrew; Loft, Shayne; Remington, Roger W

    2015-04-01

    Event-based prospective memory (PM) requires a deferred action to be performed when a target event is encountered in the future. Individuals are often slower to perform a concurrent ongoing task when they have PM task requirements relative to performing the ongoing task in isolation. Theories differ in their detailed interpretations of this PM cost, but all assume that the PM task shares limited-capacity resources with the ongoing task. In what was interpreted as support for this core assumption, diffusion model fits reported by Boywitt and Rummel (2012) and Horn, Bayen, and Smith (2011) indicated that PM demands reduced the rate of accumulation of evidence about ongoing task choices. We reevaluate this support by fitting both the diffusion and linear ballistic accumulator (Brown & Heathcote, 2008) models to these same data sets and to 2 new data sets better suited to model fitting. There was little effect of PM demands on evidence accumulation rates, but PM demands consistently increased the evidence required for ongoing task response selection (response thresholds). A further analysis of data reported by Lourenço, White, and Maylor (2013) found that participants differentially adjusted their response thresholds to slow responses associated with stimuli potentially containing PM targets. These findings are consistent with a delay theory account of costs, which contends that individuals slow ongoing task responses to allow more time for PM response selection to occur. Our results call for a fundamental reevaluation of current capacity-sharing theories of PM cost that until now have dominated the PM literature.

  12. Hypercapnia slows down proliferation and apoptosis of human bone marrow promyeloblasts.

    PubMed

    Hamad, Mouna; Irhimeh, Mohammad R; Abbas, Ali

    2016-09-01

    Stem cells are being applied in increasingly diverse fields of research and therapy; as such, being able to grow and culture them in scalable quantities would be a major advantage. Gas mixtures containing 5% CO2 are typical for the in vitro culturing of cells. The effect of varying the CO2 concentration on promyeloblast KG-1a cells was investigated in this paper. KG-1a cells are characterized by high expression of the CD34 surface antigen, which is an important clinical surface marker for human hematopoietic stem cell (HSC) transplantation. KG-1a cells were cultured at three CO2 concentrations (1, 5 and 15%). Cells were batch-cultured and analyzed daily for viability, size, morphology, proliferation, and apoptosis using flow cytometry. No considerable differences were noted in KG-1a cell morphological properties at any of the three CO2 levels, as they retained their myeloblast appearance. The calculated population doubling time increased with an increase in CO2 concentration. Enhanced cell proliferation was seen in cells cultured in hypercapnic conditions, in contrast to significantly decreased proliferation in hypocapnic populations. Flow cytometry analysis revealed that apoptosis was significantly (p = 0.0032) delayed in hypercapnic cultures, in parallel to accelerated apoptosis in hypocapnic ones. These results, which to the best of our knowledge are novel, suggest that elevated levels of CO2 are favorable for the enhanced proliferation of bone marrow (BM) progenitor cells such as HSCs.

  13. Intermittent flow in yield-stress fluids slows down chaotic mixing.

    PubMed

    Wendell, D M; Pigeonneau, F; Gouillart, E; Jop, P

    2013-08-01

    We present experimental results of chaotic mixing of Newtonian fluids and yield-stress fluids using a rod-stirring protocol with a rotating vessel. We show how the mixing of yield-stress fluids by chaotic advection is reduced compared to the mixing of Newtonian fluids and explain our results, bringing to light the relevant mechanisms: the presence of fluid that only flows intermittently, a phenomenon enhanced by the yield stress, and the importance of the peripheral region. This finding is confirmed via numerical simulations. Anomalously slow mixing is observed when the synchronization of different stirring elements leads to the repetition of slow stretching for the same fluid particles.

  14. Being "Lazy" and Slowing Down: Toward Decolonizing Time, Our Body, and Pedagogy

    ERIC Educational Resources Information Center

    Shahjahan, Riyad A.

    2015-01-01

    In recent years, scholars have critiqued norms of neoliberal higher education (HE) by calling for embodied and anti-oppressive teaching and learning. Implicit in these accounts, but lacking elaboration, is a concern with reformulating the notion of "time" and temporalities of academic life. Employing a coloniality perspective, this…

  15. Stathmin slows down guanosine diphosphate dissociation from tubulin in a phosphorylation-controlled fashion.

    PubMed

    Amayed, P; Carlier, M F; Pantaloni, D

    2000-10-10

    Stathmin is an important protein that interacts with tubulin and regulates microtubule dynamics in a phosphorylation-controlled fashion. Here we show that the dissociation of guanosine 5'-diphosphate (GDP) from beta-tubulin is slowed 20-fold in the (tubulin)₂-stathmin ternary complex (T₂S). The kinetics of GDP or guanosine 5'-triphosphate (GTP) dissociation from tubulin have been monitored by the change in tryptophan fluorescence of tubulin upon exchanging 2-amino-6-mercapto-9-beta-ribofuranosylpurine 5'-diphosphate (S6-GDP) for tubulin-bound guanine nucleotide. At molar ratios of stathmin to tubulin lower than 0.5, biphasic kinetics were observed, indicating that the dynamics of the complex is extremely slow, consistent with its high stability. The method was used to characterize the effects of phosphorylation of stathmin on its interaction with tubulin. The serine-to-glutamate substitution of all four phosphorylatable serines of stathmin (4E-stathmin) weakens the stability of the T₂S complex by about 2 orders of magnitude. The phosphorylation of serines 16 and 63 in stathmin has a more severe effect and weakens the stability of T₂S 10⁴-fold. The rate of GDP dissociation is lowered only 7-fold and 4-fold in the complexes of tubulin with 4E-stathmin and diphosphostathmin, respectively. Sedimentation velocity studies support the conclusions of the nucleotide exchange data and show that the T₂S complexes formed between tubulin and 4E-stathmin or diphosphostathmin are less compact than the highly stable T₂S complex. The correlation between the effect of phosphorylation of stathmin on the stability of the T₂S complex measured in vitro and the function of stathmin in vivo is discussed.

  16. Glassy properties and viscous slowing down: An analysis of the correlation between nonergodicity factor and fragility

    NASA Astrophysics Data System (ADS)

    Niss, Kristine; Dalle-Ferrier, Cécile; Giordano, Valentina M.; Monaco, Giulio; Frick, Bernhard; Alba-Simionesco, Christiane

    2008-11-01

    We present an extensive analysis of the proposed relationship [T. Scopigno et al., Science 302, 849 (2003)] between the fragility of glass-forming liquids and the nonergodicity factor as measured by inelastic x-ray scattering. We test the robustness of the correlation through the investigation of the relative change under pressure of the speed of sound, nonergodicity factor, and broadening of the acoustic excitations of a molecular glass former, cumene, and of a polymer, polyisobutylene. For polyisobutylene, we also perform a similar study by varying its molecular weight. Moreover, we have included new results on liquids presenting an exceptionally high fragility index m under ambient conditions. We show that the linear relation, proposed by Scopigno et al. [Science 302, 849 (2003)], between the fragility, measured in the liquid state, and the slope α of the inverse nonergodicity factor as a function of T/Tg, measured in the glassy state, is not verified when the data base is enlarged. In particular, while there is still a trend in the suggested direction at atmospheric pressure, its consistency is not maintained when pressure is introduced as an extra control parameter modifying the fragility: whatever the variation in the isobaric fragility, the inverse nonergodicity factor increases or remains constant within the error bars, and one observes a systematic increase in the slope α when the temperature is scaled by Tg(P). To rule out particular aspects that might cause the relation to fail, we have also replaced the fragility by other related properties often invoked in its interpretation, e.g., the thermodynamic fragility. Moreover, we find, as previously proposed by two of us [K. Niss and C. Alba-Simionesco, Phys. Rev. B 74, 024205 (2006)], that the nonergodicity factor evaluated at the glass transition qualitatively reflects the effect of density on the relaxation time, even though in this case no clear quantitative correlations appear.

  17. Previous physical exercise slows down the complications from experimental diabetes in the calcaneal tendon

    PubMed Central

    Bezerra, Márcio Almeida; da Silva Nery, Cybelle; de Castro Silveira, Patrícia Verçoza; de Mesquita, Gabriel Nunes; de Gomes Figueiredo, Thainá; Teixeira, Magno Felipe Holanda Barboza Inácio; de Moraes, Silvia Regina Arruda

    2016-01-01

    Summary. Background: the complications caused by diabetes increase fragility in the muscle-tendon system, resulting in degeneration and easier rupture. To counter this, therapies that increase the body's glucose metabolism, such as physical activity, have been used after the confirmation of diabetes. We evaluated the biomechanical behavior of the calcaneal tendon and the metabolic parameters of rats induced to experimental diabetes and submitted to pre- and post-induction exercise. Methods: 54 male Wistar rats were randomly divided into four groups: Control Group (CG), Swimming Group (SG), Diabetic Group (DG), and Diabetic Swimming Group (DSG). The trained groups were submitted to a swimming exercise protocol, while the unexercised groups remained restricted to their cages. Metabolic and biomechanical parameters were assessed. Results: the clinical parameters of the DSG showed no change due to the exercise protocol. The tendon analysis of the DSG showed increased values for the elastic modulus (p<0.01) and maximum tension (p<0.001) and a lower value for the transverse area (p<0.001) when compared to the SG, but showed no difference when compared to the DG. Conclusion: the homogeneous values presented by the tendons of the DG and DSG show that physical exercise applied pre- and post-induction was not enough to promote a protective effect against the tendinopathy process, but it did prevent the progression of degeneration. PMID:27331036

  18. What's the Rush?: Slowing down Our "Hurried" Approach to Infant and Toddler Development

    ERIC Educational Resources Information Center

    Bonnett, Tina

    2012-01-01

    What high expectations people place on their infants and toddlers who are just beginning to understand this great big world and all of its complexities! In an attempt to ensure that growth and learning occur, the fundamental needs of infants and toddlers are often pushed aside as people rush the young child to achieve the next developmental…

  19. Problematic assumptions have slowed down depression research: why symptoms, not syndromes are the way forward

    PubMed Central

    Fried, Eiko I.

    2015-01-01

    Major depression (MD) is a highly heterogeneous diagnostic category. Diverse symptoms such as sad mood, anhedonia, and fatigue are routinely added to an unweighted sum-score, and cutoffs are used to distinguish between depressed participants and healthy controls. Researchers then investigate outcome variables like MD risk factors, biomarkers, and treatment response in such samples. These practices presuppose that (1) depression is a discrete condition, and that (2) symptoms are interchangeable indicators of this latent disorder. Here I review these two assumptions, elucidate their historical roots, show how deeply ingrained they are in psychological and psychiatric research, and document that they are at odds with the evidence. Depression is not a consistent syndrome with clearly demarcated boundaries, and depression symptoms are not interchangeable indicators of an underlying disorder. Current research practices lump individuals with very different problems into one category, which has contributed to the remarkably slow progress in key research domains such as the development of efficacious antidepressants or the identification of biomarkers for depression. The recently proposed network framework offers an alternative to these problematic assumptions. MD is not understood as a distinct condition, but as a heterogeneous symptom cluster that substantially overlaps with other syndromes such as anxiety disorders. MD is not framed as an underlying disease with a number of equivalent indicators, but as a network of symptoms that have direct causal influence on each other: insomnia can cause fatigue, which can then trigger concentration and psychomotor problems. This approach offers new opportunities for constructing an empirically based classification system and has broad implications for future research. PMID:25852621

  20. One for the To-Do List: Slow Down and Think.

    ERIC Educational Resources Information Center

    Barnett, Bruce G.; O'Mahoney, Gary

    2002-01-01

    Describes a professional learning effort in Victoria, Australia, designed to help principals process and learn from their experiences with recent school reforms. Principals reported that learning to reflect helped them do their jobs more effectively, challenged their conventional ways of thinking and acting, helped them be more proactive, and…

  1. The automatic visual simulation of words: A memory reactivated mask slows down conceptual access.

    PubMed

    Rey, Amandine E; Riou, Benoit; Vallet, Guillaume T; Versace, Rémy

    2017-03-01

    How do we represent the meaning of words? The present study assesses whether access to conceptual knowledge requires the reenactment of the sensory components of a concept. This reenactment, that is, simulation, was tested in a word categorisation task using an innovative masking paradigm. We hypothesised that a meaningless reactivated visual mask should interfere with the simulation of the visual dimension of concrete words. This assumption was tested in a paradigm in which participants were not aware of the link between the visual mask and the words to be processed. In the first phase, participants created a tone-visual mask or tone-control stimulus association. In the test phase, they categorised words that were presented with one of the tones. Results showed that words were processed more slowly when they were presented with the reactivated mask. This interference effect was correlated only with, and explained only by, the visual perceptual strength of the words (i.e., our experience with the visual dimensions associated with concepts) and not with other characteristics. We interpret these findings in terms of word access, which may involve the simulation of sensory features associated with the concept, even when participants are not explicitly required to access visual properties.

  2. A compensatory algorithm for the slow-down effect on constant-time-separation approaches

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.

    1991-01-01

    In seeking methods to improve airport capacity, the question arose as to whether an electronic display could provide information enabling the pilot to be responsible for self-separation under instrument conditions, allowing the practical implementation of reduced-separation, multiple glide path approaches. A time-based, closed-loop algorithm was developed and validated in a simulator for in-trail (one aircraft behind the other) approach and landing. The algorithm was designed to reduce the effects of approach speed reduction prior to landing for the trailing aircraft, as well as the dispersion of the interarrival times. The operational task for the validation was an instrument approach to landing while following a single lead aircraft on the same approach path. The desired landing separation was 60 seconds for these approaches. An open-loop algorithm, previously developed, was used as a basis for comparison. The results showed that, relative to the open-loop algorithm, the closed-loop one could theoretically provide a 6% increase in runway throughput. The use of the closed-loop algorithm did not affect path tracking performance, and pilot comments indicated that its guidance would be acceptable from an operational standpoint. From these results, it is concluded that by using a time-based, closed-loop spacing algorithm, precise interarrival time intervals may be achievable with operationally acceptable pilot workload.
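
    The abstract does not reproduce the control law itself; the following is a minimal sketch, under our own assumptions, of a time-based, closed-loop spacing rule of the kind described: the trailing aircraft adjusts its commanded speed in proportion to the error between the measured and desired time separation. All names, gains, and dynamics here are hypothetical.

        # Minimal sketch of a time-based, closed-loop in-trail spacing law
        # (illustrative only; the gain and the simple proportional form are
        # assumptions, not the algorithm from the paper).

        def commanded_speed(nominal_speed, measured_separation,
                            desired_separation=60.0, gain=0.5):
            """Proportional speed command on the time-separation error (s)."""
            error = measured_separation - desired_separation  # >0: too far behind
            return nominal_speed * (1.0 + gain * error / desired_separation)

        # Example: trailing aircraft measures 66 s behind the leader at 70 m/s
        # and speeds up slightly to close the gap toward 60 s.
        print(commanded_speed(70.0, 66.0))  # -> 73.5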

  3. Experimental therapies and ongoing clinical trials to slow down progression of ADPKD.

    PubMed

    Irazabal, Maria V; Torres, Vicente E

    2013-02-01

    The improvement of imaging techniques over the years has contributed to the understanding of the natural history of autosomal dominant polycystic kidney disease (ADPKD) and facilitated the observation of its structural progression. Advances in molecular biology and genetics have made possible a greater understanding of the genetic, molecular, and cellular pathophysiologic mechanisms responsible for its development and have laid the foundation for the development of potential new therapies. Therapies targeting genetic mechanisms in ADPKD have inherent limitations. As a result, most experimental therapies at present are aimed at delaying the growth of the cysts and the associated interstitial inflammation and fibrosis by targeting tubular epithelial cell proliferation and fluid secretion by the cystic epithelium. Several interventions affecting many of the signaling pathways disrupted in ADPKD have been effective in animal models, and some are currently being tested in clinical trials.

  4. Enhancing light slow-down in semiconductor optical amplifiers by optical filtering.

    PubMed

    Xue, Weiqi; Chen, Yaohui; Ohman, Filip; Sales, Salvador; Mørk, Jesper

    2008-05-15

    We show that the degree of light-speed control in a semiconductor optical amplifier can be significantly extended by the introduction of optical filtering. We achieve a phase shift of approximately 150 degrees at 19 GHz modulation frequency, corresponding to a several-fold increase of the absolute phase shift as well as the achievable bandwidth. We show good quantitative agreement with numerical simulations, including the effects of population oscillations and four-wave mixing, and provide a simple physical explanation based on an analytical perturbation approach.

  5. A naturally heterogeneous landscape can effectively slow down the dispersal of aquatic microcrustaceans.

    PubMed

    Juračka, Petr J; Declerck, Steven A J; Vondrák, Daniel; Beran, Luboš; Černý, Martin; Petrusek, Adam

    2016-03-01

    Several studies have suggested that aquatic microcrustaceans are relatively efficient dispersers in a variety of landscapes, whereas others have indicated dispersal limitation at large spatial scales or under specific circumstances. Based on a survey of a set of recently created ponds in an area of approximately 18 × 25 km, we found multiple indications of dispersal limitation affecting the assembly of microcrustacean communities. Spatial patterns in community composition were better explained by the geomorphological structure of the landscape than by mere geographic distances. This suggests that ridges separating the network of valleys act as dispersal barriers and as such may channel the dispersal routes of the studied taxa and, likely, of their animal vectors as well. Dispersal limitation was further supported by a strong positive relationship between species richness and the abundance of neighboring water bodies, suggesting that isolation affects colonization rates. Finally, the apparent dispersal limitation of microcrustaceans is further corroborated by the observation of low colonization rates in newly dug experimental ponds in the study area.

  6. Similar slow down in running speed progression in species under human pressure.

    PubMed

    Desgorces, F-D; Berthelot, G; Charmantier, A; Tafflet, M; Schaal, K; Jarne, P; Toussaint, J-F

    2012-09-01

    Running speed in animals depends on both genetic and environmental conditions. Maximal speeds were here analysed in horses, dogs and humans using data sets on the 10 best performers covering more than a century of races, including a variety of distances in humans (200-1500 m). Speed progressed fast in all three species and then reached a plateau. Based on a Gompertz model, the current best performances reach from 97.4% of maximal velocity in greyhounds to 100.3% in humans. Further analysis based on a subset of individuals and using an 'animal model' shows that running speed is heritable in horses (h² = 0.438, P = 0.01) and almost so in dogs (h² = 0.183, P = 0.08), suggesting the involvement of genetic factors. Speed progression in humans is more likely due to an enlarged population of runners, associated with improved training practices. The analysis of a data subset (the last 40 years in the 800 and 1500 m) further showed that East Africans have strikingly improved their speed, now reaching the upper part of the human distribution, whereas that of Nordic runners stagnated in the 800 m and even declined in the 1500 m. Although speed progression in dogs and horses on one side and humans on the other has not been affected by the same balance of genetic and environmental forces, it is likely that further progress will be extremely limited.
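
    The Gompertz form underlying such progression fits can be written as (our notation, not necessarily the authors' exact parameterisation):

        v(t) = a \exp\!\left(-b\, e^{-c t}\right)

    where a is the asymptotic maximal speed, so the percentages quoted above correspond to v(t_now)/a, while b and c set the onset and rate of the historical rise.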

  7. REAC technology and hyaluron synthase 2, an interesting network to slow down stem cell senescence.

    PubMed

    Maioli, Margherita; Rinaldi, Salvatore; Pigliaru, Gianfranco; Santaniello, Sara; Basoli, Valentina; Castagna, Alessandro; Fontani, Vania; Ventura, Carlo

    2016-06-24

    Hyaluronic acid (HA) plays a fundamental role in cell polarity and hydrodynamic processes, affording significant modulation of proliferation, migration, morphogenesis and senescence, with deep implications for the ability of stem cells to execute their differentiating plans. The Radio Electric Asymmetric Conveyer (REAC) technology aims to optimize ion fluxes at the molecular level in order to drive the molecular mechanisms underlying cellular asymmetry and polarization. Here, we show that treatment with 4-methylumbelliferone (4-MU), a potent repressor of type 2 HA synthase and endogenous HA synthesis, dramatically antagonized the ability of REAC to recover the gene and protein expression of Bmi1, Oct4, Sox2, and Nanog in ADhMSCs that had been made senescent by prolonged culture up to the 30th passage. In senescent ADhMSCs, 4-MU also counteracted the ability of REAC to rescue the gene expression of TERT and the associated resumption of telomerase activity. Hence, the anti-senescence action of REAC is largely dependent upon the availability of endogenous HA synthesis. Endogenous HA and HA-binding proteins together with REAC technology create an interesting network that acts on the modulation of cell polarity and the intracellular environment. This suggests that REAC technology is effective at the intracellular niche level of stem cell regulation.

  8. Slow Down or Speed up? Lowering Periapsis Versus Escaping from a Circular Orbit

    NASA Astrophysics Data System (ADS)

    Blanco, Philip

    2017-01-01

    Paul Hewitt's Figuring Physics in the Feb. 2016 issue asked whether it would take a larger velocity change to stop a satellite in a circular orbit or to cause it to escape. An extension of this problem asks: What minimum velocity change is required to crash a satellite into the planet, and how does that compare with the velocity change required for escape? The solution presented here, using conservation principles taught in a mechanics course, serves as an introduction to orbital maneuvers, and can be applied to questions regarding the removal of objects orbiting Earth, other planets, and the Sun.
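
    Using only the conservation principles the article refers to, the relevant velocity changes from a circular orbit of radius r around a body of mass M and radius R are (a standard result, with the atmosphere neglected):

        v_c = \sqrt{GM/r}, \qquad \Delta v_{\mathrm{esc}} = (\sqrt{2} - 1)\, v_c, \qquad \Delta v_{\mathrm{crash}} = v_c \left(1 - \sqrt{\tfrac{2R}{r + R}}\right)

    where Δv_crash is the retrograde burn that lowers the periapsis of the transfer ellipse to the surface. Equating the two expressions gives r ≈ 4.8 R: for higher circular orbits, escaping is actually cheaper than crashing.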

  9. Slow Down and Enjoy: The Effects of Cycling Cadence on Pleasure.

    PubMed

    Agrícola, Pedro M D; da Silva Machado, Daniel G; de Farias Junior, Luiz F; do Nascimento Neto, Luiz I; Fonteles, André I; da Silva, Samara K A; Chao, Cheng H N; Fontes, Eduardo B; Elsangedy, Hassan M; Okano, Alexandre H

    2016-10-17

    Pleasure plays a key role in exercise behavior, but the influence of cycling cadence on it remains to be elucidated. Here, we verified the effects of cycling cadence on affect, ratings of perceived exertion, and physiological responses. In three sessions, 15 men performed a maximal cycling incremental test followed by two 30-min constant workload (50% of peak power) bouts at 60 and 100 r/min. Pleasure was higher when participants cycled at 60 r/min, whereas ratings of perceived exertion, heart rate, and oxygen uptake were lower (p < .05). Additionally, the rate of decrease in pleasure and of increase in ratings of perceived exertion was less steep at 60 r/min (p < .01). Cycling at 60 r/min is more pleasant, and the perceived effort and physiological demand are lower than at 100 r/min.

  10. Dispersal evolution in the presence of Allee effects can speed up or slow down invasions.

    PubMed

    Shaw, Allison K; Kokko, Hanna

    2015-05-01

    Successful invasions by sexually reproducing species depend on the ability of individuals to mate. Finding mates can be particularly challenging at low densities (a mate-finding Allee effect), a factor that is only implicitly accounted for by most invasion models, which typically assume asexual populations. Existing theory on single-sex populations suggests that dispersal evolution in the presence of a mate-finding Allee effect slows invasions. Here we develop a two-sex model to determine how mating system, strength of an Allee effect, and dispersal evolution influence invasion speed. We show that mating system differences can dramatically alter the spread rate. We also find a broader spectrum of outcomes than earlier work suggests. Allowing dispersal to evolve in a spreading context can sometimes alleviate the mate-finding Allee effect and slow the rate of spread. However, we demonstrate the opposite when resource competition among females remains high: evolution then acts to speed up the spread rate, despite simultaneously exacerbating the Allee effect. Our results highlight the importance of the timing of mating relative to dispersal and the strength of resource competition for consideration in future empirical studies.
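
    A common textbook way to write a mate-finding Allee effect of the kind modelled here (not necessarily the authors' exact equations) is to discount logistic growth by the probability of encountering a mate:

        \frac{dN}{dt} = r N \left(1 - \frac{N}{K}\right) \frac{N}{N + \theta}

    where θ is the density at which mate finding succeeds half the time; as N falls below θ, per-capita growth collapses, which is the mechanism that can stall or slow a low-density invasion front.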

  11. Rhythm perception: Speeding up or slowing down affects different subcomponents of the ERP P3 complex.

    PubMed

    Jongsma, Marijtje L A; Meeuwissen, Esther; Vos, Piet G; Maes, Roald

    2007-07-01

    The aim of this study was to investigate, by measuring the event related potential (ERP) P3 complex, whether the perception of small accelerations differs from that of small decelerations. Participants had to decide whether the last beat of a short sequence was presented 'too early' or 'too late'. Target beats were accelerated or decelerated by 0%, 2%, 5%, or 10%. Individuals differed in their capability to detect small tempo changes: good responders were able to identify all tempo changes, whereas poor responders were only able to identify large (10%) tempo changes. In addition, we found that tempo changes affected two subcomponents of the ERP P3 in good responders: accelerations increased a late-P3 amplitude, whereas decelerations increased an early-P3 amplitude. These results imply that it is possible, in principle, to measure differential P3 effects within a single task, which is important for acquiring more refined knowledge of the different subcomponents of the ERP P3 complex and the cognitive processes by which they are elicited.

  12. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.
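
    The weighted eigenfunction expansion at the core of the framework can be written schematically as (notation assumed):

        \widehat{f}(p) = \sum_{j=0}^{k} e^{-\lambda_j \sigma}\, \beta_j\, \psi_j(p)

    where ψ_j and λ_j are the Laplace-Beltrami eigenfunctions and eigenvalues of the surface, σ is the kernel bandwidth, and the coefficients β_j are estimated by least squares; σ → 0 recovers ordinary eigenfunction regression, while larger σ reproduces isotropic heat diffusion of duration σ.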

  13. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curves performance.
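
    The estimator itself is not reproduced in the abstract; a minimal Nadaraya-Watson-style sketch with anisotropic Gaussian kernels conveys the idea. In practice the steering covariance of each sample would be derived from the local spatial correlation of the data; here it is supplied by hand, and all names are hypothetical.

        import numpy as np

        def steering_kernel_classify(x, data_xy, data_facies, covs):
            """Weight each hard-data sample by an anisotropic Gaussian kernel
            and return the facies with the largest summed weight (illustrative)."""
            scores = {}
            for xy, facies, C in zip(data_xy, data_facies, covs):
                d = x - xy
                w = np.exp(-0.5 * d @ np.linalg.solve(C, d))
                scores[facies] = scores.get(facies, 0.0) + w
            return max(scores, key=scores.get)

        # Two samples whose kernels are elongated along the x-axis, the assumed
        # direction of highest local spatial correlation.
        data_xy = [np.array([0.0, 0.0]), np.array([5.0, 0.0])]
        covs = [np.diag([4.0, 0.5])] * 2
        print(steering_kernel_classify(np.array([2.0, 0.2]),
                                       data_xy, ["sand", "clay"], covs))  # -> sand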

  14. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
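
    Schematically, SKS-type corrections estimate scatter as a convolution of the projection with a kernel and subtract it, and the fASKS variant performs the convolution in Fourier space. The one-dimensional toy below shows only that skeleton; the kernel shape, its amplitude, and the omission of the thickness-adaptive modulation are all simplifications of ours.

        import numpy as np

        def scatter_correct(projection, kernel):
            """Toy SKS step: scatter = projection (*) kernel via FFT,
            subtracted from the measurement (1-D, circular, illustrative)."""
            scatter = np.real(np.fft.ifft(np.fft.fft(projection) * np.fft.fft(kernel)))
            return projection - scatter

        n = 256
        x = np.arange(n)
        kernel = np.exp(-((x - n // 2) ** 2) / (2.0 * 20.0 ** 2))
        kernel *= 0.3 / kernel.sum()       # assumed scatter-to-primary ratio ~0.3
        kernel = np.roll(kernel, -n // 2)  # center at index 0 for circular convolution
        projection = np.ones(n)
        print(scatter_correct(projection, kernel)[:3])  # -> [0.7 0.7 0.7]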

  15. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    PubMed

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

    This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach for tackling pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which builds on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation of the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
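
    The "multiple kernel parameters" tuned by ESBMM are, in the usual formulation of this line of work, one width per input dimension of the Gaussian RBF kernel (notation assumed):

        k(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\sum_{d=1}^{D} \frac{(x_d - y_d)^2}{\sigma_d^2}\right)

    so that the algorithm searches over the vector (σ_1, ..., σ_D) rather than a single shared bandwidth.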

  16. Determining the Parameters of the Hereditary Kernels of Nonlinear Viscoelastic Isotropic Materials in Torsion

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Ragulina, V. S.; Fernati, P. V.

    2015-03-01

    A method for determining the parameters of the hereditary kernels of nonlinear viscoelastic materials is tested under conditions of pure torsion. A Rabotnov-type model is chosen. The parameters of the hereditary kernels are determined by fitting discrete values of the kernels found using a similarity condition. The discrete values of the kernels in the zone of singularity occurring in short-term tests are found using weight functions. The Abel kernel, a combination of power and exponential functions, and a fractional-exponential function are considered.
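
    For reference, the singular kernels named above are commonly written as follows (one common convention; the paper's normalisation may differ):

        K(t - \tau) = \frac{\lambda}{(t - \tau)^{\alpha}} \quad \text{(Abel)}, \qquad
        \mathcal{E}_\alpha(\beta, t) = t^{\alpha} \sum_{n=0}^{\infty} \frac{\beta^{n}\, t^{n(1+\alpha)}}{\Gamma\big[(n+1)(1+\alpha)\big]} \quad \text{(Rabotnov fractional-exponential)}

    with the exponent chosen so that each kernel is weakly singular at the origin; it is this singularity that the weight functions are introduced to resolve in short-term tests.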

  17. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 lymphosarcoma and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  18. Mexican Hat Wavelet Kernel ELM for Multiclass Classification

    PubMed Central

    Wang, Jie; Ma, Tian-Lei

    2017-01-01

    Kernel extreme learning machine (KELM) is a novel feedforward neural network widely used in classification problems. To some extent, it solves the problems of invalid nodes and large computational complexity in ELM. However, the traditional KELM classifier usually has low test accuracy when facing multiclass classification problems. To solve this problem, a new classifier, the Mexican Hat wavelet KELM classifier, is proposed in this paper. The proposed classifier improves the training accuracy and reduces the training time in multiclass classification problems. Moreover, the validity of the Mexican Hat wavelet as a kernel function of ELM is rigorously proved. Experimental results on different data sets show that the performance of the proposed classifier is significantly superior to that of the compared classifiers. PMID:28321249
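
    A compact sketch of a KELM classifier with a product-form Mexican hat wavelet kernel is given below. The closed-form solve follows Huang's standard KELM formulation; the kernel construction is one common wavelet-kernel form and may differ from the paper's exact definition.

        import numpy as np

        def mexican_hat_kernel(X, Y, a=1.0):
            """Product-form Mexican hat wavelet kernel (assumed construction)."""
            U = (X[:, None, :] - Y[None, :, :]) / a        # pairwise differences
            return np.prod((1.0 - U**2) * np.exp(-U**2 / 2.0), axis=2)

        def kelm_train(X, T, C=100.0, a=1.0):
            """Kernel ELM closed form: solve (I/C + K) beta = T."""
            K = mexican_hat_kernel(X, X, a)
            return np.linalg.solve(np.eye(len(X)) / C + K, T)

        def kelm_predict(Xnew, X, beta, a=1.0):
            return mexican_hat_kernel(Xnew, X, a) @ beta

        # Tiny two-class example with one-hot targets.
        X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [0.9, 1.1]])
        T = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
        beta = kelm_train(X, T)
        print(kelm_predict(np.array([[0.05, 0.05]]), X, beta).argmax())  # -> 0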

  19. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework.

    PubMed

    Guo, Qing; Dong, Fangmin; Sun, Shuifa; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    A contourlet-domain image denoising framework based on a novel Improved Rotating Kernel Transformation (IRKT) is proposed, in which the differences between subbands in the contourlet domain are taken into account. In detail: (1) a novel IRKT is proposed to calculate the direction statistic of the image; its validity is verified by comparing the extracted edge information with a state-of-the-art edge detection algorithm; (2) the direction statistic, which captures the differences between subbands, is introduced as weights into threshold-function-based contourlet-domain denoising approaches to obtain the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. Denoising results on conventional test images and on Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images.
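
    The improved transformation itself is not spelled out in the abstract, but the classical rotating-kernel idea it builds on can be sketched briefly: convolve the image with a thin line kernel at several orientations and keep, per pixel, the strongest response and its orientation. Everything below (kernel size, angle count, interpolation) is our own illustrative choice.

        import numpy as np
        from scipy.ndimage import convolve, rotate

        def rkt_direction(image, size=9, n_angles=8):
            """Toy rotating-kernel direction statistic (illustrative only;
            the paper's improved variant differs in detail)."""
            base = np.zeros((size, size))
            base[size // 2, :] = 1.0 / size     # horizontal line mask
            responses = []
            for k in range(n_angles):
                kern = rotate(base, 180.0 * k / n_angles, reshape=False, order=1)
                responses.append(convolve(image, kern, mode='nearest'))
            stack = np.stack(responses)
            return stack.max(axis=0), stack.argmax(axis=0)

        # A horizontal line is picked up by the k = 0 orientation.
        img = np.zeros((32, 32))
        img[16, :] = 1.0
        strength, direction = rkt_direction(img)
        print(direction[16, 16])  # -> 0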

  20. Robust visual tracking via adaptive kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Wang, Desheng; Liao, Qingmin

    2016-10-01

    Correlation filter based trackers have proved to be very efficient and robust in object tracking, with performance competitive with state-of-the-art trackers. In this paper, we propose a novel object tracking method named Adaptive Kernelized Correlation Filter (AKCF), which incorporates the Kernelized Correlation Filter (KCF) with the Structured Output Support Vector Machine (SOSVM) learning method in a collaborative and adaptive way, and can effectively handle severe object appearance changes at low computational cost. AKCF works by dynamically adjusting the learning rate of KCF and reversely verifying the intermediate tracking result with an online SOSVM classifier. We also bring Color Names features into this formulation, whose rich encoded information effectively boosts performance. Experimental results on several challenging benchmark datasets reveal that our approach outperforms numerous state-of-the-art trackers.
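
    AKCF itself is not public from the abstract alone, but the KCF machinery it adapts is standard (Henriques et al.); the condensed sketch below shows the Fourier-domain training and detection steps, omitting cosine windowing, feature extraction, Color Names, the SOSVM verifier, and model updating.

        import numpy as np

        def gaussian_correlation(x, z, sigma=0.5):
            """Kernel correlation k^{xz} for a Gaussian kernel, computed
            cheaply in the Fourier domain (the standard KCF trick)."""
            c = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))))
            d = (x**2).sum() + (z**2).sum() - 2.0 * c
            return np.exp(-np.maximum(d, 0.0) / (sigma**2 * x.size))

        def kcf_train(x, y, lam=1e-4):
            """Dual ridge-regression coefficients, held in the Fourier domain."""
            k = gaussian_correlation(x, x)
            return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

        def kcf_detect(alpha_f, x_model, z):
            """Response map; its peak gives the target's translation."""
            k = gaussian_correlation(x_model, z)
            return np.real(np.fft.ifft2(np.fft.fft2(k) * alpha_f))

        # Training on a patch and detecting on the same patch peaks at (0, 0).
        rng = np.random.default_rng(0)
        x = rng.standard_normal((32, 32))
        y = np.zeros((32, 32)); y[0, 0] = 1.0
        resp = kcf_detect(kcf_train(x, y), x, x)
        print(np.unravel_index(resp.argmax(), resp.shape))  # -> (0, 0)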