Science.gov

Sample records for kernels slowing-down

  1. Is cosmic acceleration slowing down?

    SciTech Connect

    Shafieloo, Arman; Sahni, Varun; Starobinsky, Alexei A.

    2009-11-15

    We investigate the course of cosmic expansion in its recent past using the Constitution SN Ia sample, along with baryon acoustic oscillations (BAO) and cosmic microwave background (CMB) data. Allowing the equation of state of dark energy (DE) to vary, we find that a coasting model of the universe (q₀ = 0) fits the data about as well as Lambda cold dark matter. This effect, which is most clearly seen using the recently introduced Om diagnostic, corresponds to an increase of Om and q at redshifts z ≲ 0.3. This suggests that cosmic acceleration may have already peaked and that we are currently witnessing its slowing down. The case for evolving DE strengthens if a subsample of the Constitution set consisting of SNLS+ESSENCE+CfA SN Ia data is analyzed in combination with BAO+CMB data. The effect we observe could correspond to DE decaying into dark matter (or something else).
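
    The Om diagnostic invoked here is, in the authors' earlier work, Om(z) = [h²(z) - 1]/[(1 + z)³ - 1] with h(z) = H(z)/H₀; for flat ΛCDM it is constant and equal to Ωm, so any redshift dependence flags evolving dark energy. A minimal sketch of the diagnostic (the Ωm value is illustrative, not the paper's fit):

```python
import numpy as np

def om_diagnostic(z, h):
    """Om diagnostic: Om(z) = (h(z)^2 - 1) / ((1+z)^3 - 1), h(z) = H(z)/H0.
    Constant (equal to Omega_m) for flat LCDM; z-dependence signals evolving DE."""
    return (h**2 - 1.0) / ((1.0 + z)**3 - 1.0)

# Illustrative flat LCDM expansion history with an assumed Omega_m = 0.3.
z = np.linspace(0.05, 1.0, 20)
omega_m = 0.3
h_lcdm = np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))

print(om_diagnostic(z, h_lcdm))  # ~0.3 at all z; a rise at low z would mimic the paper's signal
```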

  2. Critical slowing down in a dynamic duopoly

    NASA Astrophysics Data System (ADS)

    Escobido, M. G. O.; Hatano, N.

    2015-01-01

    Anticipating critical transitions is very important in economic systems, as it can mean survival or demise of firms under stressful competition. As such, identifying indicators that can provide early warning of these transitions is crucial. In other complex systems, critical slowing down has been shown to anticipate critical transitions. In this paper, we investigate the applicability of the concept to the heterogeneous quantity competition between two firms. We develop a dynamic model in which the duopoly can adjust their production in a logistic process. We show that the resulting dynamics is formally equivalent to a competitive Lotka-Volterra system. We investigate the behavior of the dominant eigenvalues and identify conditions under which critical slowing down can provide early warning of the critical transitions in the dynamic duopoly.
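
    The eigenvalue indicator described above can be made concrete: linearize a two-species competitive Lotka-Volterra system at its coexistence equilibrium and watch the dominant Jacobian eigenvalue approach zero as cross-competition strengthens, so that the recovery time from small shocks diverges. A minimal sketch with illustrative parameters (not the paper's calibration):

```python
import numpy as np

def dominant_eigenvalue(r1, r2, K1, K2, a12, a21):
    """Dominant Jacobian eigenvalue of the competitive Lotka-Volterra system
        dx1/dt = r1*x1*(1 - (x1 + a12*x2)/K1)
        dx2/dt = r2*x2*(1 - (x2 + a21*x1)/K2)
    at the interior (coexistence) equilibrium; it approaches zero as the
    system nears the transition, i.e. critical slowing down."""
    det = 1.0 - a12 * a21
    x1 = (K1 - a12 * K2) / det      # coexistence equilibrium
    x2 = (K2 - a21 * K1) / det
    J = np.array([[-r1 * x1 / K1,       -r1 * a12 * x1 / K1],
                  [-r2 * a21 * x2 / K2, -r2 * x2 / K2]])
    return np.linalg.eigvals(J).real.max()

# Sweep the cross-competition coefficient toward the transition:
for a12 in [0.5, 0.7, 0.9, 0.99]:
    lam = dominant_eigenvalue(1.0, 1.0, 1.0, 1.0, a12, 0.9)
    print(f"a12 = {a12:.2f}: dominant eigenvalue = {lam:.4f}, recovery time ~ {-1.0/lam:.0f}")
```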

  3. Lead Slowing Down Spectrometer Status Report

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Bonebrake, Eric; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, Victor A.; Haight, R. C.; Imel, G. R.; Kulisek, Jonathan A.; O'Donnell, J. M.; Weltz, Adam

    2012-06-07

    This report documents the progress that has been completed in the first half of FY2012 in the MPACT-funded Lead Slowing Down Spectrometer project. Significant progress has been made on the algorithm development. We have an improved understanding of the experimental responses of the LSDS for fuel-related material. The calibration of the ultra-depleted uranium foils was completed, but the results are inconsistent from measurement to measurement. Future work includes developing a conceptual model of an LSDS system to assay plutonium in used fuel, improving agreement between simulations and measurement, designing a thorium fission chamber, and evaluating additional detector techniques.

  4. Lead Slowing Down Spectrometer Research Plans

    SciTech Connect

    Warren, Glen A.; Kulisek, Jonathan A.; Gavron, Victor; Danon, Yaron; Weltz, Adam; Harris, Jason; Stewart, T.

    2013-03-22

    The MPACT-funded Lead Slowing Down Spectrometry (LSDS) project has been evaluating the feasibility of using LSDS techniques to assay fissile isotopes in used nuclear fuel assemblies. The approach has the potential to provide considerable improvement in the assay of fissile isotopic masses in fuel assemblies, in a direct and independent manner, compared to other non-destructive techniques. The LSDS collaboration suggests that the next step in empirically testing the feasibility is to conduct measurements on fresh fuel assemblies, to investigate self-attenuation, and on fresh mixed-oxide (MOX) fuel rodlets, to better understand the extraction of masses of 235U and 239Pu. While progressing toward these goals, the collaboration also strongly suggests the continued development of enabling technology, such as detectors and algorithms, which could provide significant performance benefits.

  5. Critical Slowing Down Governs the Transition to Neuron Spiking

    PubMed Central

    Meisel, Christian; Klaus, Andreas; Kuehn, Christian; Plenz, Dietmar

    2015-01-01

    Many complex systems have been found to exhibit critical transitions, or so-called tipping points, which are sudden changes to a qualitatively different system state. These changes can profoundly impact the functioning of a system ranging from controlled state switching to a catastrophic break-down; signals that predict critical transitions are therefore highly desirable. To this end, research efforts have focused on utilizing qualitative changes in markers related to a system’s tendency to recover more slowly from a perturbation the closer it gets to the transition—a phenomenon called critical slowing down. The recently studied scaling of critical slowing down offers a refined path to understand critical transitions: to identify the transition mechanism and improve transition prediction using scaling laws. Here, we outline and apply this strategy for the first time in a real-world system by studying the transition to spiking in neurons of the mammalian cortex. The dynamical system approach has identified two robust mechanisms for the transition from subthreshold activity to spiking, saddle-node and Hopf bifurcation. Although theory provides precise predictions on signatures of critical slowing down near the bifurcation to spiking, quantitative experimental evidence has been lacking. Using whole-cell patch-clamp recordings from pyramidal neurons and fast-spiking interneurons, we show that 1) the transition to spiking dynamically corresponds to a critical transition exhibiting slowing down, 2) the scaling laws suggest a saddle-node bifurcation governing slowing down, and 3) these precise scaling laws can be used to predict the bifurcation point from a limited window of observation. To our knowledge this is the first report of scaling laws of critical slowing down in an experiment. They present a missing link for a broad class of neuroscience modeling and suggest improved estimation of tipping points by incorporating scaling laws of critical slowing down as a
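
    The saddle-node scaling tested above has a compact illustration in the normal form dv/dt = μ + v² with μ < 0 below threshold: the stable state is v* = -√(-μ), the linearized recovery rate is λ = 2√(-μ), and the recovery time therefore diverges as (-μ)^(-1/2) at the bifurcation. A minimal numeric check (normal form only, not the authors' patch-clamp analysis):

```python
import numpy as np

def recovery_rate(mu):
    """Saddle-node normal form dv/dt = mu + v**2 (mu < 0 below threshold).
    Stable fixed point v* = -sqrt(-mu); linearization gives a recovery
    rate lambda = |f'(v*)| = 2*sqrt(-mu)."""
    return 2.0 * np.sqrt(-mu)

# Recovery time 1/lambda diverges as (-mu)**(-1/2) on approach to mu = 0:
for mu in [-1e-1, -1e-2, -1e-3, -1e-4]:
    print(f"mu = {mu:+.0e}   recovery time = {1.0 / recovery_rate(mu):8.1f}")
```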

  6. Report on First Activations with the Lead Slowing Down Spectrometer

    SciTech Connect

    Warren, Glen A.; Mace, Emily K.; Pratt, Sharon L.; Stave, Sean; Woodring, Mitchell L.

    2011-03-03

    On Feb. 17 and 18, 2011, six items were irradiated with neutrons using the Lead Slowing Down Spectrometer. After irradiation, dose measurements and gamma-spectrometry measurements were completed on all of the samples. No contamination was found on the samples, and all but one provided no dose. Gamma-spectroscopy measurements qualitatively agreed with expectations based on the materials, with the exception of silver. We observed activation in the room in general, mostly due to 56Mn and 24Na. Most of the activation was short lived, with half-lives on the scale of hours, except for 198Au, which has a half-life of 2.7 d.

  7. Anomalous slowing down in the metastable liquid of hard spheres

    NASA Astrophysics Data System (ADS)

    Dzugutov, M.

    2002-03-01

    It is demonstrated that a straightforward extension of the Arrhenius law accurately describes diffusion in the thermodynamically stable liquid of hard spheres. A sharp negative deviation from this behavior is observed as the liquid is compressed beyond its stability limit. This dynamical anomaly can be compared with the nonlinear slowing down characteristic of the supercooled dynamics regime in liquids with continuous interaction. It is suggested that the observed dynamical transition is caused by long-time decomposition of the configuration space. This interpretation is corroborated by the observation of characteristic anomalies in the geometry of a particle trajectory in the metastable domain.

  8. The promise of slow down ageing may come from curcumin.

    PubMed

    Sikora, E; Bielak-Zmijewska, A; Mosieniak, G; Piwocka, K

    2010-01-01

    No genes exist that have been selected to promote aging. The evolutionary theory of aging tells us that there is a trade-off between body maintenance and investment in reproduction. It is commonly accepted that the ageing process is driven by the lifelong accumulation of molecular damage, mainly due to reactive oxygen species (ROS) produced by mitochondria as well as random errors in DNA replication. Although ageing itself is not a disease, numerous diseases are age-related, such as cancer, Alzheimer's disease, atherosclerosis, metabolic disorders and others, likely caused by low grade inflammation driven by oxygen stress and manifested by an increased level of pro-inflammatory cytokines such as IL-1, IL-6 and TNF-alpha, encoded by genes activated by the transcription factor NF-kappaB. It is believed that ageing is plastic and can be slowed down by caloric restriction as well as by some nutraceuticals. As the low grade inflammatory process is believed to contribute substantially to ageing, slowing ageing and postponing the onset of age-related diseases may be achieved by blocking the NF-kappaB-dependent inflammation. In this review we consider the possibility that the natural spice curcumin, a powerful antioxidant, anti-inflammatory agent and efficient inhibitor of NF-kappaB and of the mTOR signaling pathway (which overlaps that of NF-kappaB), may slow down ageing.

  9. Cosmic slowing down of acceleration for several dark energy parametrizations

    SciTech Connect

    Magaña, Juan; Cárdenas, Víctor H.; Motta, Verónica E-mail: victor.cardenas@uv.cl

    2014-10-01

    We further investigate the slowing down of the acceleration of the universe scenario for five parametrizations of the equation of state of dark energy, using four sets of Type Ia supernovae data. In a maximum probability analysis we also use the baryon acoustic oscillation and cosmic microwave background observations. We find that the low-redshift transition of the deceleration parameter appears, independently of the parametrization, when using supernovae data alone, except for the Union 2.1 sample. This feature disappears once we combine the Type Ia supernovae data with high-redshift data. We conclude that the rapid variation of the deceleration parameter is independent of the parametrization. We also find more evidence for a tension among the supernovae samples, as well as between the low- and high-redshift data.

  10. Report on Second Activations with the Lead Slowing Down Spectrometer

    SciTech Connect

    Stave, Sean C.; Mace, Emily K.; Pratt, Sharon L.; Warren, Glen A.

    2012-04-27

    Summary: On August 18 and 19, 2011, five items were irradiated with neutrons using the Lead Slowing Down Spectrometer (LSDS). After irradiation, dose measurements and gamma-spectrometry measurements were completed on all of the samples. No contamination was found on the samples, and all but one provided no dose. Gamma-spectroscopy measurements qualitatively agreed with expectations based on the materials. As during the first activation run, we observed activation in the room in general, mostly due to 56Mn and 24Na. Most of the activation of the samples was short lived, with half-lives on the scale of hours to days, except for 60Co, which has a half-life of 5.3 y.

  11. Lead Slowing-Down Spectrometer Research at Lansce

    NASA Astrophysics Data System (ADS)

    Haight, R. C.; Bredeweg, T. A.; Devlin, M.; Gavron, A.; Jandel, M.; O'Donnell, J. M.; Wender, S. A.; Bélier, G.; Granier, T.; Laurent, B.; Taieb, J.; Danon, Y.; Thompson, J. T.

    2013-03-01

    The lead slowing-down spectrometer (LSDS) at Los Alamos is a 20 ton cube of lead with numerous channels, one for the proton beam from the LANSCE accelerator and others for samples and detectors. A pulsed spallation neutron source at the center of the cube is produced by the 800 MeV proton beam incident on an air-cooled tungsten target. Neutrons from this source are quickly downscattered by various reactions until their energies are less than the first excited state of 207Pb (0.57 MeV). After that, the neutrons slow down by elastic scattering, in which they lose on average about 1% of their energy per collision. The mean energy of the neutron distribution then changes with time as ~1/(t + t₀)², where t₀ is a constant. The low neutron absorption cross section of lead and the multiple scattering of the neutrons lead to a very large neutron flux, approximately 1000 times that available in beams at the intense neutron source at the Lujan Center at LANSCE. Thus nuclear cross sections can be measured with very small samples, or, conversely, very small cross sections can be measured with somewhat larger samples. Present research with the LSDS at LANSCE includes measuring fission cross sections on short-lived isotopes such as 237U, developing techniques to measure (n,p) and (n, α) cross sections, testing new types of detectors for use in the extreme radiation environment, and, in an applied context, assessing the possibility of measuring the isotopic content of actinide samples with the eventual goal of characterizing fresh and used reactor fuel rods.
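
    Inverting the quoted slowing-down law assigns a mean neutron energy to each time bin, E(t) = K/(t + t₀)². A minimal sketch below; the calibration constants K and t₀ are illustrative stand-ins, since each spectrometer fits its own values:

```python
import numpy as np

def mean_energy_keV(t_us, K=165.0, t0=0.3):
    """Mean neutron energy in an LSDS as a function of slowing-down time:
    E(t) = K / (t + t0)^2, with t in microseconds.
    K (keV*us^2) and t0 (us) are illustrative calibration constants."""
    return K / (t_us + t0)**2

for t in [1.0, 10.0, 100.0]:
    print(f"t = {t:6.1f} us  ->  E ~ {mean_energy_keV(t):10.4f} keV")
```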

  12. Did growth of high Andes slow down Nazca plate subduction?

    NASA Astrophysics Data System (ADS)

    Quinteros, J.; Sobolev, S. V.

    2010-12-01

    The convergence velocity of the Nazca and South American plates and its variations during the last 100 My are quite well known from global plate reconstructions. The key observation is that the rate of Nazca plate subduction has decreased by about a factor of 2 during the last 20 Myr, and particularly since 10 Ma. During the same time the Central Andes have grown to their present 3-4 km height. Based on a thin-shell model coupled with mantle convection, it was suggested that the slowing down of the Nazca plate resulted from the additional load exerted by the Andes. However, the thin-shell model, which integrates stresses and velocities vertically and therefore has no vertical resolution, is not an optimal tool for modeling a subduction zone. More appropriate is a full thermomechanical formulation with self-consistent subduction. We performed a set of experiments to estimate the influence that an orogen like the Andes could have on an ongoing subduction. We used an enhanced 2D version of the SLIM-3D code suitable for simulating the evolution of a subducting slab in a self-consistent manner (gravity driven) at vertical cross-sections through the upper mantle, transition zone and shallower lower mantle. The model utilizes a non-linear temperature- and stress-dependent visco-elasto-plastic rheology and phase transitions at 410 and 660 km depth. We started from a reference case with a configuration similar to that of the Nazca and South American plates. After a few Myr of slow, kinematically imposed subduction to develop a coherent thermo-mechanical state, the subduction was fully dynamic. In the other cases, the crust was slowly thickened artificially during 10 My to generate the Andean topography. Although our first results show no substantial changes in the velocity pattern of the subduction, we consider this result preliminary. At the meeting we plan to report completed and verified modeling results and discuss other possible causes of the late Cenozoic slowing down of

  13. Slowing Down Downhill Folding: A Three-Probe Study

    PubMed Central

    Kim, Seung Joong; Matsumura, Yoshitaka; Dumont, Charles; Kihara, Hiroshi; Gruebele, Martin

    2009-01-01

    Abstract The mutant Tyr22Trp/Glu33Tyr/Gly46Ala/Gly48Ala of λ repressor fragment λ6−85 was previously assigned as an incipient downhill folder. We slow down its folding in a cryogenic water-ethylene-glycol solvent (−18 to −28°C). The refolding kinetics are probed by small-angle x-ray scattering, circular dichroism, and fluorescence to measure the radius of gyration, the average secondary structure content, and the native packing around the single tryptophan residue. The main resolved kinetic phase of the mutant is probe independent and faster than the main phase observed for the pseudo-wild-type. Excess helical structure formed early on by the mutant may reduce the formation of turns and prevent the formation of compact misfolded states, speeding up the overall folding process. Extrapolation of our main cryogenic folding phase and previous T-jump measurements to 37°C yields nearly the same refolding rate as extrapolated by Oas and co-workers from NMR line-shape data. Taken together, all the data consistently indicate a folding speed limit of ∼4.5 μs for this fast folder. PMID:19580767

  14. Ligands Slow Down Pure-Dephasing in Semiconductor Quantum Dots.

    PubMed

    Liu, Jin; Kilina, Svetlana V; Tretiak, Sergei; Prezhdo, Oleg V

    2015-09-22

    It is well known experimentally and theoretically that surface ligands provide additional pathways for energy relaxation in colloidal semiconductor quantum dots (QDs). They increase the rate of inelastic charge-phonon scattering and provide trap sites for the charges. We show that, surprisingly, ligands have the opposite effect on elastic electron-phonon scattering. Our simulations demonstrate that elastic scattering slows down in CdSe QDs passivated with ligands compared to that in bare QDs. As a result, the pure-dephasing time is increased, and the homogeneous luminescence line width is decreased in the presence of ligands. The lifetime of quantum superpositions of single and multiple excitons increases as well, providing favorable conditions for multiple exciton generation (MEG). Ligands reduce the pure-dephasing rates by decreasing phonon-induced fluctuations of the electronic energy levels. Surface atoms are the most mobile in QDs, and therefore they contribute greatly to the electronic energy fluctuations. This mobility is reduced by interaction with ligands. A simple analytical model suggests that the differences between the bare and passivated QDs persist for diameters up to 5 nm. Both low-frequency acoustic and high-frequency optical phonons participate in the dephasing processes in bare QDs, while low-frequency acoustic modes dominate in passivated QDs. The theoretical predictions regarding the pure-dephasing time, luminescence line width, and MEG can be verified experimentally by studying QDs with different surface passivation. PMID:26284384

  15. How to slow down light and where relativity theory fails

    NASA Astrophysics Data System (ADS)

    Zhang, Meggie

    2013-03-01

    This research found logical errors in mathematics and in physics. After the discovery of wave-particle duality, I made an assumption, reinterpreted quantum mechanics, and was able to find new information in existing publications, concluding that the photon is not a fundamental particle but has a structure. This work has been presented at several APS meetings and at EuNPC2012. During my research I also arrived at the exact same conclusion using Newton's theory of space-time, and then found that the assumptions made by relativity theory fail a logical test and violate basic mathematical logic. Minkowski space violates Newton's laws of motion, and the Lorentz 4-dimensional transformation is mathematically incomplete. After modifying existing physics theories, I designed an experiment to demonstrate where light can be slowed down or stopped for structural study. Such a method was also turned into a continuous room-temperature fusion method. However, the discoveries involve a large amount of complex logical analysis. Physicists are generally not philosophers; therefore, making the discovery fully understood by most physicists is very challenging. This work is supported by Dr. Kursh at Northeastern University.

  16. Hydrogen Bonding Slows Down Surface Diffusion of Molecular Glasses.

    PubMed

    Chen, Yinshan; Zhang, Wei; Yu, Lian

    2016-08-18

    Surface-grating decay has been measured for three organic glasses with extensive hydrogen bonding: sorbitol, maltitol, and maltose. For 1000 nm wavelength gratings, the decay occurs by viscous flow in the entire range of temperature studied, covering the viscosity range 10⁵-10¹¹ Pa s, whereas under the same conditions, the decay mechanism transitions from viscous flow to surface diffusion for organic glasses of similar molecular sizes but with no or limited hydrogen bonding. These results indicate that extensive hydrogen bonding slows down surface diffusion in organic glasses. This effect arises because molecules can preserve hydrogen bonding even near the surface so that the loss of nearest neighbors does not translate into a proportional decrease of the kinetic barrier for diffusion. This explanation is consistent with a strong correlation between liquid fragility and the surface enhancement of diffusion, both reporting resistance of a liquid to dynamic excitation. Slow surface diffusion is expected to hinder any processes that rely on surface transport, for example, surface crystal growth and formation of stable glasses by vapor deposition. PMID:27404465
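
    The flow-versus-diffusion assignment above rests on how the grating decay rate scales with wavevector q = 2π/λ: in Mullins-type analyses of surface smoothing, viscous flow gives a decay rate proportional to q while surface diffusion gives q⁴, so the log-log slope of rate versus q identifies the mechanism. A minimal sketch on synthetic rates (the prefactors are illustrative, not fits to these glasses):

```python
import numpy as np

# Synthetic decay-rate data K(q) for the two mechanisms (Mullins scaling):
# viscous flow: K ~ q^1 ; surface diffusion: K ~ q^4. Prefactors are arbitrary.
q = 2 * np.pi / np.array([1000e-9, 2000e-9, 4000e-9, 8000e-9])  # wavevectors (1/m)
K_flow = 1e-12 * q
K_surf = 1e-33 * q**4

for name, K in [("viscous flow", K_flow), ("surface diffusion", K_surf)]:
    slope = np.polyfit(np.log(q), np.log(K), 1)[0]
    print(f"{name}: log-log slope = {slope:.2f}")  # ~1 vs ~4 identifies the mechanism
```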

  17. Lead Slowing Down Spectrometer FY2013 Annual Report

    SciTech Connect

    Warren, Glen A.; Kulisek, Jonathan A.; Gavron, Victor A.; Danon, Yaron; Weltz, Adam; Harris, Jason; Stewart, T.

    2013-10-29

    Executive Summary: The Lead Slowing Down Spectrometry (LSDS) project, funded by the Materials Protection And Control Technology campaign, has been evaluating the feasibility of using LSDS techniques to assay fissile isotopes in used nuclear fuel assemblies. The approach has the potential to provide considerable improvement in the assay of fissile isotopic masses in fuel assemblies compared to other non-destructive techniques, in a direct and independent manner. This report is a high level summary of the progress completed in FY2013. This progress included:
    • Fabrication of a 4He scintillator detector to detect fast neutrons in the LSDS operating environment. Testing of the detector will be conducted in FY2014.
    • Design of a large-area 232Th fission chamber.
    • Analysis using the Los Alamos National Laboratory perturbation model, which estimated the required number of neutrons for an LSDS measurement to be 10¹⁶ source neutrons.
    • Application of the algorithms developed at Pacific Northwest National Laboratory to LSDS measurement data of various fissile samples taken in 2012. The results showed that 235U could be measured to 2.7% and 239Pu to 6.3%. Significant effort is still needed to demonstrate the applicability of these algorithms to used-fuel assemblies, but the results reported here are encouraging in demonstrating progress toward that goal.
    • Development and cost analysis of a research plan for the next critical demonstration measurements. The plan suggests measurements on fresh fuel sub-assemblies as a means to experimentally test self-attenuation, and the use of fresh mixed-oxide fuel as a means to test simultaneous measurement of 235U and 239Pu.

  18. Simplified treatment of exact resonance elastic scattering model in deterministic slowing down equation

    SciTech Connect

    Ono, M.; Wada, K.; Kitada, T.

    2012-07-01

    A simplified treatment of the resonance elastic scattering model, considering the thermal motion of heavy nuclides and the energy dependence of the resonance cross section, was implemented in NJOY [1]. In order to solve the deterministic slowing-down equation considering the effect of up-scattering without iterative calculations, the scattering kernel for heavy nuclides is pre-calculated using the formula derived by Ouisloumen and Sanchez [2], and the neutron spectrum in the up-scattering term is expressed by the NR approximation. To verify the simplified treatment, it is applied to U-238 for the energy range from 4 eV to 200 eV. The calculated multi-group capture cross section of U-238 is greater than that of the conventional method, and the increase of the capture cross sections becomes more pronounced as the temperature rises. Therefore, the Doppler coefficient calculated in a UO₂ fuel pin is more negative than that of the conventional method. The impact on the Doppler coefficient is equivalent to the results of the exact treatment of resonance elastic scattering reported in previous studies [2-7]. The agreement supports the validation of the simplified treatment, and this treatment is therefore applied to other heavy nuclides to evaluate the Doppler coefficient in MOX fuel. The result shows that the impact of considering thermal agitation in resonance scattering on the Doppler coefficient comes mainly from U-238, and that of other heavy nuclides such as Pu-239 and Pu-240 is not comparable in MOX fuel. (authors)
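
    In the narrow-resonance (NR) approximation used above, the slowing-down flux that weights the group collapse is taken as φ(E) ∝ 1/(E σ_t(E)), and a multigroup cross section is the flux-weighted average of the pointwise data. A minimal sketch of such a collapse on a toy single-level resonance (illustrative parameters, not evaluated U-238 data):

```python
import numpy as np

def collapse_group(E, sigma_capture, sigma_total):
    """Multigroup capture cross section with NR-approximation weighting:
    sigma_g = int(sigma_c(E) * phi(E) dE) / int(phi(E) dE),
    with phi(E) ~ 1 / (E * sigma_t(E))."""
    phi = 1.0 / (E * sigma_total)
    return np.trapz(sigma_capture * phi, E) / np.trapz(phi, E)

# Toy single-level resonance near 6.7 eV (illustrative shape and magnitudes).
E = np.linspace(4.0, 10.0, 5000)                   # eV
lorentz = 1.0 / (1.0 + ((E - 6.7) / 0.03)**2)      # resonance profile
sigma_c = 1.0 + 2.0e4 * lorentz                    # barns
sigma_t = 10.0 + 2.4e4 * lorentz                   # barns (potential + resonance)

print(f"NR-collapsed capture cross section: {collapse_group(E, sigma_c, sigma_t):.2f} b")
```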

  19. Critical slowing down and hyperuniformity on approach to jamming.

    PubMed

    Atkinson, Steven; Zhang, Ge; Hopkins, Adam B; Torquato, Salvatore

    2016-07-01

    Hyperuniformity characterizes a state of matter that is poised at a critical point at which density or volume-fraction fluctuations are anomalously suppressed at infinite wavelengths. Recently, much attention has been given to the link between strict jamming (mechanical rigidity) and (effective or exact) hyperuniformity in frictionless hard-particle packings. However, in doing so, one must necessarily study very large packings in order to access the long-ranged behavior and to ensure that the packings are truly jammed. We modify the rigorous linear programming method of Donev et al. [J. Comput. Phys. 197, 139 (2004), 10.1016/j.jcp.2003.11.022] in order to test for jamming in putatively collectively and strictly jammed packings of hard disks in two dimensions. We show that this rigorous jamming test is superior to standard ways to ascertain jamming, including the so-called "pressure-leak" test. We find that various standard packing protocols struggle to reliably create packings that are jammed for even modest system sizes of N ≈ 10³ bidisperse disks in two dimensions; importantly, these packings have a high reduced pressure that persists over extended amounts of time, meaning that they appear to be jammed by conventional tests, though rigorous jamming tests reveal that they are not. We present evidence that suggests that deviations from hyperuniformity in putative maximally random jammed (MRJ) packings can in part be explained by a shortcoming of the numerical protocols to generate exactly jammed configurations as a result of a type of "critical slowing down" as the packing's collective rearrangements in configuration space become locally confined by high-dimensional "bottlenecks" from which escape is a rare event. Additionally, various protocols are able to produce packings exhibiting hyperuniformity to different extents, but this is because certain protocols are better able to approach exactly jammed configurations. Nonetheless, while one should
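
    Hyperuniformity as used here is a statement about the structure factor: S(k) → 0 as k → 0, meaning long-wavelength density fluctuations vanish. A minimal sketch contrasting a Poisson point pattern (S ≈ 1) with a perturbed lattice, a standard hyperuniform example; box size, jitter amplitude and wavevector shells are illustrative:

```python
import numpy as np

def S_smallk(points, L, shells=((1, 0), (0, 1), (1, 1), (1, -1))):
    """Structure factor S(k) = |sum_j exp(-i k.r_j)|^2 / N averaged over the
    smallest allowed wavevectors of a periodic box of side L.
    Hyperuniform patterns have S(k) -> 0 as k -> 0."""
    N = len(points)
    vals = []
    for n in shells:
        kvec = 2.0 * np.pi * np.array(n) / L
        rho_k = np.sum(np.exp(-1j * points @ kvec))
        vals.append(abs(rho_k)**2 / N)
    return float(np.mean(vals))

rng = np.random.default_rng(3)
L, m = 64.0, 64                                    # box side, lattice cells per side
poisson = rng.random((m * m, 2)) * L               # ideal gas: S ~ 1 (noisy)
lattice = np.stack(np.meshgrid(np.arange(m), np.arange(m)), -1).reshape(-1, 2)
jittered = (lattice + rng.normal(0, 0.15, lattice.shape)) * (L / m)  # hyperuniform-like

print("Poisson:           S(k_min) ~", round(S_smallk(poisson, L), 4))
print("perturbed lattice: S(k_min) ~", round(S_smallk(jittered, L), 6))
```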

  20. Critical slowing down and hyperuniformity on approach to jamming

    NASA Astrophysics Data System (ADS)

    Atkinson, Steven; Zhang, Ge; Hopkins, Adam B.; Torquato, Salvatore

    2016-07-01

    Hyperuniformity characterizes a state of matter that is poised at a critical point at which density or volume-fraction fluctuations are anomalously suppressed at infinite wavelengths. Recently, much attention has been given to the link between strict jamming (mechanical rigidity) and (effective or exact) hyperuniformity in frictionless hard-particle packings. However, in doing so, one must necessarily study very large packings in order to access the long-ranged behavior and to ensure that the packings are truly jammed. We modify the rigorous linear programming method of Donev et al. [J. Comput. Phys. 197, 139 (2004), 10.1016/j.jcp.2003.11.022] in order to test for jamming in putatively collectively and strictly jammed packings of hard disks in two dimensions. We show that this rigorous jamming test is superior to standard ways to ascertain jamming, including the so-called "pressure-leak" test. We find that various standard packing protocols struggle to reliably create packings that are jammed for even modest system sizes of N ≈ 10³ bidisperse disks in two dimensions; importantly, these packings have a high reduced pressure that persists over extended amounts of time, meaning that they appear to be jammed by conventional tests, though rigorous jamming tests reveal that they are not. We present evidence that suggests that deviations from hyperuniformity in putative maximally random jammed (MRJ) packings can in part be explained by a shortcoming of the numerical protocols to generate exactly jammed configurations as a result of a type of "critical slowing down" as the packing's collective rearrangements in configuration space become locally confined by high-dimensional "bottlenecks" from which escape is a rare event. Additionally, various protocols are able to produce packings exhibiting hyperuniformity to different extents, but this is because certain protocols are better able to approach exactly jammed configurations. Nonetheless, while one should not generally

  1. How Accurately Can We Calculate Neutrons Slowing Down In Water ?

    SciTech Connect

    Cullen, D E; Blomquist, R; Greene, M; Lent, E; MacFarlane, R; McKinley, S; Plechaty, E; Sublet, J C

    2006-03-30

    We have compared the results produced by a variety of currently available Monte Carlo neutron transport codes for the relatively simple problem of a fast source of neutrons slowing down and thermalizing in water. Initial comparisons showed rather large differences in the calculated flux, up to 80%. By working together we iterated to improve the results by: (1) ensuring that all codes were using the same data, (2) improving the models used by the codes, and (3) correcting errors in the codes; no code is perfect. Even after a number of iterations we still found differences, demonstrating that our Monte Carlo and supporting codes are far from perfect; in particular, we found that the often overlooked nuclear data processing codes can be the weakest link in our systems of codes. The results presented here represent today's state of the art, in the sense that all of the Monte Carlo codes are modern, widely available and widely used. They all use the most up-to-date nuclear data, and the results are very recent, weeks or at most a few months old; these are the results that current users of these codes should expect to obtain from them. As such, the accuracy and limitations of the codes presented here should serve as guidelines to code users in interpreting their results for similar problems. We avoid crystal ball gazing, in the sense that we limit the scope of this report to what is available to code users today, and we avoid predicting future improvements that may or may not actually come to pass. One exception that we make is in presenting results for an improved thermal scattering model currently being tested using advanced versions of NJOY and MCNP that are not yet available to users, but are planned for release in the not too distant future. The other exception is to show comparisons between experimentally measured water cross sections and preliminary ENDF/B-VII thermal scattering law, S(α,β), data; although these data are strictly
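
    The benchmark rests on elementary slowing-down theory: per elastic collision a neutron gains on average ξ = 1 + α ln α/(1 - α) units of lethargy, with α = ((A - 1)/(A + 1))², so roughly ln(E₀/E_th)/ξ collisions carry a fission neutron to thermal energy. A quick check of why water thermalizes a 2 MeV neutron in a few tens of collisions (and why lead, at roughly 1% energy loss per collision, needs thousands):

```python
import numpy as np

def xi(A):
    """Mean lethargy gain per elastic collision on a nucleus of mass number A."""
    if A == 1:
        return 1.0                          # hydrogen limit (alpha -> 0)
    alpha = ((A - 1) / (A + 1))**2
    return 1.0 + alpha * np.log(alpha) / (1.0 - alpha)

E0, Eth = 2.0e6, 0.025                      # 2 MeV source -> 0.025 eV thermal
for A, name in [(1, "H"), (16, "O"), (207, "Pb")]:
    n = np.log(E0 / Eth) / xi(A)
    print(f"{name:2s}: xi = {xi(A):.3f}, ~{n:.0f} collisions from 2 MeV to thermal")
```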

  2. The Pedagogy of Slowing Down: Teaching Talmud in a Summer Kollel

    ERIC Educational Resources Information Center

    Kanarek, Jane

    2010-01-01

    This article explores a set of practices in the teaching of Talmud called "the pedagogy of slowing down." Through the author's analysis of her own teaching in an intensive Talmud class, "the pedagogy of slowing down" emerges as a pedagogical and cultural model in which the students learn to read more closely and to investigate the multiplicity of…

  3. Fast growth of infants of overweight mothers: can it be slowed down?

    PubMed

    Haschke, Ferdinand; Ziegler, Ekhard E; Grathwohl, Dominik

    2014-01-01

    Data from 3 recently completed studies were pooled and analyzed to answer the question whether breastfed infants of overweight/obese mothers show accelerated growth. It was shown that these infants gain weight faster than indicated by the WHO standards and that they grow significantly faster than infants of lean mothers. The question whether fast infant growth can be slowed down by lowering the protein content of formulas was examined. It was shown that formulas with a protein content that is just moderately above that of human milk support normal growth while significantly slowing down fast growth. PMID:25059802

  4. Fast growth of infants of overweight mothers: can it be slowed down?

    PubMed

    Haschke, Ferdinand; Ziegler, Ekhard E; Grathwohl, Dominik

    2014-01-01

    Data from 3 recently completed studies were pooled and analyzed to answer the question whether breastfed infants of overweight/obese mothers show accelerated growth. It was shown that these infants gain weight faster than indicated by the WHO standards and that they grow significantly faster than infants of lean mothers. The question whether fast infant growth can be slowed down by lowering the protein content of formulas was examined. It was shown that formulas with a protein content that is just moderately above that of human milk support normal growth while significantly slowing down fast growth.

  5. ACTIV: Sandwich Detector Activity from In-Pile Slowing-Down Spectra Experiment

    SciTech Connect

    2013-08-01

    ACTIV calculates the activities of a sandwich detector, to be used for in-pile measurements in slowing-down spectra below a few keV. The effect of scattering with energy degradation in the filter and in the detectors has been included to a first approximation.

  6. "Slow Down, You Move Too Fast:" Literature Circles as Reflective Practice

    ERIC Educational Resources Information Center

    Sanacore, Joseph

    2013-01-01

    Becoming an effective literacy learner requires a bit of slowing down and appreciating the reflective nature of reading and writing. Literature circles support this instructional direction because they provide opportunities for immersing students in discussions that encourage their personal responses. When students feel their personal responses…

  7. Critical slowing down and critical exponents in LD/PIN optically-bistable semiconductor lasers

    SciTech Connect

    Zhong Lichen; Guo Yili

    1988-04-01

    Critical slowing down for LD/PIN bistable optical semiconductor lasers and the critical exponent γ for this system have been experimentally investigated. The experimental value γ ≈ 0.53 is basically in agreement with the theoretically predicted value of 0.5.

  8. Critical slowing down as early warning for the onset of collapse in mutualistic communities

    PubMed Central

    Dakos, Vasilis; Bascompte, Jordi

    2014-01-01

    Tipping points are crossed when small changes in external conditions cause abrupt unexpected responses in the current state of a system. In the case of ecological communities under stress, the risk of approaching a tipping point is unknown, but its stakes are high. Here, we test recently developed critical slowing-down indicators as early-warning signals for detecting the proximity to a potential tipping point in structurally complex ecological communities. We use the structure of 79 empirical mutualistic networks to simulate a scenario of gradual environmental change that leads to an abrupt first extinction event followed by a sequence of species losses until the point of complete community collapse. We find that critical slowing-down indicators derived from time series of biomasses measured at the species and community level signal the proximity to the onset of community collapse. In particular, we identify specialist species as likely the best-indicator species for monitoring the proximity of a community to collapse. In addition, trends in slowing-down indicators are strongly correlated to the timing of species extinctions. This correlation offers a promising way for mapping species resilience and ranking species risk to extinction in a given community. Our findings pave the road for combining theory on tipping points with patterns of network structure that might prove useful for the management of a broad class of ecological networks under global environmental change. PMID:25422412
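
    The indicators used in studies of this kind reduce to simple window statistics: lag-1 autocorrelation and variance computed in sliding windows, with a Kendall τ statistic quantifying their trend toward the transition. A minimal sketch on synthetic AR(1) data with a drifting coefficient, standing in for the simulated species biomasses:

```python
import numpy as np
from scipy.stats import kendalltau

def sliding_indicators(x, win):
    """Lag-1 autocorrelation and variance in sliding windows."""
    ac1, var = [], []
    for i in range(len(x) - win):
        w = x[i:i + win]
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        var.append(np.var(w))
    return np.array(ac1), np.array(var)

# Synthetic series whose AR(1) coefficient drifts toward 1 (approaching a tipping point).
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.5, 0.99, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ac1, var = sliding_indicators(x, win=200)
for name, ind in [("AC(1)", ac1), ("variance", var)]:
    tau, _ = kendalltau(np.arange(len(ind)), ind)
    print(f"{name}: Kendall tau trend = {tau:.2f}")  # positive trend = early warning
```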

  9. Slowing down of North Pacific climate variability and its implications for abrupt ecosystem change.

    PubMed

    Boulton, Chris A; Lenton, Timothy M

    2015-09-15

    Marine ecosystems are sensitive to stochastic environmental variability, with higher-amplitude, lower-frequency--i.e., "redder"--variability posing a greater threat of triggering large ecosystem changes. Here we show that fluctuations in the Pacific Decadal Oscillation (PDO) index have slowed down markedly over the observational record (1900-present), as indicated by a robust increase in autocorrelation. This "reddening" of the spectrum of climate variability is also found in regionally averaged North Pacific sea surface temperatures (SSTs), and can be at least partly explained by observed deepening of the ocean mixed layer. The progressive reddening of North Pacific climate variability has important implications for marine ecosystems. Ecosystem variables that respond linearly to climate forcing will have become prone to much larger variations over the observational record, whereas ecosystem variables that respond nonlinearly to climate forcing will have become prone to more frequent "regime shifts." Thus, slowing down of North Pacific climate variability can help explain the large magnitude and potentially the quick succession of well-known abrupt changes in North Pacific ecosystems in 1977 and 1989. When looking ahead, despite model limitations in simulating mixed layer depth (MLD) in the North Pacific, global warming is robustly expected to decrease MLD. This could potentially reverse the observed trend of slowing down of North Pacific climate variability and its effects on marine ecosystems.

  10. LSP simulations of fast ions slowing down in cool magnetized plasma

    NASA Astrophysics Data System (ADS)

    Evans, Eugene S.; Cohen, Samuel A.

    2015-11-01

    In MFE devices, rapid transport of fusion products, e.g., tritons and alpha particles, from the plasma core into the scrape-off layer (SOL) could perform the dual roles of energy and ash removal. Through these two processes in the SOL, the fast particle slowing-down time will have a major effect on the energy balance of a fusion reactor and its neutron emissions, topics of great importance. In small field-reversed configuration (FRC) devices, the first-orbit trajectories of most fusion products will traverse the SOL, potentially allowing those particles to deposit their energy in the SOL and eventually be exhausted along the open field lines. However, the dynamics of the fast-ion energy loss processes under conditions expected in the FRC SOL, where the Debye length is greater than the electron gyroradius, are not fully understood. What modifications to the classical slowing down rate are necessary? Will instabilities accelerate the energy loss? We use LSP, a 3D PIC code, to examine the effects of SOL plasma parameters (density, temperature and background magnetic field strength) on the slowing down time of fast ions in a cool plasma with parameters similar to those expected in the SOL of small FRC reactors. This work was supported by DOE contract DE-AC02-09CH11466.

  11. Slowing down of North Pacific climate variability and its implications for abrupt ecosystem change

    PubMed Central

    Boulton, Chris A.; Lenton, Timothy M.

    2015-01-01

    Marine ecosystems are sensitive to stochastic environmental variability, with higher-amplitude, lower-frequency––i.e., “redder”––variability posing a greater threat of triggering large ecosystem changes. Here we show that fluctuations in the Pacific Decadal Oscillation (PDO) index have slowed down markedly over the observational record (1900–present), as indicated by a robust increase in autocorrelation. This “reddening” of the spectrum of climate variability is also found in regionally averaged North Pacific sea surface temperatures (SSTs), and can be at least partly explained by observed deepening of the ocean mixed layer. The progressive reddening of North Pacific climate variability has important implications for marine ecosystems. Ecosystem variables that respond linearly to climate forcing will have become prone to much larger variations over the observational record, whereas ecosystem variables that respond nonlinearly to climate forcing will have become prone to more frequent “regime shifts.” Thus, slowing down of North Pacific climate variability can help explain the large magnitude and potentially the quick succession of well-known abrupt changes in North Pacific ecosystems in 1977 and 1989. When looking ahead, despite model limitations in simulating mixed layer depth (MLD) in the North Pacific, global warming is robustly expected to decrease MLD. This could potentially reverse the observed trend of slowing down of North Pacific climate variability and its effects on marine ecosystems. PMID:26324900

  12. Measurements with the high flux lead slowing-down spectrometer at LANL

    NASA Astrophysics Data System (ADS)

    Danon, Y.; Romano, C.; Thompson, J.; Watson, T.; Haight, R. C.; Wender, S. A.; Vieira, D. J.; Bond, E.; Wilhelmy, J. B.; O'Donnell, J. M.; Michaudon, A.; Bredeweg, T. A.; Schurman, T.; Rochman, D.; Granier, T.; Ethvignot, T.; Taieb, J.; Becker, J. A.

    2007-08-01

    A Lead Slowing-Down Spectrometer (LSDS) was recently installed at LANL [D. Rochman, R.C. Haight, J.M. O'Donnell, A. Michaudon, S.A. Wender, D.J. Vieira, E.M. Bond, T.A. Bredeweg, A. Kronenberg, J.B. Wilhelmy, T. Ethvignot, T. Granier, M. Petit, Y. Danon, Characteristics of a lead slowing-down spectrometer coupled to the LANSCE accelerator, Nucl. Instr. and Meth. A 550 (2005) 397]. The LSDS comprises a cube of pure lead, 1.2 m on a side, with a spallation pulsed neutron source at its center. The LSDS is driven by 800 MeV protons with a time-averaged current of up to 1 μA, pulse widths of 0.05-0.25 μs and a repetition rate of 20-40 Hz. Spallation neutrons are created by directing the proton beam into an air-cooled tungsten target in the center of the lead cube. The neutrons slow down by scattering interactions with the lead and thus enable measurements of neutron-induced reaction rates as a function of the slowing-down time, which correlates with neutron energy. The advantage of an LSDS as a neutron spectrometer is that the neutron flux is 3-4 orders of magnitude higher than in a standard time-of-flight experiment at the equivalent flight path, 5.6 m. The effective energy range is 0.1 eV to 100 keV with a typical energy resolution of 30% from 1 eV to 10 keV. The average neutron flux between 1 and 10 keV is about 1.7 × 10⁹ n/cm²/s/μA. This high flux makes the LSDS an important tool for neutron-induced cross section measurements of ultra-small samples (nanograms) or of samples with very low cross sections. The LSDS at LANL was initially built in order to measure the fission cross section of the short-lived metastable isotope of U-235; however, it can also be used to measure (n, α) and (n, p) reactions. Fission cross section measurements were made with samples of 235U, 236U, 238U and 239Pu. The smallest sample measured was 10 ng of 239Pu. Measurement of the (n, α) cross section with 760 ng of Li-6 was also demonstrated. Possible future cross section measurements

  13. Lattice Cell Calculations, Slowing Down Theory and Computer Code Wims; Vver Type Reactors

    NASA Astrophysics Data System (ADS)

    Moen, J.; Brekke, A.; Hall, C.

    1991-01-01

    The following sections are included: * INTRODUCTION * WIMS AS A TOOL FOR REACTOR CORE CALCULATIONS * GENERAL STRUCTURE OF THE WIMS CODE * WIMS APPROACH TO THE SLOWING DOWN CALCULATIONS * MULTIGROUP MICROSCOPIC CROSS SECTIONS, RESONANCE TREATMENT * DETERMINATION OF MULTIGROUP SPECTRA * PHYSICAL MODELS IN MAIN TRANSPORT CALCULATIONS * BURNUP CALCULATIONS * APPLICATION OF WIMSD-4 TO VVER TYPE LATTICES * FINAL REMARKS * REFERENCES * APPENDIX A: DANCOFF FACTOR - STANDARD APPROACH * APPENDIX B: FORMULAS FOR DANCOFF AND BELL FACTORS CALCULATIONS APPLIED IN PREWIM * APPENDIX C: CALCULATION OF ONE GROUP PROBABILITIES Pij IN AN ANNULAR SYSTEM * APPENDIX D: SCHAEFER'S METHOD

  14. Early warning of climate tipping points from critical slowing down: comparing methods to improve robustness

    PubMed Central

    Lenton, T. M.; Livina, V. N.; Dakos, V.; Van Nes, E. H.; Scheffer, M.

    2012-01-01

    We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings. PMID:22291229
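
    The two detectors compared in this paper can be sketched compactly: the lag-1 autocorrelation of a windowed series, and a fluctuation exponent from detrended fluctuation analysis; both rise as variability reddens near a bifurcation. A minimal implementation on synthetic white versus AR(1)-correlated noise (window length, detrending and bandwidth choices are exactly the tuning knobs the paper examines):

```python
import numpy as np

def lag1_ac(x):
    """Lag-1 autocorrelation, the simplest critical-slowing-down indicator."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def dfa_exponent(x, scales=(16, 32, 64, 128)):
    """Detrended fluctuation analysis: slope of log F(s) vs log s, where F(s)
    is the rms of the linearly detrended cumulative profile in windows of length s."""
    y = np.cumsum(x - x.mean())
    F = []
    for s in scales:
        segs = [y[k * s:(k + 1) * s] for k in range(len(y) // s)]
        t = np.arange(s)
        F.append(np.mean([np.sqrt(np.mean((g - np.polyval(np.polyfit(t, g, 1), t))**2))
                          for g in segs]))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(1)
white = rng.normal(size=4096)
red = np.zeros(4096)                 # AR(1) with phi = 0.95: a "slowed down" signal
for t in range(1, 4096):
    red[t] = 0.95 * red[t - 1] + rng.normal()

for name, x in [("white", white), ("reddened", red)]:
    print(f"{name:8s}: AC(1) = {lag1_ac(x):5.2f}, DFA exponent = {dfa_exponent(x):4.2f}")
```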

  15. Slowing down of ring polymer diffusion caused by inter-ring threading.

    PubMed

    Lee, Eunsang; Kim, Soree; Jung, YounJoon

    2015-06-01

    Diffusion of long ring polymers in a melt is much slower than the reorganization of their internal structures. While direct evidence for entanglements has not been observed in long ring polymers, unlike in linear polymer melts, threading between the rings is suspected to be the main reason for the slowing down of ring polymer diffusion. It is, however, difficult to define the threading configuration between two rings because the rings have no chain ends. In this work, evidence for threading dynamics of ring polymers is presented by using molecular dynamics simulation and applying a novel analysis method. The simulation results are analyzed in terms of the statistics of persistence and exchange times that have proved useful in studying heterogeneous dynamics of glassy systems. It is found that the threading time of ring polymer melts increases more rapidly with the degree of polymerization than that of linear polymer melts. This indicates that threaded ring polymers cannot diffuse until an unthreading event occurs, which results in the slowing down of ring polymer diffusion.

  16. Synchronous slowing down in coupled logistic maps via random network topology

    PubMed Central

    Wang, Sheng-Jun; Du, Ru-Hai; Jin, Tao; Wu, Xing-Sen; Qu, Shi-Xian

    2016-01-01

    The speed and paths of synchronization play a key role in the function of a system, but have not received enough attention up to now. In this work, we study the synchronization process of coupled logistic maps, which reveals the common features of low-dimensional dissipative systems. A slowing down of the synchronization process is observed, which is a novel phenomenon. The result shows that there are two typical kinds of transient process before the system reaches complete synchronization, which is demonstrated by both the coupled multiple-period maps and the coupled multiple-band chaotic maps. When the coupling is weak, the evolution of the system is governed mainly by the local dynamics, i.e., the node states are attracted by the stable orbits or chaotic attractors of the single map and evolve toward the synchronized orbit in a less coherent way. When the coupling is strong, the node states evolve in a highly coherent way toward the stable orbit on the synchronized manifold, where the collective dynamics dominates the evolution. At intermediate coupling strengths, the interplay between the two paths is responsible for the slowing down. The existence of different synchronization paths is also proven by the finite-time Lyapunov exponent and its distribution. PMID:27021897
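
    A minimal sketch of this kind of setup, with diffusive coupling on an Erdős-Rényi network (map parameter, network size and coupling values are illustrative): each node applies the logistic map and is pulled toward the average of its neighbors' map outputs, and the synchronization time is read off as the first step at which the node spread falls below a tolerance:

```python
import numpy as np

def sync_time(eps, n=50, p=0.2, r=3.58, tmax=20000, tol=1e-8, seed=2):
    """Iterate x_i(t+1) = (1-eps)*f(x_i) + (eps/k_i) * sum_j A_ij f(x_j)
    on an Erdos-Renyi network; return the first step at which the spread
    max(x)-min(x) drops below tol, or tmax if it never does."""
    rng = np.random.default_rng(seed)
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                              # symmetric adjacency, no self-loops
    k = A.sum(axis=1)
    k[k == 0] = 1.0                          # guard against isolated nodes
    x = rng.random(n)
    f = lambda v: r * v * (1.0 - v)
    for t in range(tmax):
        if x.max() - x.min() < tol:
            return t
        fx = f(x)
        x = (1.0 - eps) * fx + eps * (A @ fx) / k
    return tmax

for eps in [0.3, 0.5, 0.7, 0.9]:
    print(f"eps = {eps:.1f}: synchronization time = {sync_time(eps)}")
```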

  17. Early warning of climate tipping points from critical slowing down: comparing methods to improve robustness.

    PubMed

    Lenton, T M; Livina, V N; Dakos, V; van Nes, E H; Scheffer, M

    2012-03-13

    We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings.

  18. Dynamic slowing-down in dense microemulsions near the percolation threshold

    NASA Astrophysics Data System (ADS)

    Chen, S. H.; Mallamace, F.; Rouch, J.; Tartaglia, P.

    1992-05-01

    We review a series of investigations of the static and dynamic properties of a three-component water-in-oil microemulsion system in which the molar ratio of water to surfactant is kept constant. This system behaves effectively like a two-component macromolecular fluid in which there are spherical, surfactant-coated water droplets of macroscopic dimensions dispersed in a continuum of oil. The properties investigated include electrical conductivity, dielectric relaxation, shear viscosity and viscoelastic relaxation, static neutron and light scattering and dynamic light scattering. We focus mainly on the phenomena of the dynamic slowing-down of the dielectric relaxation and the droplet density fluctuations as the system approaches the percolation threshold from below, both in temperature and in volume fraction. A theory of static and dynamic light scattering, formulated along the lines of scattering from a system of polydisperse fractal clusters, quantitatively accounts for the dynamic slowing-down phenomenon and the non-exponential decay of the time correlation function.

  19. Disentangling density and temperature effects in the viscous slowing down of glassforming liquids

    NASA Astrophysics Data System (ADS)

    Alba-Simionesco, Christiane

    2004-03-01

    We try to provide a consistent picture of the respective roles of density ρ and temperature T in the viscous slowing down of glassforming liquids and polymers. Building upon our previous work and recent studies also done by others, including an analysis of simulation and experimental data on fragile and intermediate liquids (from ortho-terphenyl to glycerol and binary Lennard-Jones) and several polymers (PMMA, PIB, PB, PVME), we conclude that while ρ plays a role at a quantitative level, its effect on the viscosity and the α-relaxation time can be simply described via a single parameter, an effective interaction energy which is characteristic of the high-T liquid regime, with the important consequence that it does not affect the "fragility" of the glassforming system. A zeroth-order description of the viscous slowing down of liquids and polymers as one approaches the glass transition should thus be formulated in terms of a temperature-driven super-Arrhenius activated behavior rather than a density-driven congestion or jamming phenomenon. This work has been done in collaboration with G. Tarjus, S. Mossa, A. Cailliaux, A. Alegria and our late colleague and friend Daniel Kivelson.

  20. Development for fissile assay in recycled fuel using lead slowing down spectrometer

    SciTech Connect

    Lee, Yong Deok; Je Park, C.; Kim, Ho-Dong; Song, Kee Chan

    2013-07-01

    A future nuclear energy system is under development to turn spent fuel produced by PWRs into fuel for an SFR (Sodium Fast Reactor) through the pyrochemical process. Knowledge of the isotopic fissile content of the new fuel is very important for fuel safety. A lead slowing-down spectrometer (LSDS) is under development to analyze the fissile material content (239Pu, 241Pu and 235U) of the fuel. The LSDS requires a neutron source; the neutrons are slowed down during their passage through a lead medium, finally enter the fuel and induce fission reactions, which are analysed to determine the isotopic content of the fuel. The issue is that the spent fuel emits intense gamma rays and neutrons by spontaneous fission. The threshold fission detector screens the prompt fast fission neutrons, and as a result the LSDS is not influenced by the high-level radiation background. The energy resolution of the LSDS is good in the range 0.1 eV to 1 keV. This is also the range in which the fission reaction is the most discriminating for the considered fissile isotopes. An electron accelerator with an adequate target has been chosen to produce neutrons through (e⁻,γ)(γ,n) reactions.

  1. Early warning of climate tipping points from critical slowing down: comparing methods to improve robustness.

    PubMed

    Lenton, T M; Livina, V N; Dakos, V; van Nes, E H; Scheffer, M

    2012-03-13

    We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings. PMID:22291229

  2. Critical slowing down associated with regime shifts in the US housing market

    NASA Astrophysics Data System (ADS)

    Tan, James Peng Lung; Cheong, Siew Ann

    2014-02-01

    Complex systems are described by a large number of variables with strong and nonlinear interactions. Such systems frequently undergo regime shifts. Combining insights from bifurcation theory in nonlinear dynamics and the theory of critical transitions in statistical physics, we know that critical slowing down and critical fluctuations occur close to such regime shifts. In this paper, we show how universal precursors expected from such critical transitions can be used to forecast regime shifts in the US housing market. In the housing permit, volume of homes sold, and percentage of homes sold for gain data, we detected strong early warning signals associated with a sequence of coupled regime shifts, starting from a Subprime Mortgage Loans transition in 2003-2004 and ending with the Subprime Crisis in 2007-2008. Weaker signals of critical slowing down were also detected in the US housing market data during the 1997-1998 Asian Financial Crisis and the 2000-2001 Technology Bubble Crisis. Backed by various macroeconomic data, we propose a scenario whereby hot money flowing back into the US during the Asian Financial Crisis fueled the Technology Bubble. When the Technology Bubble collapsed in 2000-2001, the hot money then flowed into the US housing market, triggering the Subprime Mortgage Loans transition in 2003-2004 and an ensuing sequence of transitions. We show how this sequence of coupled transitions unfolded in space and in time over the whole of the US.

  3. Slow down of a globally neutral relativistic e-e+ beam shearing the vacuum

    NASA Astrophysics Data System (ADS)

    Alves, E. P.; Grismayer, T.; Silveirinha, M. G.; Fonseca, R. A.; Silva, L. O.

    2016-01-01

    The microphysics of relativistic collisionless shear flows is investigated in a configuration consisting of a globally neutral, relativistic e-e+ beam streaming through a hollow plasma/dielectric channel. We show through multidimensional particle-in-cell simulations that this scenario excites the mushroom instability (MI), a transverse shear instability on the electron scale, when there is no overlap (no contact) between the e-e+ beam and the walls of the hollow plasma channel. The onset of the MI leads to the conversion of the beam’s kinetic energy into magnetic (and electric) field energy, effectively slowing down a globally neutral body in the absence of contact. The collisionless shear physics explored in this configuration may operate in astrophysical environments, particularly in highly relativistic and supersonic settings where macroscopic shear processes are stable.

  4. The study of dynamics heterogeneity and slow down of silica by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    San, L. T.; Hung, P. K.; Hue, H. V.

    2016-06-01

    We have numerically studied diffusion in silica liquids via the SiOx → SiOx±1 and OSiy → OSiy±1 reactions and coordination cells (CC). Five models at temperatures from 1000 to 3500 K have been constructed by molecular dynamics simulation. We reveal that the reactions do not happen randomly in space. In addition, the reactions correlate strongly with the mobility of the CC atoms. Further, we examine the clustering of atoms having unbroken bonds and restored bonds. The time evolution of these clusters with temperature is also considered. The simulation shows that both the slow-down and the dynamic heterogeneity (DH) are related not only to the percolation of restored-rigid clusters near the glass transition but also to their long lifetime.

  5. Disentangling density and temperature effects in the viscous slowing down of glassforming liquids.

    PubMed

    Tarjus, G; Kivelson, D; Mossa, S; Alba-Simionesco, C

    2004-04-01

    We present a consistent picture of the respective role of density (rho) and temperature (T) in the viscous slowing down of glassforming liquids and polymers. Specifically, based in part upon a new analysis of simulation and experimental data on liquid ortho-terphenyl, we conclude that a zeroth-order description of the approach to the glass transition (in the range of experimentally accessible pressures) should be formulated in terms of a temperature-driven super-Arrhenius activated behavior rather than a density-driven congestion or jamming phenomenon. The density plays a role at a quantitative level, but its effect on the viscosity and the alpha-relaxation time can be simply described via a single parameter, an effective interaction energy that is characteristic of the high-T liquid regime; as a result, rho does not affect the "fragility" of the glassforming system.
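
    The single-parameter role of density summarized above is often written as an activated, density-scaled form. A compact statement of that zeroth-order description follows; the notation (τ0, Φ) is assumed for illustration, not the paper's exact set of equations:

    ```latex
    % tau_alpha : alpha-relaxation time (or viscosity, up to a prefactor)
    % E_infty(rho) : effective interaction energy of the high-T liquid
    % Phi : a single species-dependent master function
    \tau_\alpha(\rho,T) \;=\; \tau_0
      \exp\!\left[\frac{E(\rho,T)}{T}\right],
    \qquad
    \frac{E(\rho,T)}{T} \;=\; \Phi\!\left(\frac{E_\infty(\rho)}{T}\right)
    ```

    Plotted against E∞(ρ)/T, isochores at different densities then collapse onto one master curve, which is how ρ can shift the relaxation map quantitatively without changing the fragility.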

  6. Structure and dynamics of water in crowded environments slows down peptide conformational changes

    SciTech Connect

    Lu, Cheng; Prada-Gracia, Diego; Rao, Francesco

    2014-07-28

    The concentration of macromolecules inside the cell is high with respect to conventional in vitro experiments or simulations. In an effort to characterize the effects of crowding on the thermodynamics and kinetics of disordered peptides, molecular dynamics simulations were run at different concentrations by varying the number of identical weakly interacting peptides inside the simulation box. We found that the presence of crowding does not much influence the overall thermodynamics. On the other hand, peptide conformational dynamics was found to be strongly affected, resulting in a dramatic slowing down at larger concentrations. The observation of long-lived water bridges between peptides at higher concentrations points to a nontrivial role of the solvent in the altered peptide kinetics. Our results reinforce the idea of an active role of water in molecular crowding, an effect that is expected to be relevant for problems involving large solvent-exposed areas, as in intrinsically disordered proteins.

  7. Critical slowing down, phase relations, and dissipation in driven oscillatory systems

    SciTech Connect

    Tsarouhas, G.E.; Ross, J.

    1989-04-06

    Three dynamical properties of forced nonlinear systems are discussed with approximate analytic solutions obtained from the dynamic equations for oscillatory systems, near a supercritical Hopf bifurcation, driven by periodic perturbations of small amplitude. With these solutions we first obtain the phase difference between the response of the system and the periodic perturbation and its dependence on the parameters, and hence the mechanism, of the system. Second, we derive expressions for critical slowing down near edges of entrainment bands, with consideration of possible variation of both the radius and phase of the perturbed limit cycle with the amplitude of perturbation. Third, we show by analysis the previously numerically calculated variation of the dissipation within entrainment bands, which depends on the square of the amplitude of the response of the perturbed system.

  8. Microdosimetry of the full slowing down of protons using Monte Carlo track structure simulations.

    PubMed

    Liamsuwan, T; Uehara, S; Nikjoo, H

    2015-09-01

    The article investigates two approaches in microdosimetric calculations based on Monte Carlo track structure (MCTS) simulations of a 160-MeV proton beam. In the first approach, microdosimetric parameters of the proton beam were obtained using the weighted sum of proton energy distributions and microdosimetric parameters of proton track segments (TSMs). In the second approach, phase spaces of energy depositions obtained using MCTS simulations in the full slowing down (FSD) mode were used for the microdosimetric calculations. Targets of interest were water cylinders of 2.3-100 nm in diameter and height. Frequency-averaged lineal energies (ȳF) obtained using both approaches agreed within the statistical uncertainties. Discrepancies beyond this level were observed for dose-averaged lineal energies (ȳD) towards the Bragg peak region, due to the small number of proton energies used in the TSM approach and different energy deposition patterns in the TSM and FSD of protons.
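
    For reference, the lineal energy of a single event is y = ε/l̄, where ε is the energy imparted and l̄ is the mean chord length of the site (l̄ = 4V/S for a convex site, which is 2d/3 for a cylinder whose height equals its diameter d). A minimal sketch of the two averages, with made-up event energies standing in for MCTS phase-space output:

    ```python
    # Frequency- and dose-averaged lineal energy from per-event energy
    # depositions. The event list below is made up for illustration; a real
    # calculation would read events from MCTS phase-space output.
    import numpy as np

    def lineal_energy_averages(eps_keV, mean_chord_um):
        """Return (y_F, y_D) in keV/um from energies imparted per event."""
        y = np.asarray(eps_keV, float) / mean_chord_um  # lineal energy per event
        y_F = y.mean()                   # frequency average: sum(y) / N
        y_D = (y ** 2).sum() / y.sum()   # dose average: sum(y^2) / sum(y)
        return y_F, y_D

    d_um = 0.1                           # 100 nm site, height = diameter (assumed)
    events_keV = [1.2, 0.4, 3.1, 0.9, 2.2]  # made-up energies imparted per event
    print(lineal_energy_averages(events_keV, 2.0 * d_um / 3.0))
    ```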

  9. Lead Slowing Down Spectrometry Analysis of Data from Measurements on Nuclear Fuel

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Kulisek, Jonathan A.; Danon, Yaron; Weltz, Adam; Gavron, Victor A.; Harris, Jason; Stewart, Trevor N.

    2015-01-12

    Improved non-destructive assay of isotopic masses in used nuclear fuel would be valuable for nuclear safeguards operations associated with the transport, storage and reprocessing of used nuclear fuel. Our collaboration is examining the feasibility of using lead slowing down spectrometry techniques to assay the isotopic fissile masses in used nuclear fuel assemblies. We present the application of our analysis algorithms to measurements conducted with a lead spectrometer. The measurements involved a single fresh fuel pin and discrete 239Pu and 235U samples. Over seven different configurations, we are able to determine the isotopic fissile masses with root-mean-square errors of 6.35% for 239Pu and 2.7% for 235U.

  10. Analysis of spent fuel assay with a lead slowing down spectrometer

    SciTech Connect

    Gavron, Victor I; Smith, L. Eric; Ressler, Jennifer J

    2010-10-29

    Assay of fissile materials in spent fuel that are produced or depleted during the operation of a reactor is of paramount importance to nuclear materials accounting, verification of the reactor operation history, as well as criticality considerations for storage. In order to prevent future proliferation following the spread of nuclear energy, we must develop accurate methods to assay large quantities of nuclear fuels. We analyze the potential of using a Lead Slowing Down Spectrometer for assaying spent fuel. We conclude that it is possible to design a system that will provide around 1% statistical precision in the determination of the {sup 239}Pu, {sup 241}Pu and {sup 235}U concentrations in a PWR spent-fuel assembly, for intermediate-to-high burnup levels, using commercial neutron sources and a system of {sup 238}U threshold fission detectors. Pending further analysis of systematic errors, it is possible that missing pins can be detected, as can asymmetry in the fuel bundle.

  11. Analysis of spent fuel assay with a lead slowing down spectrometer

    SciTech Connect

    Gavron, Victor I; Smith, L Eric; Ressler, Jennifer J

    2008-01-01

    Assay of fissile materials in spent fuel that are produced or depleted during the operation of a reactor is of paramount importance to nuclear materials accounting, verification of the reactor operation history, as well as criticality considerations for storage. In order to prevent future proliferation following the spread of nuclear energy, we must develop accurate methods to assay large quantities of nuclear fuels. We analyze the potential of using a Lead Slowing Down Spectrometer for assaying spent fuel. We conclude that it is possible to design a system that will provide around 1% statistical precision in the determination of the {sup 239}Pu, {sup 241}Pu and {sup 235}U concentrations in a PWR spent-fuel assembly, for intermediate-to-high burnup levels, using commercial neutron sources and a system of {sup 238}U threshold fission detectors. Pending further analysis of systematic errors, it is possible that missing pins can be detected, as can asymmetry in the fuel bundle.

  12. Expertise makes the world slow down: judgements of duration are influenced by domain knowledge.

    PubMed

    Rhodes, Matthew G; McCabe, David P

    2009-12-01

    Experts often appear to perceive time differently from novices. The current study thus examined perceptions of time as a function of domain expertise. Specifically, individuals with high or low levels of knowledge of American football made judgements of duration for briefly presented words that were unrelated to football (e.g., rooster), football specific (e.g., touchdown), or ambiguous (e.g., huddle). Results showed that high-knowledge individuals judged football-specific words as having been presented for a longer duration than unrelated or ambiguous words. In contrast, low-knowledge participants exhibited no systematic differences in judgements of duration based on the type of word presented. These findings are discussed within a fluency attribution framework, which suggests that experts' fluent perception of domain-relevant stimuli leads to the subjective impression that time slows down in one's domain of expertise. PMID:19691007

  13. Equilibrium and stability in a heliotron with anisotropic hot particle slowing-down distribution

    SciTech Connect

    Cooper, W. A.; Asahi, Y.; Narushima, Y.; Suzuki, Y.; Watanabe, K. Y.; Graves, J. P.; Isaev, M. Yu.

    2012-10-15

    The equilibrium and linear fluid Magnetohydrodynamic (MHD) stability in an inward-shifted large helical device heliotron configuration are investigated with the 3D ANIMEC and TERPSICHORE codes, respectively. A modified slowing-down distribution function is invoked to study anisotropic pressure conditions. An appropriate choice of coefficients and exponents allows the simulation of neutral beam injection in which the angle of injection is varied from parallel to perpendicular. The fluid stability analysis concentrates on the application of the Johnson-Kulsrud-Weimer energy principle. The growth rates are maximum at <{beta}>{approx}2%, decrease significantly at <{beta}>{approx}4.5%, do not vary significantly with variations of the injection angle and are similar to those predicted with a bi-Maxwellian hot particle distribution function model. Stability is predicted at <{beta}>{approx}2.5% with a sufficiently peaked energetic particle pressure profile. Electrostatic potential forms from the MHD instability necessary for guiding centre orbit following are calculated.
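
    The modified slowing-down distribution invoked above builds on the classical isotropic form for fast particles injected at speed v0. For reference, the textbook version (not the anisotropic modification actually used in the paper) is:

    ```latex
    % S : fast-particle source rate, tau_s : Spitzer slowing-down time,
    % v_c : critical speed where electron and ion drag are comparable.
    f(v) \;=\; \frac{S\,\tau_s}{4\pi\left(v^{3}+v_c^{3}\right)},
    \qquad 0 \le v \le v_0,
    \qquad
    n_h \;=\; \int_0^{v_0} 4\pi v^{2} f(v)\,dv
          \;=\; \frac{S\,\tau_s}{3}\,
          \ln\!\left(1+\frac{v_0^{3}}{v_c^{3}}\right)
    ```

    The anisotropy studied in the paper enters through additional pitch-angle factors, whose coefficients and exponents set the injection geometry from parallel to perpendicular.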

  14. Slowing down DNA translocation through a nanopore by lowering fluid temperature.

    PubMed

    Yeh, Li-Hsien; Zhang, Mingkan; Joo, Sang W; Qian, Shizhi

    2012-12-01

    In the next-generation nanopore-based DNA sequencing technique, the DNA nanoparticles are electrophoretically driven through a nanopore by an external electric field, and the ionic current through the nanopore is simultaneously altered and recorded during the DNA translocation process. The change in the ionic current as the DNA molecule passes through the nanopore represents a direct reading of the DNA sequence. Due to the large mismatch of the cross-sectional areas of the nanopore and the microfluidic reservoirs, the electric field inside the nanopore is significantly higher than that in the fluid reservoirs. This results in high-speed DNA translocation through the nanopore and consequently low read-out accuracy for the DNA sequences. Slowing down DNA translocation through the nanopore is thus one of the challenges in the nanopore-based DNA sequencing technique. Slowing down DNA translocation by lowering the fluid temperature is theoretically investigated for the first time using a continuum model, composed of the coupled Poisson-Nernst-Planck equations for the ionic mass transport and the Navier-Stokes equations for the hydrodynamic field. The results qualitatively agree with existing experimental results. Lowering the fluid temperature from 25 to 0°C reduces the translocation speed by about 6.21 and 2.50 mm/s (i.e. 49.82% and 49.71%) for salt concentrations of 200 and 2000 mM, respectively, improving the read-out accuracy considerably. As the fluid temperature decreases, the magnitude of the ionic current signal decreases (increases) when the salt concentration is high (sufficiently low).

  15. Hydrogen sulfide slows down progression of experimental Alzheimer's disease by targeting multiple pathophysiological mechanisms.

    PubMed

    Giuliani, Daniela; Ottani, Alessandra; Zaffe, Davide; Galantucci, Maria; Strinati, Flavio; Lodi, Renzo; Guarini, Salvatore

    2013-09-01

    It has been previously reported that brain hydrogen sulfide (H2S) synthesis is severely decreased in Alzheimer's disease (AD) patients, and plasma H2S levels are negatively correlated with the severity of AD. Here we extensively investigated whether treatment with an H2S donor and spa-waters rich in H2S induces neuroprotection and slows down progression of AD. Studies with sodium hydrosulfide (an H2S donor) and Tabiano's spa-water were carried out in three experimental models of AD. Short-term and long-term treatments with sodium hydrosulfide and/or Tabiano's spa-water significantly protected against impairment in learning and memory in rat models of AD induced by brain injection of β-amyloid1-40 (Aβ) or streptozotocin, and in an AD mouse model harboring human transgenes APPSwe, PS1M146V and tauP301L (3xTg-AD mice). The improvement in behavioral performance was associated with reduced size of Aβ plaques in the hippocampus and preservation of the morphological picture, as found in AD rats. Further, lowered concentration/phosphorylation levels of proteins thought to be the central events in AD pathophysiology, namely amyloid precursor protein, presenilin-1, Aβ1-42 and tau phosphorylated at Thr181, Ser396 and Ser202, were detected in 3xTg-AD mice treated with spa-water. The excitotoxicity-triggered oxidative and nitrosative stress was counteracted in 3xTg-AD mice, as indicated by the decreased levels of malondialdehyde and nitrites in the cerebral cortex. Reduced hippocampal activity of c-Jun N-terminal kinases, extracellular signal-regulated kinases and p38, which have an established role not only in the phosphorylation of tau protein but also in inflammation and apoptosis, was also found. Consistently, a decrease in tumor necrosis factor-α level, up-regulation of Bcl-2, and down-regulation of BAX and the downstream executioner caspase-3 also occurred in the hippocampus of 3xTg-AD mice after treatment with Tabiano's spa-water, thus suggesting that it is also able to modulate

  16. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY11 Status Report

    SciTech Connect

    Warren, Glen A.; Casella, Andrew M.; Haight, R. C.; Anderson, Kevin K.; Danon, Yaron; Hatchett, D.; Becker, Bjorn; Devlin, M.; Imel, G. R.; Beller, D.; Gavron, A.; Kulisek, Jonathan A.; Bowyer, Sonya M.; Gesh, Christopher J.; O'Donnell, J. M.

    2011-08-01

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory assay methods. This document is a progress report for FY2011 collaboration activities. Progress made by the collaboration in FY2011 continues to indicate the promise of LSDS techniques applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model demonstrated the potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space. Similar results were obtained using a perturbation approach developed by LANL. Benchmark measurements have been successfully conducted at LANL and at RPI using their respective LSDS instruments. The ISU and UNLV collaborative effort is focused on the fabrication and testing of prototype fission chambers lined with ultra-depleted 238U and 232Th; uranium deposition on a stainless steel disc using spiked U3O8 from a room-temperature ionic liquid was successful, with improved thickness obtained. In FY2012, the collaboration plans a broad array of activities. PNNL will focus on optimizing its empirical model and minimizing its reliance on calibration data, as well as continuing efforts to develop an analytical model. Additional measurements are

  17. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells.

    PubMed

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-01-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3-4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting. PMID:27189321

  18. Non-destructive Assay Measurements Using the RPI Lead Slowing Down Spectrometer

    SciTech Connect

    Becker, Bjorn; Weltz, Adam; Kulisek, Jonathan A.; Thompson, J. T.; Thompson, N.; Danon, Yaron

    2013-10-01

    The use of a Lead Slowing-Down Spectrometer (LSDS) is considered a possible option for non-destructive assay of fissile material in used nuclear fuel. The primary objective is to quantify the 239Pu and 235U fissile content via a direct measurement, distinguishing them through their characteristic fission spectra in the LSDS. In this paper, we present several assay measurements performed at the Rensselaer Polytechnic Institute (RPI) to demonstrate the feasibility of such a method and to provide benchmark experiments for Monte Carlo calculations of the assay system. A fresh UOX fuel rod from the RPI Criticality Research Facility, a 239PuBe source and several highly enriched 235U discs were assayed in the LSDS. The characteristic fission spectra were measured with 238U and 232Th threshold fission chambers, which are only sensitive to fission neutrons with energy above the threshold. Despite the constant neutron and gamma background from the PuBe source and the intense interrogation neutron flux, the LSDS system was able to measure the characteristic 235U and 239Pu responses. All measurements were compared to Monte Carlo simulations. It was shown that the available simulation tools and models are well suited to simulate the assay, and that it is possible to calculate the absolute count rate in all investigated cases.

  19. Fenpropimorph slows down the sterol pathway and the development of the arbuscular mycorrhizal fungus Glomus intraradices.

    PubMed

    Campagnac, E; Fontaine, J; Lounès-Hadj Sahraoui, A; Laruelle, F; Durand, R; Grandmougin-Ferjani, A

    2009-08-01

    The direct impact of fenpropimorph on the sterol biosynthesis pathway of Glomus intraradices when extraradical mycelia alone are in contact with the fungicide was investigated using monoxenic cultures. Bi-compartmental Petri plates allowed culture of mycorrhizal chicory roots in a compartment without fenpropimorph and exposure of extraradical hyphae to increasing concentrations of fenpropimorph (0, 0.02, 0.2, 2, 20 mg l−1). In the fungal compartment, sporulation, hyphal growth, and fungal biomass were already reduced at the lowest fungicide concentration. A decrease in total sterols, in addition to an increase in the amount of squalene and no accumulation of abnormal sterols, suggests that the sterol pathway is severely slowed down or that squalene epoxidase was inhibited by fenpropimorph in G. intraradices. In the root compartment, neither extraradical and intraradical development of the arbuscular mycorrhizal (AM) fungus nor root growth was affected when they were not in direct contact with the fungicide; only hyphal length was significantly affected at 2 mg l−1 of fenpropimorph. Our results clearly demonstrate a direct impact of fenpropimorph on the AM fungus by a perturbation of its sterol metabolism.

  1. Spines slow down dendritic chloride diffusion and affect short-term ionic plasticity of GABAergic inhibition

    NASA Astrophysics Data System (ADS)

    Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U. Valentin; Santamaria, Fidel; Jedlicka, Peter

    2016-03-01

    Cl− plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl− is not well understood. The role of spines in Cl− diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl− changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl− dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl− diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl− extrusion altered Cl− diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl− diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl− diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed.

  2. Spines slow down dendritic chloride diffusion and affect short-term ionic plasticity of GABAergic inhibition

    PubMed Central

    Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U. Valentin; Santamaria, Fidel; Jedlicka, Peter

    2016-01-01

    Cl− plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl− is not well understood. The role of spines in Cl− diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl− changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl− dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl− diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl− extrusion altered Cl− diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl− diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl− diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed. PMID:26987404

  3. Traffic and Environmental Cues and Slow-Down Behaviors in Virtual Driving.

    PubMed

    Hsu, Chun-Chia; Chuang, Kai-Hsiang

    2016-02-01

    This study used a driving simulator to investigate whether the presence of pedestrians and traffic engineering designs reported to reduce overall traffic speed at intersections can facilitate drivers adopting lower impact speeds at pedestrian crossings. Twenty-eight men (M age = 39.9 yr., SD = 11.5) with drivers' licenses participated. Nine studied measures were obtained from the speed profiles of each participant. A 14-km virtual road was presented to the participants. It included experimental scenarios of a base intersection, pedestrian presence, a pedestrian warning sign at the intersection and in advance of the intersection, and perceptual lane narrowing by hatching lines. Compared to the base intersection, the presence of pedestrians caused drivers to slow down earlier and reach a lower minimum speed before the pedestrian crossing. This speed behavior was not completely evident when adding a pedestrian warning sign at an intersection or perceptual lane narrowing up to the stop line. Additionally, installing pedestrian warning signs in advance of the intersections rather than at the intersections was associated with higher impact speeds at pedestrian crossings. PMID:27420310

  4. Slowing down single-molecule trafficking through a protein nanopore reveals intermediates for peptide translocation

    NASA Astrophysics Data System (ADS)

    Mereuta, Loredana; Roy, Mahua; Asandei, Alina; Lee, Jong Kook; Park, Yoonkyung; Andricioaei, Ioan; Luchian, Tudor

    2014-01-01

    The microscopic details of how peptides translocate one at a time through nanopores are crucial determinants for transport through membrane pores and important in developing nanotechnologies. To date, the translocation process has been too fast relative to the resolution of the single-molecule techniques that sought to detect its milestones. Using pH-tuned single-molecule electrophysiology and molecular dynamics simulations, we demonstrate how peptide passage through the α-hemolysin protein can be sufficiently slowed down to observe intermediate single-peptide sub-states associated with distinct structural milestones along the pore, and how to control the residence time, direction and the sequence of spatio-temporal state-to-state dynamics of a single peptide. Molecular dynamics simulations of peptide translocation reveal the time-dependent ordering of intermediate structures of the translocating peptide inside the pore at atomic resolution. Calculations of the expected current ratios of the different pore-blocking microstates and their time sequencing are in accord with the recorded current traces.

  5. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells

    PubMed Central

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-01-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3–4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting. PMID:27189321

  6. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells

    NASA Astrophysics Data System (ADS)

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-05-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3-4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting.

  7. Experimental observation of critical slowing down as an early warning of population collapse

    NASA Astrophysics Data System (ADS)

    Vorselen, Daan; Dai, Lei; Korolev, Kirill; Gore, Jeff

    2012-02-01

    Near tipping points marking population collapse or other critical transitions in complex systems, small changes in conditions can result in drastic shifts in the system state. In theoretical models it is known that early warning signals can be used to predict the approach of these tipping points (bifurcations), but little is known about how these signals can be detected in practice. Here we use the budding yeast Saccharomyces cerevisiae to study these early warning signals in controlled experimental populations. We grow yeast in the sugar sucrose, where cooperative feeding dynamics causes a fold bifurcation: falling below a critical population size results in sudden collapse. We demonstrate the experimental observation of an increase in both the size and timescale of the fluctuations of population density near this fold bifurcation. Furthermore, we test the utility of theoretically predicted warning signals by observing them in two different slowly deteriorating environments. These findings suggest that these generic indicators of critical slowing down can be useful in predicting catastrophic changes in population biology.

  8. Slowing Down the Presentation of Facial and Body Movements Enhances Imitation Performance in Children with Severe Autism

    ERIC Educational Resources Information Center

    Laine, France; Rauzy, Stephane; Tardif, Carole; Gepner, Bruno

    2011-01-01

    Imitation deficits observed among individuals with autism could be partly explained by the excessive speed of biological movements to be perceived and then reproduced. Along with this assumption, slowing down the speed of presentation of these movements might improve their imitative performances. To test this hypothesis, 19 children with autism,…

  9. Do calcium buffers always slow down the propagation of calcium waves?

    PubMed

    Tsai, Je-Chiang

    2013-12-01

    Calcium buffers are large proteins that act as binding sites for free cytosolic calcium. Since a large fraction of cytosolic calcium is bound to calcium buffers, calcium waves are widely observed under the condition that free cytosolic calcium is heavily buffered. In addition, all physiological buffered excitable systems contain multiple buffers with different affinities. It is thus important to understand the properties of waves in excitable systems with the inclusion of buffers. There is an ongoing controversy about whether or not the addition of calcium buffers into the system always slows down the propagation of calcium waves. To resolve this controversy, we incorporate the buffering effect into a generic excitable system, the FitzHugh-Nagumo model, obtaining the buffered FitzHugh-Nagumo model, and then study the effect of an added buffer with large diffusivity on traveling waves of this model in one spatial dimension. We find a critical dissociation constant K = K(a), determined by the system excitability parameter a, such that calcium buffers can be classified into two types: weak buffers (K ∈ (K(a), ∞)) and strong buffers (K ∈ (0, K(a))). We show analytically that adding weak buffers, or strong buffers with total concentration b0(1) below some critical total concentration b0,c(1), can generate a traveling wave of the resulting system which propagates faster than that of the original system, provided that the diffusivity D1 of the added buffer is sufficiently large. Further, the magnitude of the wave speed of traveling waves of the resulting system is proportional to √D1 as D1 → ∞. In contrast, adding strong buffers with total concentration b0(1) > b0,c(1) may not be able to support the formation of a biologically acceptable wave when the diffusivity D1 of the added buffer is sufficiently large.
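
    A numerical sketch for experimenting with this result: a bistable (Nagumo-type) reaction-diffusion equation coupled to a single diffusing buffer. The kinetics and all parameter values are illustrative, and whether the buffered front speeds up or slows down depends on where K = k_off/k_on sits relative to the critical K(a) described above:

    ```python
    # Front speed in a buffered bistable reaction-diffusion medium.
    # u: activator with cubic (Nagumo) kinetics; w: buffer-bound concentration
    # with its own diffusivity D1. All parameter values are illustrative.
    import numpy as np

    def laplacian(f, dx):
        g = np.empty_like(f)
        g[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
        g[0] = 2.0 * (f[1] - f[0])          # no-flux boundaries
        g[-1] = 2.0 * (f[-2] - f[-1])
        return g / dx ** 2

    def front_speed(D1=20.0, b_tot=0.5, k_on=1.0, k_off=1.0,
                    a=0.1, D=1.0, L=400.0, nx=800, dt=5e-3, t_end=80.0):
        dx = L / nx
        u = (np.linspace(0.0, L, nx) < 20.0).astype(float)  # seeded front
        w = np.zeros(nx)                                    # bound buffer
        pos = []
        for step in range(int(t_end / dt)):
            react = k_on * u * (b_tot - w) - k_off * w      # binding kinetics
            u += dt * (D * laplacian(u, dx) + u * (u - a) * (1.0 - u) - react)
            w += dt * (D1 * laplacian(w, dx) + react)
            if step % 1000 == 0:
                pos.append((step * dt, np.argmax(u < 0.5) * dx))  # front location
        (t1, x1), (t2, x2) = pos[len(pos) // 2], pos[-1]
        return (x2 - x1) / (t2 - t1)

    print(front_speed(b_tot=0.0), front_speed(b_tot=0.5))  # unbuffered vs buffered
    ```

    Note that the explicit scheme requires D1*dt/dx² ≤ 1/2, so probing the large-D1 regime (where the analysis predicts speed ∝ √D1 for weak buffers) needs smaller time steps than this sketch uses.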

  10. Cosmic-ray slowing down in molecular clouds: Effects of heavy nuclei

    NASA Astrophysics Data System (ADS)

    Chabot, Marin

    2016-01-01

    Context. A cosmic ray (CR) spectrum propagated through the ISM contains very few low-energy (<100 MeV) particles. Recently, a local CR spectrum with strong low-energy components has been proposed to be responsible for the overproduction of the H3+ molecule in some molecular clouds. Aims: We aim to explore the effects of the chemical composition of low-energy cosmic rays (CRs) as they slow down in dense molecular clouds without magnetic fields. We considered both ionization and solid-material processing rates. Methods: We used the galactic CR chemical composition from protons to iron. We propagated two types of CR spectra, with different contents of low-energy CRs, through a cloud made of H2; each spectrum was assumed to be initially identical for all CR species. The stopping and range of ions in matter (SRIM) package provided the necessary stopping powers. The ionization rates were computed with cross sections from recent semi-empirical laws, while effective cross sections for the solid processing rates were parametrized using a power law of the stopping power (power 1 to 2). Results: The relative contribution of protons and heavy CRs to the cloud ionization was found to be identical everywhere in the irradiated cloud, no matter which CR spectrum we used. Compared to classical calculations using protons and the high-energy behaviour of ionization processes (Z² scaling), we reduced the absolute values of the ionization rates by a few tens of percent, but only in the case of the spectrum with a high content of low-energy CRs. Using the same CR spectrum, we found the solid-material processing rates to be reduced between the outer and inner parts of a thick cloud by a factor of 10 (as for the ionization rates) or by a factor of 100, depending on the type of process.
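
    The classical baseline mentioned above, in which each species contributes to ionization in proportion to its abundance times Z², can be made concrete. The abundances below are rough illustrative numbers relative to protons, not the galactic composition tables used in the paper:

    ```python
    # Relative ionization contributions under classical Z^2 scaling, weighted
    # by abundance. Abundances are rough illustrative values, not the paper's.
    species = {              # name: (charge Z, abundance relative to H)
        "H":   (1, 1.0),
        "He":  (2, 0.1),
        "CNO": (7, 1e-3),    # lumped group with an effective Z (assumed)
        "Fe":  (26, 3e-5),
    }
    weights = {name: z ** 2 * ab for name, (z, ab) in species.items()}
    total = sum(weights.values())
    for name, wgt in weights.items():
        print(f"{name:>4}: {100.0 * wgt / total:5.1f}% of the ionization")
    ```

    Even with these crude abundances, species heavier than hydrogen carry a few tens of percent of the ionization, which is why their distinct slowing down inside the cloud modifies rates derived from proton-only calculations.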

  11. Pyrroloquinoline Quinone Slows Down the Progression of Osteoarthritis by Inhibiting Nitric Oxide Production and Metalloproteinase Synthesis.

    PubMed

    Tao, Ran; Wang, Shitao; Xia, Xiaopeng; Wang, Youhua; Cao, Yi; Huang, Yuejiao; Xu, Xinbao; Liu, Zhongbing; Liu, Peichao; Tang, Xiaohang; Liu, Chun; Shen, Gan; Zhang, Dongmei

    2015-08-01

    Osteoarthritis (OA) is the most common arthritis and also one of the major causes of joint pain in elderly people. The aim of this study was to investigate the effects of pyrroloquinoline quinone (PQQ) on degeneration-related changes in osteoarthritis (OA). SW1353 cells were stimulated with IL-1β to establish the chondrocyte injury model in vitro. PQQ was administered to SW1353 cultures 1 h before IL-1β treatment. Amounts of MMP-1, MMP-13, P65, IκBα, ERK, p-ERK, P38, and p-P38 were measured via western blot. The production of NO was determined by the Griess reaction assay and reflected by the iNOS level. Meniscal-ligamentous injury (MLI) was performed on 8-week-old rats to establish the OA rat model. PQQ was injected intraperitoneally 3 days before MLI and consecutively until harvest, and the degeneration level of the articular cartilage was assessed. The expressions of MMP-1 and MMP-13 were significantly downregulated after PQQ treatment compared with the IL-1β-alone group. NO production and iNOS expression were decreased by PQQ treatment compared with the control group. Amounts of nuclear P65 were upregulated in SW1353 after stimulation with IL-1β, while PQQ significantly inhibited the translocation. In the rat OA model, treatment with PQQ markedly decelerated the degeneration of articular cartilage. These findings suggest that PQQ can inhibit the expression of OA-related catabolic proteins (MMPs) and NO production and thus slow down articular cartilage degeneration and OA progression. Owing to its beneficial effects, PQQ is expected to find a novel pharmacological application in OA clinical prevention and treatment in the near future.

  12. Does time ever fly or slow down? The difficult interpretation of psychophysical data on time perception

    PubMed Central

    García-Pérez, Miguel A.

    2014-01-01

    Time perception is studied with subjective or semi-objective psychophysical methods. With subjective methods, observers provide quantitative estimates of duration and data depict the psychophysical function relating subjective duration to objective duration. With semi-objective methods, observers provide categorical or comparative judgments of duration and data depict the psychometric function relating the probability of a certain judgment to objective duration. Both approaches are used to study whether subjective and objective time run at the same pace or whether time flies or slows down under certain conditions. We analyze theoretical aspects affecting the interpretation of data gathered with the most widely used semi-objective methods, including single-presentation and paired-comparison methods. For this purpose, a formal model of psychophysical performance is used in which subjective duration is represented via a psychophysical function and the scalar property. This provides the timing component of the model, which is invariant across methods. A decisional component that varies across methods reflects how observers use subjective durations to make judgments and give the responses requested under each method. Application of the model shows that psychometric functions in single-presentation methods are uninterpretable because the various influences on observed performance are inextricably confounded in the data. In contrast, data gathered with paired-comparison methods permit separating out those influences. Prevalent approaches to fitting psychometric functions to data are also discussed and shown to be inconsistent with widely accepted principles of time perception, implicitly assuming instead that subjective time equals objective time and that observed differences across conditions do not reflect differences in perceived duration but criterion shifts. These analyses prompt evidence-based recommendations for best methodological practice in studies on time
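
    The timing/decisional split described above can be sketched concretely. Below, subjective duration follows the scalar property (normal noise with standard deviation proportional to duration) and the decisional stage simply picks the interval with the larger subjective duration, as in a paired-comparison task; all parameter values are illustrative:

    ```python
    # Paired-comparison duration judgments under the scalar property.
    # Subjective duration ~ Normal(t, (w*t)^2) with Weber fraction w; the
    # decision rule picks the larger subjective duration. Values illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    w = 0.15                          # Weber fraction (assumed)
    standard = 1.0                    # standard duration in seconds
    tests = np.linspace(0.6, 1.4, 9)  # comparison durations
    n = 5000                          # trials per comparison level

    for t in tests:
        s_std = rng.normal(standard, w * standard, n)  # subjective standard
        s_tst = rng.normal(t, w * t, n)                # subjective test
        print(f"test = {t:4.2f} s -> P(test judged longer) = "
              f"{np.mean(s_tst > s_std):4.2f}")
    ```

    Fitting a psychometric function to these proportions recovers the point of subjective equality; the argument above is that only paired-comparison designs let the timing and decisional components be separated in this way.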

  13. Slowing down fat digestion and absorption by an oxadiazolone inhibitor targeting selectively gastric lipolysis.

    PubMed

    Point, Vanessa; Bénarouche, Anais; Zarrillo, Julie; Guy, Alexandre; Magnez, Romain; Fonseca, Laurence; Raux, Brigitt; Leclaire, Julien; Buono, Gérard; Fotiadu, Frédéric; Durand, Thierry; Carrière, Frédéric; Vaysse, Carole; Couëdelo, Leslie; Cavalier, Jean-François

    2016-11-10

    Based on a previous study and in silico molecular docking experiments, we have designed and synthesized a new series of ten 5-Alkoxy-N-3-(3-PhenoxyPhenyl)-1,3,4-Oxadiazol-2(3H)-one derivatives (RmPPOX). These molecules were further evaluated as selective and potent inhibitors of mammalian digestive lipases: purified dog gastric lipase (DGL) and guinea pig pancreatic lipase related protein 2 (GPLRP2), as well as porcine (PPL) and human (HPL) pancreatic lipases contained in porcine pancreatic extracts (PPE) and human pancreatic juices (HPJ), respectively. These compounds were found to strongly discriminate classical pancreatic lipases (poorly inhibited) from gastric lipase (fully inhibited). Among them, the 5-(2-(Benzyloxy)ethoxy)-3-(3-PhenoxyPhenyl)-1,3,4-Oxadiazol-2(3H)-one (BemPPOX) was identified as the most potent inhibitor of DGL, even more active than the FDA-approved drug Orlistat. BemPPOX and Orlistat were further compared in vitro in the course of test meal digestion, and in vivo with a mesenteric lymph duct cannulated rat model to evaluate their respective impacts on fat absorption. While Orlistat inhibited both gastric and duodenal lipolysis and drastically reduced fat absorption in rats, BemPPOX showed a specific action on gastric lipolysis that slowed down the overall lipolysis process and led to a subsequent reduction of around 55% of the intestinal absorption of fatty acids compared to controls. All these data promote BemPPOX as a potent candidate to efficiently regulate the gastrointestinal lipolysis, and to investigate its link with satiety mechanisms and therefore develop new strategies to "fight against obesity". PMID:27543878

  14. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY12 Status Report

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Siciliano, Edward R.; Warren, Glen A.

    2012-09-28

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory methods. This document is a progress report for FY2012 PNNL analysis and algorithm development. Progress made by PNNL in FY2012 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel assemblies. PNNL further refined the semi-empirical model developed in FY2011, based on singular value decomposition (SVD), to numerically account for the effects of self-shielding. The average uncertainty in the Pu mass across the NGSI-64 fuel assemblies was shown to be less than 3% using only six calibration assemblies with a 2% uncertainty in the isotopic masses. When calibrated against the six NGSI-64 fuel assemblies, the algorithm was able to determine the total Pu mass within <2% uncertainty for the 27 diversion cases also developed under NGSI. Two purely empirical algorithms were developed that do not require the use of Pu isotopic fission chambers. The semi-empirical and purely empirical algorithms were successfully tested using MCNPX simulations as well as applied to experimental data measured by RPI using their LSDS. The algorithms were able to describe the 235U masses of the RPI measurements with an average uncertainty of 2.3%. Analyses were conducted that provided valuable insight with regard to design requirements (e

  15. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.

    2011-09-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space was demonstrated. Also, in FY2011, PNNL continued to develop an analytical model. Such efforts included the addition of six more non-fissile absorbers in the analytical shielding function and the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as the pure

  16. First test experiment to produce the slowed-down RI beam with the momentum-compression mode at RIBF

    NASA Astrophysics Data System (ADS)

    Sumikama, T.; Ahn, D. S.; Fukuda, N.; Inabe, N.; Kubo, T.; Shimizu, Y.; Suzuki, H.; Takeda, H.; Aoi, N.; Beaumel, D.; Hasegawa, K.; Ideguchi, E.; Imai, N.; Kobayashi, T.; Matsushita, M.; Michimasa, S.; Otsu, H.; Shimoura, S.; Teranishi, T.

    2016-06-01

    The 82Ge beam has been produced by the in-flight fission reaction of the 238U primary beam at 345 MeV/u at the RIKEN RI Beam Factory, and slowed down to about 15 MeV/u using energy degraders. The momentum-compression mode was applied to the second stage of the BigRIPS separator to reduce the momentum spread. The energy was successfully reduced down to 13 ± 2.5 MeV/u, as expected. The focus was not optimized at the end of the second stage, so the beam size was larger than expected. The transmission of the second stage was half of the simulated value, mainly due to the beam being out of focus. The two-stage separation worked very well for the slowed-down beam with the momentum-compression mode.

  17. 4-Methoxyanilinium perrhenate 18-crown-6: a new ferroelectric with order originating in swinglike motion slowing down.

    PubMed

    Fu, Da-Wei; Cai, Hong-Ling; Li, Shen-Hui; Ye, Qiong; Zhou, Lei; Zhang, Wen; Zhang, Yi; Deng, Feng; Xiong, Ren-Gen

    2013-06-21

    A supramolecular adduct, 4-methoxyanilinium perrhenate 18-crown-6, was synthesized, which undergoes a disorder-order structural phase transition at about 153 K (Tc) due to slowing down of a pendulumlike motion of the 4-methoxyanilinium group upon cooling. Ferroelectric hysteresis loop measurements give a spontaneous polarization of 1.2 μC/cm2. Temperature-dependent solid-state nuclear magnetic resonance measurements reveal three kinds of molecular motions existing in the compound: pendulumlike swing of the 4-methoxyanilinium cation, rotation of the 18-crown-6 ring, and rotation of the methoxyl group. When the temperature decreases, the first two motions are frozen at about 153 K and the methoxyl group becomes rigid at around 126 K. The slowing down or freezing of the pendulumlike motion of the cation triggered by the temperature decrease corresponds to the centrosymmetric-to-noncentrosymmetric arrangement of the compound, resulting in the formation of ferroelectricity.

  18. HF(v′ = 3) forward scattering in the F + H2 reaction: Shape resonance and slow-down mechanism

    PubMed Central

    Wang, Xingan; Dong, Wenrui; Qiu, Minghui; Ren, Zefeng; Che, Li; Dai, Dongxu; Wang, Xiuyan; Yang, Xueming; Sun, Zhigang; Fu, Bina; Lee, Soo-Y.; Xu, Xin; Zhang, Dong H.

    2008-01-01

    Crossed molecular beam experiments and accurate quantum dynamics calculations have been carried out to address the long-standing and intriguing issue of the forward scattering observed in the F + H2 → HF(v′ = 3) + H reaction. Our study reveals that forward scattering in this reaction channel is not caused by Feshbach or dynamical resonances as in the F + H2 → HF(v′ = 2) + H reaction. It is caused predominantly by the slow-down mechanism over the centrifugal barrier in the exit channel, with some small contribution from the shape resonance mechanism in a very small collision energy regime slightly above the HF(v′ = 3) threshold. Our analysis also shows that forward scattering caused by dynamical resonances can very likely be accompanied by forward scattering in a different product vibrational state caused by a slow-down mechanism. PMID:18434547

  19. Lack of Critical Slowing Down Suggests that Financial Meltdowns Are Not Critical Transitions, yet Rising Variability Could Signal Systemic Risk

    PubMed Central

    Guttal, Vishwesha; Raghavendra, Srinivas; Goel, Nikunj; Hoarau, Quentin

    2016-01-01

    Complex systems inspired analysis suggests a hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies on climatic and ecological dynamical systems have shown that approach to tipping points is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms. PMID:26761792

  3. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY12 Status Report

    SciTech Connect

    Warren, Glen A.; Anderson, Kevin K.; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, A.; Haight, R. C.; Harris, Jason; Imel, G. R.; Kulisek, Jonathan A.; O'Donnell, J. M.; Stewart, T.; Weltz, Adam

    2012-10-01

    Executive Summary: The Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign is supporting a multi-institutional collaboration to study the feasibility of using Lead Slowing Down Spectroscopy (LSDS) to conduct direct, independent and accurate assay of fissile isotopes in used fuel assemblies. The collaboration consists of Pacific Northwest National Laboratory (PNNL), Los Alamos National Laboratory (LANL), Rensselaer Polytechnic Institute (RPI), and Idaho State University (ISU). There are three main challenges to implementing LSDS to assay used fuel assemblies: the development of an algorithm for interpreting the data with an acceptable accuracy for the fissile masses, the development of suitable detectors for the technique, and the experimental benchmarking of the approach. This report summarizes the progress in these areas made by the collaboration during FY2012. Significant progress was made on the project in FY2012. Extensive characterization of a “semi-empirical” algorithm was conducted. For example, we studied the impact on the accuracy of this algorithm of the minimization of the calibration set, of uncertainties in the calibration masses, and of the choice of time window. Issues such as lead size, the number of required neutrons, placement of the neutron source, and the impact of cadmium around the detectors were also studied. In addition, new algorithms were developed that do not require the use of plutonium fission chambers. These algorithms were applied to measurement data taken by RPI and shown to determine the 235U mass to within 4%. For detectors, a new concept for a fast neutron detector involving 4He recoil from neutron scattering was investigated. The detector has the potential to provide a couple of orders of magnitude more sensitivity than 238U fission chambers. Progress was also made on the more conventional approach of using 232Th fission chambers as fast neutron detectors. For

  4. Measurement of low energy neutron spectrum below 10 keV with the slowing down time method

    NASA Astrophysics Data System (ADS)

    Maekawa, F.; Oyama, Y.

    1996-02-01

    No general-purpose method for measuring neutron spectra in the eV energy region has been established so far. Neutron spectrum measurement in this energy region was attempted by applying the slowing-down time (SDT) method, for the first time, inside two types of fusion-reactor shield, a type 316 stainless steel (SS316) assembly and an SS316/water layered assembly, in combination with pulsed neutrons. In the SS316 assembly, neutron spectra below 1 keV were measured with an accuracy of better than 10%. Although the SDT method was expected to be very difficult to apply to the SS316/water assembly, since it contains hydrogen, the lightest atom, the measurement demonstrated that the SDT method is still effective for such a shield assembly. The SDT method was also extended to thermal-flux measurement in the SS316/water assembly. The present study demonstrated that the SDT method is effective for neutron spectrum measurement in the eV energy region.

  5. Slowing down terahertz waves with tunable group velocities in a broad frequency range by surface magneto plasmons.

    PubMed

    Hu, Bin; Wang, Qi Jie; Zhang, Ying

    2012-04-23

    This paper proposes a broadly tunable terahertz (THz) slow-light system in a semiconductor-insulator-semiconductor structure. Subject to an external magnetic field, the structure supports two surface magneto plasmon (SMP) bands, above and below the surface plasma frequency, respectively. Both SMP bands can be tuned by the external magnetic field. Numerical studies show that, by leveraging the two tunable bands, the frequency and the group velocity of the slowed-down THz wave can be widely tuned from 0.3 THz to 10 THz and from 1c to 10^-6 c, respectively, as the external magnetic field increases up to 6 Tesla. The proposed method based on the two SMP bands can be widely used for many other plasmonic devices.

  6. Regulation reform slows down

    SciTech Connect

    1995-03-29

    Regulatory reformers in Congress are easing off the accelerator as they recognize that some of their more far-reaching proposals lack sufficient support to win passage. Last week the proposed one-year moratorium on new regulations was set back in the Senate by its main sponsor, Sen. Don Nickles (R., OK), who now seeks to replace it with a more moderate bill. Nickles's substitute bill would give Congress 45 days after a regulation is issued to decide whether to reject it. It also retroactively allows for review of 80 regulations issued since November 9, 1994. Asked how his new proposal is superior to a moratorium, which is sharply opposed by the Clinton Administration, Nickles says he thinks it is better because it is permanent. The Chemical Manufacturers Association (CMA) has not publicly made a regulatory moratorium a top priority, but has quietly supported it by joining with other industry groups lobbying on the issue. A moratorium would halt EPA expansion of the Toxics Release Inventory (TRI) and allow the delisting of several TRI chemicals.

  7. Rural Growth Slows Down.

    ERIC Educational Resources Information Center

    Henry, Mark; And Others

    1987-01-01

    After a decade of growth, rural income, population, and overall economic activity have stalled and again lag behind urban trends. Causes include banking and transportation deregulation, international competition, and agricultural finance problems. Only nonmetropolitan counties dependent on retirement, government, and trade show continuing income growth…

  8. Measurements of (n,α) cross-section of small samples using a lead-slowing-down-spectrometer

    NASA Astrophysics Data System (ADS)

    Romano, Catherine; Danon, Yaron; Haight, Robert C.; Wender, Stephen A.; Vieira, David J.; Bond, Evelyn M.; Rundberg, Robert S.; Wilhelmy, Jerry B.; O'Donnell, John M.; Michaudon, Andre F.; Bredeweg, Todd A.; Rochman, Dimitri; Granier, Thierry; Ethvignot, Thierry

    2006-06-01

    At the Los Alamos Neutron Science Center (LANSCE), a compensated ionization chamber (CIC) was placed in a lead slowing-down spectrometer (LSDS) to measure the 6Li(n,α)3H cross-section as a feasibility test for further work. The LSDS consists of a 1.2 m cube of lead with a tungsten target in the center, where spallation neutrons are produced under bombardment with pulses of 800 MeV protons. The resulting neutron flux is of the order of 10^14 n/cm²/s, which allows cross-section measurements on samples of the order of tens of nanograms. The initial experiment measured a 91 μg sample of natural lithium fluoride. Cross-section measurements were obtained in the 0.1 eV-2 keV energy range. A 62 μg sample was then placed in the chamber with a higher neutron beam intensity, and data were obtained in the 0.1-300 eV range. Adjustments in chamber dimensions and electronic configuration will improve gamma-flash compensation at high beam intensity, decrease the dead time, and increase the energy range over which data can be obtained. The intense neutron flux will allow the use of a smaller sample.

  9. Time-Spectral Analysis Methods for Spent Fuel Assay Using Lead Slowing-Down Spectroscopy

    SciTech Connect

    Smith, Leon E.; Anderson, Kevin K.; Ressler, Jennifer J.; Shaver, Mark W.

    2010-08-08

    Nondestructive measurement of the mass of fissile isotopes in spent nuclear fuel is a considerable challenge in the safeguarding of nuclear fuel cycles. A nondestructive assay technology that could provide direct measurement of fissile mass, particularly for the plutonium (Pu) isotopes, and improve upon the uncertainty of today’s confirmatory methods is needed. Lead slowing-down spectroscopy (LSDS) has been studied for the spent-fuel application previously, but the nonlinear effects of assembly self-shielding (of the interrogating neutron population) have led to discouraging assay accuracy for realistic pressurized water reactor fuels. In this paper, we describe the development of time-spectral analysis algorithms for LSDS intended to overcome these self-shielding effects. The algorithm incorporates the tabulated energy-dependent cross sections of key fissile and absorbing isotopes, but leaves their masses as free variables. Multi-parameter regression analysis is then used to directly calculate not only the mass of fissile isotopes in the fuel assembly (e.g., Pu-239, U-235, and Pu-241), but also the mass of key absorbing isotopes such as Pu-240 and U-238. Modeling-based assay results using a first-order self-shielding relationship indicate that LSDS has the potential to directly measure fissile isotopes with less than 5% average relative error over a wide fuel burnup range. Shortcomings in the first-order self-shielding model and methods to improve the formulation are described.
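
    The essence of the regression step can be sketched as follows, assuming synthetic per-isotope response templates in place of the tabulated cross sections and ignoring the self-shielding correction discussed in the record (all names and values are illustrative):

      import numpy as np
      from scipy.optimize import nnls

      time_bins, isotopes = 200, ["Pu-239", "U-235", "Pu-241"]

      # R[:, j]: expected counts per unit mass of isotope j in each time bin;
      # in the real algorithm this is built from tabulated energy-dependent
      # cross sections mapped onto the slowing-down time grid.
      rng = np.random.default_rng(1)
      R = np.abs(rng.lognormal(size=(time_bins, len(isotopes))))

      true_mass = np.array([50.0, 30.0, 8.0])          # grams (synthetic)
      measured = R @ true_mass + rng.normal(0.0, 1.0, time_bins)

      mass_fit, _ = nnls(R, measured)                  # non-negative masses
      for name, m in zip(isotopes, mass_fit):
          print(f"{name}: {m:.1f} g")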

  10. D-Factor: A Quantitative Model of Application Slow-Down in Multi-Resource Shared Systems

    SciTech Connect

    Lim, Seung-Hwan; Huh, Jae-Seok; Kim, Youngjae; Shipman, Galen M; Das, Chita

    2012-01-01

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this paper, we analyze the slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors, as sketched below. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We also show that the model can be integrated with an existing on-line scheduler to minimize the makespan of workloads.
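
    One way to read the quadratic model: each job carries a loading vector over the shared resources, and a job's dilation is one plus a quadratic interaction between its own loading vector and the aggregate load of its co-runners. The interaction matrix and the exact functional form below are assumptions of this sketch, not the published D-factor formulation:

      import numpy as np

      def dilation_factors(L, W):
          """L: (jobs x resources) loading statistics; W: resource coupling."""
          total = L.sum(axis=0)
          # contention felt by job i comes from every *other* job's load
          return 1.0 + np.einsum('ir,rs,is->i', L, W, total[None, :] - L)

      L = np.array([[0.6, 0.1],      # job 0: CPU-heavy
                    [0.2, 0.7]])     # job 1: I/O-heavy
      W = np.eye(2)                  # like resources contend; unlike ones do not
      print(dilation_factors(L, W))  # completion-time multipliers, >= 1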

  11. MicroRNA-124 slows down the progression of Huntington’s disease by promoting neurogenesis in the striatum

    PubMed Central

    Liu, Tian; Im, Wooseok; Mook-Jung, Inhee; Kim, Manho

    2015-01-01

    MicroRNA-124 contributes to neurogenesis through regulating its targets, but its expression both in the brain of Huntington’s disease mouse models and patients is decreased. However, the effects of microRNA-124 on the progression of Huntington’s disease have not been reported. Results from this study showed that microRNA-124 increased the latency to fall for each R6/2 Huntington’s disease transgenic mouse in the rotarod test. 5-Bromo-2’-deoxyuridine (BrdU) staining of the striatum shows an increase in neurogenesis. In addition, brain-derived neurotrophic factor and peroxisome proliferator-activated receptor gamma coactivator 1-alpha protein levels in the striatum were increased and SRY-related HMG box transcription factor 9 protein level was decreased. These findings suggest that microRNA-124 slows down the progression of Huntington’s disease possibly through its important role in neuronal differentiation and survival. PMID:26109954

  12. FOXO/DAF-16 Activation Slows Down Turnover of the Majority of Proteins in C. elegans.

    PubMed

    Dhondt, Ineke; Petyuk, Vladislav A; Cai, Huaihan; Vandemeulebroucke, Lieselot; Vierstraete, Andy; Smith, Richard D; Depuydt, Geert; Braeckman, Bart P

    2016-09-13

    Most aging hypotheses assume the accumulation of damage, resulting in gradual physiological decline and, ultimately, death. Avoiding protein damage accumulation by enhanced turnover should slow down the aging process and extend the lifespan. However, lowering translational efficiency extends rather than shortens the lifespan in C. elegans. We studied the turnover of individual proteins in the long-lived daf-2 mutant by combining SILeNCe (stable isotope labeling by nitrogen in Caenorhabditis elegans) and mass spectrometry. Intriguingly, the majority of proteins displayed prolonged half-lives in daf-2, whereas others remained unchanged, signifying that longevity is not supported by high protein turnover. This slowdown was most prominent for translation-related and mitochondrial proteins. In contrast, the high turnover of lysosomal hydrolases and the very low turnover of cytoskeletal proteins remained largely unchanged. The slowdown of protein dynamics and the decreased abundance of the translational machinery may point to the importance of anabolic attenuation in lifespan extension, as suggested by the hyperfunction theory. PMID:27626670

  13. Measurement and Analysis Plan for Investigation of Spent-Fuel Assay Using Lead Slowing-Down Spectroscopy

    SciTech Connect

    Smith, Leon E.; Haas, Derek A.; Gavron, Victor A.; Imel, G. R.; Ressler, Jennifer J.; Bowyer, Sonya M.; Danon, Y.; Beller, D.

    2009-09-25

    Under funding from the Department of Energy Office of Nuclear Energy’s Materials, Protection, Accounting, and Control for Transmutation (MPACT) program (formerly the Advanced Fuel Cycle Initiative Safeguards Campaign), Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory (LANL) are collaborating to study the viability of lead slowing-down spectroscopy (LSDS) for spent-fuel assay. Based on the results of previous simulation studies conducted by PNNL and LANL to estimate potential LSDS performance, a more comprehensive study of LSDS viability has been defined. That study includes benchmarking measurements, development and testing of key enabling instrumentation, and continued study of time-spectra analysis methods. This report satisfies the requirements for a PNNL/LANL deliverable that describes the objectives, plans and contributing organizations for a comprehensive three-year study of LSDS for spent-fuel assay. This deliverable was generated largely during the LSDS workshop held on August 25-26, 2009 at Rensselaer Polytechnic Institute (RPI). The workshop itself was a prominent milestone in the FY09 MPACT project and is also described within this report.

  14. Epigenomic maintenance through dietary intervention can facilitate DNA repair process to slow down the progress of premature aging.

    PubMed

    Ghosh, Shampa; Sinha, Jitendra Kumar; Raghunath, Manchala

    2016-09-01

    DNA damage caused by various sources remains one of the most researched topics in the area of aging and neurodegeneration. Increased DNA damage causes premature aging. Aging is plastic and is characterised by the decline in the ability of a cell/organism to maintain genomic stability. Lifespan can be modulated by various interventions like calorie restriction, a balanced diet of macro- and micronutrients, or supplementation with nutrients/nutrient formulations such as Amalaki rasayana, docosahexaenoic acid, resveratrol, curcumin, etc. Increased levels of DNA damage in the form of double-stranded and single-stranded breaks are associated with decreased longevity in animal models like WNIN/Ob obese rats. Erroneous DNA repair can result in the accumulation of DNA damage products, which in turn result in premature aging disorders such as Hutchinson-Gilford progeria syndrome. Epigenomic studies of the aging process have opened a completely new arena for research and the development of drugs and therapeutic agents. We propose here that agents or interventions that can maintain epigenomic stability and facilitate the DNA repair process can slow down the progress of premature aging, if not completely prevent it. © 2016 IUBMB Life, 68(9):717-721, 2016. PMID:27364681

  15. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  16. [Slowing down the rate of irreversible age-related atrophy of the thymus gland by atopic autotransplantation of its tissue, subjected to long-term cryoconservation].

    PubMed

    Kulikov, A V; Arkhipova, L V; Smirnova, G N; Novoselova, E G; Shpurova, N A; Shishova, N V; Sukhikh, G T

    2010-01-01

    An experimental procedure has been developed that makes it possible to slow down the rate of irreversible atrophy of the thymus gland. Atopic autotransplantation of thymus tissue subjected to prolonged cryoconservation makes it possible to inhibit the aging of the organism with respect to several biochemical and immunological indicators.

  17. Progress on Establishing the Feasibility of Lead Slowing Down Spectroscopy for Direct Measurement of Plutonium in Used Fuel

    SciTech Connect

    Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.; Casella, Andrew M.; Gesh, Christopher J.; Smith, L. E.; Gavron, A.; Devlin, M.; O'Donnell, J. M.; Haight, R. C.; Danon, Yaron; Becker, Bjorn; Imel, G. R.; Beller, D.

    2012-07-19

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT continues to support a multi-institutional collaboration to address the feasibility of Lead Slowing Down Spectroscopy (LSDS) as an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory assay methods. An LSDS comprises a stack of lead (typically 1-6 m³) into which the material to be measured is placed and a pulse of neutrons is injected. The neutrons in this pulse lose energy through inelastic and (subsequently) elastic scattering, and the average energy of the neutrons decreases with time according to a well-defined relationship. In the interrogation energy region (~0.1-1000 eV) the neutrons have little energy spread (~30%) about the average neutron energy. Due to this characteristic, the energy of the (assay) neutrons can be determined by measuring the time elapsed since the neutron pulse. By measuring the induced fission neutrons emitted from the used fuel, it is possible to determine isotopic-mass content by unfolding the unique structure of isotopic resonances across the interrogation energy region. This paper will present efforts on the development of time-spectral analysis algorithms, fast neutron detector advances, and validation and testing measurements.
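
    The "well-defined relationship" between slowing-down time and mean neutron energy in lead is conventionally written E = K/(t + t0)², with K the moderation constant. A minimal sketch with typical literature-style values (the constants below are assumptions of this sketch; an actual LSDS is calibrated directly):

      K_KEV_US2 = 165.0   # moderation constant, keV * us^2 (assumed value)
      T0_US = 0.3         # small time offset, us (assumed value)

      def mean_energy_kev(t_us):
          """Mean interrogating-neutron energy at slowing-down time t (us)."""
          return K_KEV_US2 / (t_us + T0_US) ** 2

      for t in (1.0, 10.0, 100.0):
          print(f"t = {t:6.1f} us  ->  E ~ {mean_energy_kev(t):12.6f} keV")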

  18. A Reduction in Ribonucleotide Reductase Activity Slows Down the Chromosome Replication Fork but Does Not Change Its Localization

    PubMed Central

    Odsbu, Ingvild; Morigen; Skarstad, Kirsten

    2009-01-01

    Background It has been proposed that the enzymes of nucleotide biosynthesis may be compartmentalized or concentrated in a structure affecting the organization of newly replicated DNA. Here we have investigated the effect of changes in ribonucleotide reductase (RNR) activity on chromosome replication and the organization of replication forks in Escherichia coli. Methodology/Principal Findings Reduced concentrations of deoxyribonucleotides (dNTPs), obtained by reducing the activity of wild-type RNR by treatment with hydroxyurea or by mutation, resulted in a lengthening of the replication period. The replication fork speed was found to be reduced gradually, in proportion to moderate reductions in nucleotide availability. Cells with highly extended C periods showed a “delay” in cell division, i.e., a higher cell mass. Visualization of SeqA structures by immunofluorescence indicated no change in the organization of the new DNA upon moderate limitation of RNR activity. Severe nucleotide limitation led to replication fork stalling and reversal. Well-defined SeqA structures were not found in situations of extensive replication fork repair. In cells with stalled forks obtained by UV irradiation, considerable DNA compaction was observed, possibly indicating a reorganization of the DNA into a “repair structure” during the initial phase of the SOS response. Conclusion/Significance The results indicate that the replication fork is slowed down in a controlled manner during moderate nucleotide depletion and that a change in the activity of RNR does not lead to a change in the organization of newly replicated DNA. Control of cell division, but not control of initiation, was affected by the changes in replication elongation. PMID:19898675

  19. A method for (n,alpha) and (n,p) cross section measurements using a lead slowing-down spectrometer

    NASA Astrophysics Data System (ADS)

    Thompson, Jason Tyler

    The need for nuclear data comes from several sources, including astrophysics, stockpile stewardship, and reactor design. Photodisintegration, neutron capture, and charged-particle-out reactions on stable or short-lived radioisotopes play crucial roles during stellar evolution and in forming solar isotopic abundances, and these reactions can affect the safety of our national weapons stockpile and criticality and safety calculations for reactors. Although models can be used to predict some of these values, the predictions are only as good as the experimental data that constrain them. For neutron-induced emission of α particles and protons ((n,α) and (n,p) reactions) at energies below 1 MeV, the experimental data are at best scarce, and models must rely on extrapolations from unlike situations (i.e. different reactions, isotopes, and energies), providing ample room for uncertainty. In this work a new method of measuring energy-dependent (n,α) and (n,p) cross sections was developed for the energy range of 0.1 eV - ~100 keV using a lead slowing-down spectrometer (LSDS). The LSDS provides a ~10^4 increase in neutron flux over the more conventionally used time-of-flight (ToF) methods at equivalent beam conditions, allowing for the measurement of small cross sections (µb to mb) while using small sample masses (µg to mg). Several detector concepts were designed and tested, including specially constructed Canberra passivated, implanted, planar silicon (PIPS) detectors and gas-electron-multiplier (GEM) foils. All designs are compensated to minimize γ-flash problems. The GEM detector was found to function satisfactorily for (n,α) measurements, but the PIPS detectors were found to be better suited for (n,p) reaction measurements. A digital data acquisition (DAQ) system was programmed such that background can be measured simultaneously with the reaction cross section. Measurements of the 147Sm(n,α)144Nd and 149Sm(n,α)146Nd reaction cross sections were

  20. Cross-section measurements for 239Pu(n,f) and 6Li(n, α) with a lead slowing-down spectrometer

    NASA Astrophysics Data System (ADS)

    Rochman, D.; Haight, R. C.; O'Donnell, J. M.; Wender, S. A.; Vieira, D. J.; Bond, E. M.; Bredeweg, T. A.; Wilhelmy, J. B.; Granier, T.; Ethvignot, T.; Petit, M.; Danon, Y.; Romano, C.

    2006-08-01

    We present fission cross-section measurements performed with ~10 ng of 239Pu using the LANSCE Lead Slowing-Down Spectrometer. Results of 6Li(n,α) measurements with a sample size of 760 ng of 6Li are also reported. This technical achievement demonstrates the feasibility of measuring neutron-induced fission cross sections with 10 ng samples of fissile actinides that are available only in ultra-small quantities. Furthermore, the results on neutron-induced alpha emission show that measurements for astrophysics purposes are feasible with the LSDS.

  1. Effect of the size of experimental channels of the lead slowing-down spectrometer SVZ-100 (Institute for Nuclear Research, Moscow) on the moderation constant

    SciTech Connect

    Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M.; Ilic, R. D.

    2013-04-15

    Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10^3 to 10^4 times higher than that of time-of-flight (TOF) spectrometers. The high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross sections for samples with masses of a few micrograms. These features define a niche for the application of LSD spectrometers in measuring neutron cross sections for elements hardly available in macroscopic amounts, in particular for actinides. A mathematical simulation of the parameters of the SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive in calculations to the size and shape of the detecting volumes and, hence, to the real size of the experimental channels of the LSD spectrometer.

  2. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    PubMed

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down the presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-ROM, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in slow conditions. The findings may give new perspectives for understanding and intervention for verbal and emotional perceptive and communicative impairments in autistic populations. PMID:17029018

  3. Significant change in the construction of a door to a room with slowed down neutron field by means of commonly used inexpensive protective materials.

    PubMed

    Konefał, Adam; Łaciak, Marcin; Dawidowska, Anna; Osewski, Wojciech

    2014-12-01

    A detailed analysis of the nuclear reactions occurring in the door materials is presented for a typical construction of an entrance door to a room with a slowed-down neutron field. Changes in the construction of the door were determined that effectively reduce the level of neutron and gamma radiation in the vicinity of the door in a room adjoining the neutron-field room. Optimisation of the door construction was performed with the use of Monte Carlo calculations (GEANT4). The construction proposed in this paper is based on commonly used, inexpensive protective materials such as borax (13.4 cm), lead (4 cm) and stainless steel (0.1 and 0.5 cm on the side of the neutron-field room and of the adjoining room, respectively). The improved construction of the door, worked out in the presented studies, can be an effective protection against neutrons with energies up to 1 MeV. PMID:24324249

  4. Effect of the size of experimental channels of the lead slowing-down spectrometer SVZ-100 (Institute for Nuclear Research, Moscow) on the moderation constant

    NASA Astrophysics Data System (ADS)

    Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M.; Ilić, R. D.

    2013-04-01

    Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 103 to 104 times higher than that of time-of-flight (TOF) spectrometers. A high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross section for samples of mass about several micrograms. These features specify a niche for the application of LSD spectrometers in measuring neutron cross sections for elements hardly available in macroscopic amounts—in particular, for actinides. A mathematical simulation of the parameters of SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive to the size and shape of detecting volumes in calculations and, hence, to the real size of experimental channels of the LSD spectrometer.

  6. Spermidine application to young developing peach fruits leads to a slowing down of ripening by impairing ripening-related ethylene and auxin metabolism and signaling.

    PubMed

    Torrigiani, Patrizia; Bressanin, Daniela; Ruiz, Karina Beatriz; Tadiello, Alice; Trainotti, Livio; Bonghi, Claudio; Ziosi, Vanina; Costa, Guglielmo

    2012-09-01

    Peach (Prunus persica var. laevis Gray) was chosen to unravel the molecular basis underlying the ability of spermidine (Sd) to influence fruit development and ripening. Field applications of 1 mM Sd on peach fruit at an early developmental stage, 41 days after full bloom (dAFB), i.e. at late stage S1, led to a slowing down of fruit ripening. At commercial harvest (125 dAFB, S4II) Sd-treated fruits showed reduced ethylene production and flesh softening. The endogenous concentration of free and insoluble conjugated polyamines (PAs) increased (0.3-2.6-fold) 1 day after treatment (short-term response), but soon declined to control levels; starting from S3/S4, when soluble conjugated forms increased (up to five-fold relative to controls at ripening), PA levels became more abundant in treated fruits (long-term response). Real-time reverse transcription-polymerase chain reaction analyses revealed that peaks in transcript levels of fruit developmental marker genes were shifted ahead, in accord with a developmental slowing down. At ripening (S4I-S4II) the upregulation of the ethylene biosynthetic genes ACO1 and ACS1 was dramatically counteracted by Sd, and this led to a strong downregulation of genes responsible for fruit softening, such as PG and PMEI. Auxin-related gene expression was also altered both in the short term (TRPB) and in the long term (GH3, TIR1 and PIN1), indicating that auxin plays different roles during development and ripening processes. Messenger RNA amounts of other hormone-related ripening-regulated genes, such as NCED and GA2-OX, were strongly downregulated at maturity. The results suggest that Sd interferes with fruit development/ripening by interacting with multiple hormonal pathways. PMID:22409726

  7. Slow-downs and speed-ups of India-Eurasia convergence since ~20 Ma: Data-noise, uncertainties and dynamic implications

    NASA Astrophysics Data System (ADS)

    Iaffaldano, G.; Bodin, T.; Sambridge, M.

    2012-12-01

    India-Somalia and North America-Eurasia relative motions since Early Miocene (~20 Ma) have been recently reconstructed at unprecedented temporal resolution from magnetic surveys of the Carlsberg and northern Mid-Atlantic Ridges. These new datasets revamped interest in the convergence of India relative to Eurasia, which is obtained from the India-Somalia-Nubia-North America-Eurasia plate circuit. Unless finite rotations are arbitrarily smoothed through time, however, the reconstructed kinematics (i.e. stage Euler vectors) appear to be surprisingly unusual over the past ~20 Myr. In fact, the Euler pole for the India-Eurasia rigid motion scattered erratically over a broad region, while the associated angular velocity underwent sudden increases and decreases. As a consequence, convergence across the Himalayan front featured significant speed-ups as well as slow-downs with almost no consistent trend. Arguably, this pattern arises from the presence of data-noise that biases kinematic reconstructions, particularly at high temporal resolution. The rapid and important India-Eurasia plate-motion changes reconstructed since Early Miocene are likely to be of apparent nature, because they cannot result even from the most optimistic estimates of torques associated, for instance, with the descent of the Indian slab into Earth's mantle. Our recent work aimed at reducing noise in finite-rotation datasets via an expanded Bayesian formulation, which offers several advantages over arbitrary smoothing methods. Here we build on this advance and revise the India-Eurasia kinematics since ~20 Ma, accounting also for three alternative histories of rifting in Africa. We find that India-Eurasia kinematics are simpler and, most importantly, geodynamically plausible upon noise reduction. Convergence across the Himalayan front decreased systematically until ~10 Ma, but then increased moderately until the present-day. We test with global dynamic models of the coupled mantle/lithosphere system how

  8. Slow-downs and speed-ups of India-Eurasia convergence since ˜20Ma: Data-noise, uncertainties and dynamic implications

    NASA Astrophysics Data System (ADS)

    Iaffaldano, Giampiero; Bodin, Thomas; Sambridge, Malcolm

    2013-04-01

    India-Somalia and North America-Eurasia relative motions since Early Miocene (˜20Ma) have been recently reconstructed at unprecedented temporal resolution (<1Myr) from magnetic surveys of the Carlsberg and northern Mid-Atlantic Ridges. These new datasets revamped interest in the convergence of India relative to Eurasia, which is obtained from the India-Somalia-Nubia-North America-Eurasia plate circuit. Unless finite rotations are arbitrarily smoothed through time, however, the reconstructed kinematics (i.e. stage Euler vectors) appear to be surprisingly unusual over the past ˜20Myr. In fact, the Euler pole for the India-Eurasia rigid motion scattered erratically over a broad region, while the associated angular velocity underwent sudden increases and decreases. Consequently, convergence across the Himalayan front featured significant speed-ups as well as slow-downs with almost no consistent trend. Arguably, this pattern arises from the presence of data-noise, which biases kinematic reconstructions—particularly at high temporal resolution. The rapid and important India-Eurasia plate-motion changes reconstructed since Early Miocene are likely to be of apparent nature, because they cannot result even from the most optimistic estimates of torques associated, for instance, with the descent of the Indian slab into Earth's mantle. Our previous work aimed at reducing noise in finite-rotation datasets via an expanded Bayesian formulation, which offers several advantages over arbitrary smoothing methods. Here we build on this advance and revise the India-Eurasia kinematics since ˜20Ma, accounting also for three alternative histories of rifting in Africa. We find that India-Eurasia kinematics are simpler and, most importantly, geodynamically plausible upon noise reduction. Convergence across the Himalayan front overall decreased until ˜10Ma, but then systematically increased, albeit moderately, towards the present-day. We test with global dynamic models of the coupled
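
    The "stage Euler vectors" at issue are obtained by differencing successive finite rotations, which is exactly where data-noise enters so strongly. A sketch of that bookkeeping under one common sign convention (the poles and angles below are invented for illustration; this is not the paper's Bayesian noise-reduction step):

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def finite_rotation(lat_deg, lon_deg, angle_deg):
          lat, lon = np.radians([lat_deg, lon_deg])
          axis = np.array([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])
          return R.from_rotvec(np.radians(angle_deg) * axis)

      rot_10ma = finite_rotation(25.0, 20.0, 10.0)   # total rotation to 10 Ma
      rot_20ma = finite_rotation(24.0, 21.0, 21.0)   # total rotation to 20 Ma

      stage = rot_20ma * rot_10ma.inv()              # motion from 20 to 10 Ma
      angle = np.degrees(np.linalg.norm(stage.as_rotvec()))
      print(f"stage rotation: {angle:.2f} deg over 10 Myr "
            f"= {angle / 10.0:.3f} deg/Myr")
      # small errors in either finite rotation map directly into the stage
      # vector, which is why high-resolution stage poles scatter so easily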

  9. Information slows down hierarchy growth

    NASA Astrophysics Data System (ADS)

    Czaplicka, Agnieszka; Suchecki, Krzysztof; Miñano, Borja; Trias, Miquel; Hołyst, Janusz A.

    2014-06-01

    We consider models of growing multilevel systems wherein the growth process is driven by rules of tournament selection. A system can be conceived as an evolving tree with a new node being attached to a contestant node at the best hierarchy level (a level nearest to the tree root). The proposed evolution reflects the limited information on system properties available to new nodes. It can also be expressed in terms of population dynamics. Two models are considered: a constant tournament (CT) model, wherein the number of tournament participants is constant throughout system evolution, and a proportional tournament (PT) model, where this number increases proportionally to the growing size of the system itself. The results of analytical calculations based on a rate equation agree well with numerical simulations for both models. In the CT model all hierarchy levels emerge, but the birth time of each consecutive hierarchy level increases exponentially or faster. The number of nodes at the first hierarchy level grows logarithmically in time, while the size of the last, "worst" hierarchy level oscillates quasi-log-periodically. In the PT model, the occupations of the first two hierarchy levels increase linearly, but worse hierarchy levels either do not emerge at all or appear only by chance in the early stage of system evolution and then stop growing altogether. The results allow us to conclude that information available to each new node in tournament dynamics restrains the emergence of new hierarchy levels and that it is the absolute amount of information, not the relative amount, that governs such behavior.
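
    The constant-tournament rule is simple enough to simulate directly: each new node samples k existing nodes and attaches to the one nearest the root. A minimal sketch (parameter names and values are illustrative):

      import random
      from collections import Counter

      def grow_ct_tree(n_nodes, k=3, seed=0):
          rng = random.Random(seed)
          level = [0]                                  # node 0: root, level 0
          for _ in range(1, n_nodes):
              contestants = [rng.randrange(len(level)) for _ in range(k)]
              parent = min(contestants, key=lambda i: level[i])
              level.append(level[parent] + 1)          # attach below the winner
          return level

      levels = grow_ct_tree(100_000, k=3)
      print(sorted(Counter(levels).items())[:5])       # occupation per level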

  11. Experimental assessment of the performance of a proposed lead slowing-down spectrometer at WNR/PSR (Weapons Neutron Research/Proton Storage Ring)

    SciTech Connect

    Moore, M.S.; Koehler, P.E.; Michaudon, A.; Schelberg, A.; Danon, Y.; Block, R.C.; Slovacek, R.E.; Hoff, R.W.; Lougheed, R.W.

    1990-01-01

    In November 1989, we carried out a measurement of the fission cross sections of 247Cm, 250Cf, and 254Es on the Rensselaer Intense Neutron Source (RINS) at Rensselaer Polytechnic Institute (RPI). In July 1990, we carried out a second measurement, using the same fission chamber and electronics, in beam geometry at the Los Alamos Neutron Scattering Center (LANSCE) facility. Using the relative count rates observed in the two experiments, and the flux-enhancement factors determined by the RPI group for a lead slowing-down spectrometer compared to beam geometry, we can assess the performance of a spectrometer similar to RINS driven by the Proton Storage Ring (PSR) at the Los Alamos National Laboratory. With such a spectrometer, we find that it is feasible to make measurements with samples of 1 ng for fission and 1 μg for capture, and of isotopes with half-lives of tens of minutes. It is important to note that, while a significant amount of information can be obtained from the low-resolution RINS measurement, a definitive determination of average properties, including the level density, requires that the resonance structure be resolved. 12 refs., 5 figs., 3 tabs.

  12. Decline of deep and bottom water ventilation and slowing down of anthropogenic carbon storage in the Weddell Sea, 1984-2011

    NASA Astrophysics Data System (ADS)

    Huhn, Oliver; Rhein, Monika; Hoppema, Mario; van Heuven, Steven

    2013-06-01

    We use a 27-year-long time series of repeated transient tracer observations to investigate the evolution of the ventilation time scales and the related content of anthropogenic carbon (Cant) in deep and bottom water in the Weddell Sea. This time series consists of chlorofluorocarbon (CFC) observations from 1984 to 2008, together with the first combined CFC and sulphur hexafluoride (SF6) measurements from 2010/2011, along the Prime Meridian in the Antarctic Ocean and across the Weddell Sea. Applying the Transit Time Distribution (TTD) method, we find that all deep water masses in the Weddell Sea have been continually growing older and getting less ventilated during the last 27 years. The decline of the ventilation rate of Weddell Sea Bottom Water (WSBW) and Weddell Sea Deep Water (WSDW) along the Prime Meridian is of the order of 15-21%; the Warm Deep Water (WDW) ventilation rate declined much faster, by 33%. About 88-94% of the age increase in WSBW near its source regions (1.8-2.4 years per year) is explained by the age increase of WDW (4.5 years per year). As a consequence of the aging, the Cant increase in the deep and bottom water formed in the Weddell Sea slowed down by 14-21% over the period of observations.
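
    The TTD method used here represents an interior tracer concentration as the surface history convolved with a transit-time distribution, commonly the inverse-Gaussian form with mean age Γ and width Δ. A sketch with a synthetic surface history and assumed (Γ, Δ) values (all numbers illustrative):

      import numpy as np

      def inverse_gaussian_ttd(t, mean_age, width):
          """1-D advective-diffusive transit-time distribution."""
          t = np.asarray(t, dtype=float)
          g = np.zeros_like(t)
          p = t > 0
          g[p] = np.sqrt(mean_age**3 / (4 * np.pi * width**2 * t[p]**3)) * \
                 np.exp(-mean_age * (t[p] - mean_age)**2 / (4 * width**2 * t[p]))
          return g

      ages = np.arange(0.0, 500.0)                 # transit times, years
      # toy surface saturation history rising over ~60 years to the present
      surface = np.clip(1.0 - ages / 60.0, 0.0, 1.0)
      ttd = inverse_gaussian_ttd(ages, mean_age=60.0, width=60.0)
      interior = np.sum(surface * ttd)             # discrete convolution
      print(f"interior/surface concentration ratio: {interior:.3f}")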

  13. Fission fragment mass and energy distributions as a function of incident neutron energy measured in a lead slowing-down spectrometer

    SciTech Connect

    Romano, C.; Danon, Y.; Block, R.; Thompson, J.; Blain, E.; Bond, E.

    2010-01-15

    A new method of measuring fission fragment mass and energy distributions as a function of incident neutron energy in the range from below 0.1 eV to 1 keV has been developed. The method involves placing a double-sided Frisch-gridded fission chamber in Rensselaer Polytechnic Institute's lead slowing-down spectrometer (LSDS). The high neutron flux of the LSDS allows for the measurement of the energy-dependent, neutron-induced fission cross sections simultaneously with the mass and kinetic energy of the fission fragments of various small samples. The samples may be isotopes that are not available in large quantities (submicrograms) or with small fission cross sections (microbarns). The fission chamber consists of two anodes shielded by Frisch grids on either side of a single cathode. The sample is located in the center of the cathode and is made by depositing small amounts of actinides on very thin films. The chamber was successfully tested and calibrated using 0.41 ± 0.04 ng of 252Cf, and the resulting mass distributions were compared to those of previous work. As a proof of concept, the chamber was placed in the LSDS to measure the neutron-induced fission cross section and fragment mass and energy distributions of 25.3 ± 0.5 μg of 235U. Changes in the mass distributions as a function of incident neutron energy are evident and are examined using the multimodal fission mode model.

  14. Some new results on electron transport in the atmosphere. [Monte Carlo calculation of penetration, diffusion, and slowing down of electron beams in air

    NASA Technical Reports Server (NTRS)

    Berger, M. J.; Seltzer, S. M.; Maeda, K.

    1972-01-01

    The penetration, diffusion and slowing down of electrons in a semi-infinite air medium have been studied by the Monte Carlo method. The results are applicable to the atmosphere at altitudes up to 300 km. Most of the results pertain to monoenergetic electron beams injected into the atmosphere at a height of 300 km, either vertically downwards or with a pitch-angle distribution isotropic over the downward hemisphere. Some results were also obtained for various initial pitch angles between 0 deg and 90 deg. Information has been generated concerning the following topics: (1) the backscattering of electrons from the atmosphere, expressed in terms of backscattering coefficients, angular distributions and energy spectra of reflected electrons, for incident energies T(o) between 2 keV and 2 MeV; (2) energy deposition by electrons as a function of the altitude, down to 80 km, for T(o) between 2 keV and 2 MeV; (3) the corresponding energy deposition by electron-produced bremsstrahlung, down to 30 km; (4) the evolution of the electron flux spectrum as a function of the atmospheric depth, for T(o) between 2 keV and 20 keV. Energy deposition results are also given for incident electron beams with exponential and power-exponential spectra.

  15. Measurement of Neutron-Induced Fission Cross Sections of 229Th and 231Pa Using Linac-Driven Lead Slowing-Down Spectrometer

    SciTech Connect

    Kobayashi, Katsuhei; Yamamoto, Shuji; Lee, Samyol; Cho, Hyun-Je; Yamana, Hajimu; Moriyama, Hirotake; Fujita, Yoshiaki; Mitsugashira, Toshiaki

    2001-11-15

    Use is made of a back-to-back type of double fission chamber and an electron linear accelerator-driven lead slowing-down spectrometer to measure the neutron-induced fission cross sections of 229Th and 231Pa below 10 keV relative to that of 235U. A measurement relative to the 10B(n,α) reaction is also made using a BF3 counter at energies below 1 keV and normalized to the absolute value obtained by using the cross section of the 235U(n,f) reaction between 200 eV and 1 keV. The experimental data for the 229Th(n,f) reaction measured by Konakhovich et al. show higher cross-section values, especially at energies of 0.1 to 0.4 eV. The data by Gokhberg et al. seem to be lower than the current measurement above 6 keV. Although the evaluated data in JENDL-3.2 are in general agreement with the measurement, the evaluation is higher from 0.25 to 5 eV and lower above 10 eV. The ENDF/B-VI data evaluated above 10 eV are also lower. The current thermal-neutron-induced fission cross section at 0.0253 eV is 32.4 ± 10.7 b, which is in good agreement with the results of Gindler et al., Mughabghab, and JENDL-3.2. The mean value of the 231Pa(n,f) cross sections between 0.37 and 0.52 eV, as measured by Leonard and Odegaarden, is close to the current measurement. The evaluated data in ENDF/B-VI are lower below 0.15 eV and higher above ~30 eV. The ENDF/B-VI and JEF-2.2 evaluations are much higher above 1 keV. The JENDL-3.2 data are in general agreement with the measurement, although they are lower above ~100 eV.

  16. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
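
    The core of the approximation, as described, is learning in a sampled subspace so that the full n x n kernel matrix is never materialized. A generic Nyström-style sketch of that idea (the paper's exact AKCL construction may differ):

      import numpy as np

      def rbf(X, Y, gamma=0.5):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 10))
      m = 100                                        # landmark points << n
      idx = rng.choice(len(X), size=m, replace=False)

      K_nm = rbf(X, X[idx])                          # n x m sampled columns
      lam, U = np.linalg.eigh(rbf(X[idx], X[idx]))   # m x m core block
      keep = lam > 1e-10
      Z = K_nm @ U[:, keep] / np.sqrt(lam[keep])     # Z @ Z.T ~ full kernel K
      print(Z.shape)  # competitive learning can now run on Z directly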

  18. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  19. A Spectrum Tree Kernel

    NASA Astrophysics Data System (ADS)

    Kuboyama, Tetsuji; Hirata, Kouichi; Kashima, Hisashi; Aoki-Kinoshita, Kiyoko F.; Yasuda, Hiroshi

    Learning from tree-structured data has received increasing interest with the rapid growth of tree-encodable data in the World Wide Web, in biology, and in other areas. In this paper we propose a spectrum tree kernel for such data. Our kernel function measures the similarity between two trees by counting the number of shared sub-patterns called tree q-grams, and runs, in effect, in linear time with respect to the number of tree nodes. We apply our kernel function with a support vector machine (SVM) to classify biological data, the glycans of several blood components. The experimental results show that our kernel function performs as well as one exclusively tailored to glycan properties.

  20. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  1. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  2. Linearized Kernel Dictionary Learning

    NASA Astrophysics Data System (ADS)

    Golts, Alona; Elad, Michael

    2016-06-01

    In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart, K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns via the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning algorithm can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration, and its performance-boosting properties.
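
    The two ideas in this abstract, Nyström column sampling followed by an eigendecomposition that yields explicit "virtual samples," fit in a few lines. The sketch below is an assumption-laden paraphrase, not the authors' code; `kernel` stands for any function returning pairwise kernel values.

```python
import numpy as np

def nystroem_virtual_samples(X, kernel, m=100, seed=0, eps=1e-10):
    """Return virtual samples V (n x r) with V @ V.T approximating the kernel matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    K_nm = kernel(X, X[idx])            # n x m: only m sampled columns of K
    K_mm = K_nm[idx]                    # m x m block on the sampled points
    w, U = np.linalg.eigh(K_mm)
    keep = w > eps                      # drop numerically null directions
    return (K_nm @ U[:, keep]) / np.sqrt(w[keep])
```

    Any off-the-shelf linear dictionary learner can then be trained on the rows of V as if they were ordinary samples, which is the "linearization" the abstract describes.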

  3. Critical slowing down mechanism and reentrant dipole glass phenomena in (1-x)BaTiO3-xBiScO3 (0.1⩽x⩽0.4): The high energy density dielectrics

    NASA Astrophysics Data System (ADS)

    Bharadwaja, S. S. N.; Kim, J. R.; Ogihara, H.; Cross, L. E.; Trolier-McKinstry, S.; Randall, C. A.

    2011-01-01

    The dielectric and ferroelectric switching properties of high-temperature, high-energy-density (1-x)BaTiO3-xBiScO3 (0.1⩽x⩽0.4) dielectrics were investigated over a broad temperature range. It was found that these ceramics possess dipole glass features such as critical slowing down of the dielectric relaxation, polarization hysteresis aging, rejuvenation, and holelike memory below the dipole glass transition temperature (TDG). The dielectric relaxation behavior is consistent with a three-dimensional Ising model with a critical slowing-down exponent zν = 10 ± 1 and composition-dependent glass transition temperatures. At lower temperatures, (1-x)BaTiO3-xBiScO3 ceramics transform into a reentrant dipole glass state owing to the breakup of local polar ordering. A phase diagram is developed marking the paraelectric, ferroelectric, and dipole glass regimes as a function of composition, including the reentrant features.

  4. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in the examination of process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  5. Calculates Thermal Neutron Scattering Kernel.

    1989-11-10

    Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.

  6. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  7. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  8. Can cooperation slow down emergency evacuations?

    NASA Astrophysics Data System (ADS)

    Cirillo, Emilio N. M.; Muntean, Adrian

    2012-09-01

    We study the motion of pedestrians through obscure corridors where the lack of visibility hides the precise position of the exits. Using a lattice model, we explore the effects of cooperation on the overall exit flux (evacuation rate). More precisely, we study the effect of the buddying threshold (of no exclusion per site) on the dynamics of the crowd. In some cases, we note that if the evacuees tend to cooperate and act altruistically, then their collective action tends to favor the occurrence of disasters.

  9. [Demography: can growth be slowed down?].

    PubMed

    1990-01-01

    The UN Fund for Population Activities report on the status of world population in 1990 is particularly unsettling because it indicates that fertility is not declining as rapidly as had been predicted. The world population of some 5.3 billion is growing by 90-100 million per year. 6 years ago the growth rate appeared to be declining everywhere except in Africa and some regions of South Asia. Hopes that the world population would stabilize at around 10.2 billion by the end of the 21st century now appear unrealistic. Some countries such as the Philippines, India, and Morocco which had some success in slowing growth in the 1960s and 70s have seen a significant deceleration in the decline. Growth rates in several African countries are already 2.7% per year and increasing. It is projected that Africa's population will reach 1.581 billion by 2025. Already there are severe shortages of arable land in some overwhelmingly agricultural countries like Rwanda and Burundi, and malnutrition is widespread on the continent. Between 1979-81 and 1986-87, cereal production declined in 25 African countries out of 43 for which the Food and Agriculture Organization has data. The urban population of developing countries is increasing at 3.6%/year. It grew from 285 million in 1950 to 1.384 billion today and is projected at 4.050 billion in 2050. Provision of water, electricity, and sanitary services will be very difficult. From 1970-88 the number of urban households without potable water increased from 138 million to 215 million. It is not merely the quality of life that is menaced by constant population growth, but also the very future of the earth as a habitat, because of the degradation of soils and forests and resulting global warming. 6-7 million hectares of agricultural land are believed to be lost to erosion each year. Deforestation is a principal cause of soil erosion. Each year more than 11 million hectares of tropical forest and forested zones are stripped, in addition to some 4.4 million hectares selectively harvested for lumber. Deforestation contributes to global warming and to deterioration of the ozone layer. Consequences of global warming by the middle of the next century may include desertification of entire countries, raising of the level of the oceans, and submersion of certain countries. To avert demographic and ecologic disaster, the geographic and financial access of women in developing countries to contraception should be improved, and some neglected groups such as adolescents should be brought into family planning programs. The condition of women must be improved so that they have access to a source of status other than motherhood.

  10. Sudden slowing down of charge carrier dynamics at the Mott metal-insulator transition in kappa-(D{sub 8}-BEDT-TTF){sub 2}Cu[N(CN){sub 2}]Br.

    SciTech Connect

    Brandenburg, J.; Muller, J.; Schlueter, J. A.

    2012-02-01

    We investigate the dynamics of correlated charge carriers in the vicinity of the Mott metal-insulator (MI) transition in the quasi-two-dimensional organic charge-transfer salt {kappa}-(D{sub 8}-BEDT-TTF){sub 2}Cu[N(CN){sub 2}]Br by means of fluctuation (noise) spectroscopy. The observed 1/f-type fluctuations are quantitatively very well described by a phenomenological model based on the concept of non-exponential kinetics. The main result is a correlation-induced enhancement of the fluctuations accompanied by a substantial shift of spectral weight to low frequencies in the vicinity of the Mott critical endpoint. This sudden slowing down of the electron dynamics, observed here in a pure Mott system, may be a universal feature of MI transitions. Our findings are compatible with an electronic phase separation in the critical region of the phase diagram and offer an explanation for the not yet understood absence of effective mass enhancement when crossing the Mott transition.

  11. MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje

    2016-04-01

    We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximate methods (Dahlen et al. 2000); fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the use of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows one to control the desired precision of the kernel estimation. It also means that the code concentrates calculation effort on regions of interest without prior assumptions about the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency-pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
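
    The "Monte-Carlo integration onto each basis function" step reduces, for a single inversion cell, to averaging pointwise kernel values at random points and tracking the standard error, which is what lets such a code trade effort for precision. A schematic sketch under assumed interfaces (`kernel_fn` evaluates the kernel at a point, `sample_point` draws a uniform point in the cell; both names are hypothetical, not MC Kernel's API):

```python
import numpy as np

def mc_cell_integral(kernel_fn, sample_point, volume, n=1000, seed=0):
    """Monte-Carlo estimate of a sensitivity kernel integrated over one cell."""
    rng = np.random.default_rng(seed)
    vals = np.array([kernel_fn(sample_point(rng)) for _ in range(n)])
    est = volume * vals.mean()                     # projected kernel value
    err = volume * vals.std(ddof=1) / np.sqrt(n)   # the precision control handle
    return est, err
```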

  12. Twin kernel embedding.

    PubMed

    Guo, Yi; Gao, Junbin; Kwan, Paul W

    2008-08-01

    In most existing dimensionality reduction algorithms, the main objective is to preserve relational structure among objects of the input space in a low dimensional embedding space. This is achieved by minimizing the inconsistency between two similarity/dissimilarity measures, one for the input data and the other for the embedded data, via a separate matching objective function. Based on this idea, a new dimensionality reduction method called Twin Kernel Embedding (TKE) is proposed. TKE addresses the problem of visualizing non-vectorial data that is difficult for conventional methods in practice due to the lack of efficient vectorial representation. TKE solves this problem by minimizing the inconsistency between the similarity measures captured respectively by their kernel Gram matrices in the two spaces. In the implementation, by optimizing a nonlinear objective function using the gradient descent algorithm, a local minimum can be reached. The results obtained include both the optimal similarity preserving embedding and the appropriate values for the hyperparameters of the kernel. Experimental evaluation on real non-vectorial datasets confirmed the effectiveness of TKE. TKE can be applied to other types of data beyond those mentioned in this paper whenever suitable measures of similarity/dissimilarity can be defined on the input data. PMID:18566501

  13. Kernel Phase and Kernel Amplitude in Fizeau Imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-09-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.

  14. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  15. Flexible kernel memory.

    PubMed

    Nowicki, Dimitri; Siegelmann, Hava

    2010-06-11

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is, on one hand, a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces.

  16. Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.

    PubMed

    Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash

    2015-12-01

    In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
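
    For the SPD manifold, one metric under which such a Gaussian kernel can be made positive definite is the log-Euclidean one, where matrices are compared through their matrix logarithms. A minimal sketch, assuming symmetric positive definite inputs; this illustrates the construction rather than reproducing the paper's implementation:

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive definite matrix via eigh."""
    w, U = np.linalg.eigh(S)
    return (U * np.log(w)) @ U.T    # U diag(log w) U^T

def log_euclidean_rbf(S1, S2, sigma=1.0):
    """Gaussian RBF between SPD matrices under the log-Euclidean metric."""
    d2 = np.linalg.norm(spd_log(S1) - spd_log(S2), 'fro') ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```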

  17. Learning With Jensen-Tsallis Kernels.

    PubMed

    Ghoshdastidar, Debarghya; Adsul, Ajay P; Dukkipati, Ambedkar

    2016-10-01

    Jensen-type [Jensen-Shannon (JS) and Jensen-Tsallis] kernels were first proposed by Martins et al. (2009). These kernels are based on JS divergences that originated in information theory. In this paper, we extend the Jensen-type kernels on probability measures to define positive-definite kernels on Euclidean space. We show that the special cases of these kernels include dot-product kernels. Since Jensen-type divergences are multidistribution divergences, we propose their multipoint variants, and study spectral clustering and kernel methods based on these. We also provide experimental studies on a benchmark image database and a gene expression database that show the benefits of the proposed kernels compared with the existing kernels. The experiments on clustering also demonstrate the use of constructing multipoint similarities.

  18. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All of the medical device's digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which the uC/OS-II RTOS can be embedded. The decision to use the kernel is based on its benefits, its license for educational use, and its intrinsic time control and peripheral management. The feasibility of its use on the electrocardiograph is evaluated against the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the firmware was migrated from cyclic code to a structure based on separate processes, or tasks, able to synchronize on events, resulting in an electrocardiograph running on a single Central Processing Unit (CPU) under the RTOS.

  19. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
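
    As a point of reference, the classical fixed-bandwidth Gaussian kernel density estimate is the simplest member of the family this method generalizes; the paper's feature-space mixture-of-Gaussians EM is more elaborate. A baseline sketch only, not the authors' procedure:

```python
import numpy as np

def gaussian_kde(x_query, data, h):
    """Classical 1-D Gaussian kernel density estimate at the query points."""
    x_query, data = np.asarray(x_query, float), np.asarray(data, float)
    z = (x_query[:, None] - data[None, :]) / h          # scaled distances
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))
```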

  20. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  1. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for the restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system's Optical Transfer Function and the Fourier transformations of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.

  2. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  3. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  4. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  5. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  7. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  8. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels, the Laplace kernel Wigner function. Whereas the momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits properties in its marginals similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons.
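
    Read schematically, the construction swaps the Fourier kernel of the usual Wigner function for a Laplace kernel with a complex variable s. The display below is that schematic reading, stated as an assumption; the paper's exact definition may differ in normalization and domain:

```latex
W(x,k)   = \int f\!\left(x+\tfrac{x'}{2}\right) f^{*}\!\left(x-\tfrac{x'}{2}\right) e^{-\mathrm{i}kx'}\,\mathrm{d}x'
\quad\longrightarrow\quad
W_L(x,s) = \int f\!\left(x+\tfrac{x'}{2}\right) f^{*}\!\left(x-\tfrac{x'}{2}\right) e^{-sx'}\,\mathrm{d}x'
```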

  9. Kernel Near Principal Component Analysis

    SciTech Connect

    MARTIN, SHAWN B.

    2002-07-01

    We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.

  10. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named as the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach. PMID:24805227
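
    The construction in this abstract is short enough to state directly: eigendecompose the kernel matrix and scale the eigenvectors, so each sample gets explicit coordinates whose dot products reproduce the kernel. A minimal sketch of that step, assuming a symmetric positive semidefinite K:

```python
import numpy as np

def nonlinear_projection(K, eps=1e-10):
    """Explicit coordinates Y (n x r) with Y @ Y.T reproducing the kernel matrix K."""
    w, U = np.linalg.eigh(K)        # K symmetric PSD by assumption
    keep = w > eps                  # effective dimensionality is < n in practice
    return U[:, keep] * np.sqrt(w[keep])
```

    Algorithms that never form dot products, such as the L1-norm PCA mentioned in the abstract, can then run directly on the rows of Y.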

  12. Stem kernels for RNA sequence analyses.

    PubMed

    Sakakibara, Yasubumi; Popendorf, Kris; Ogawa, Nana; Asai, Kiyoshi; Sato, Kengo

    2007-10-01

    Several computational methods based on stochastic context-free grammars have been developed for modeling and analyzing functional RNA sequences. These grammatical methods have succeeded in modeling typical secondary structures of RNA, and are used for structural alignment of RNA sequences. However, such stochastic models cannot sufficiently discriminate member sequences of an RNA family from nonmembers and hence detect noncoding RNA regions from genome sequences. A novel kernel function, the stem kernel, for the discrimination and detection of functional RNA sequences using support vector machines (SVMs) is proposed. The stem kernel is a natural extension of the string kernel, specifically the all-subsequences kernel, and is tailored to measure the similarity of two RNA sequences from the viewpoint of secondary structures. The stem kernel examines all possible common base pairs and stem structures of arbitrary lengths, including pseudoknots between two RNA sequences, and calculates the inner product of common stem structure counts. An efficient algorithm is developed to calculate the stem kernels based on dynamic programming. The stem kernels are then applied to discriminate members of an RNA family from nonmembers using SVMs. The study indicates that the discrimination ability of the stem kernel is strong compared with conventional methods. Furthermore, the potential application of the stem kernel is demonstrated by the detection of remotely homologous RNA families in terms of secondary structures. This is because the string kernel has been proven to work for the remote homology detection of protein sequences. These experimental results have convinced us to apply the stem kernel in order to find novel RNA families from genome sequences. PMID:17933013

  13. Predicting Protein Function Using Multiple Kernels.

    PubMed

    Yu, Guoxian; Rangwala, Huzefa; Domeniconi, Carlotta; Zhang, Guoji; Zhang, Zili

    2015-01-01

    High-throughput experimental techniques provide a wide variety of heterogeneous proteomic data sources. To exploit the information spread across multiple sources for protein function prediction, these data sources are transformed into kernels and then integrated into a composite kernel. Several methods first optimize the weights on these kernels to produce a composite kernel, and then train a classifier on the composite kernel. As such, these approaches result in an optimal composite kernel, but not necessarily in an optimal classifier. On the other hand, some approaches optimize the loss of binary classifiers and learn weights for the different kernels iteratively. For multi-class or multi-label data, these methods have to solve the problem of optimizing the weights on these kernels for each of the labels, which is computationally expensive and ignores the correlation among labels. In this paper, we propose a method called Predicting Protein Function using Multiple Kernels (ProMK). ProMK iteratively alternates between learning optimal kernel weights and reducing the empirical loss of the multi-label classifier for each of the labels simultaneously. ProMK can integrate kernels selectively and downgrade the weights on noisy kernels. We investigate the performance of ProMK on several publicly available protein function prediction benchmarks and synthetic datasets. We show that the proposed approach performs better than previously proposed protein function prediction approaches that integrate multiple data sources and multi-label multiple kernel learning methods. The code for our proposed method is available at https://sites.google.com/site/guoxian85/promk.

  14. Kernel earth mover's distance for EEG classification.

    PubMed

    Daliri, Mohammad Reza

    2013-07-01

    Here, we propose a new kernel approach based on the earth mover's distance (EMD) for electroencephalography (EEG) signal classification. The EEG time series are first transformed into histograms in this approach. The distance between these histograms is then computed using the EMD in a pair-wise manner. We bring the distances into a kernel form called kernel EMD. The support vector classifier can then be used for the classification of EEG signals. The experimental results on the real EEG data show that the new kernel method is very effective, and can classify the data with higher accuracy than traditional methods.
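
    For one-dimensional histograms the recipe is direct: compute pairwise EMD (the 1-D Wasserstein distance), exponentiate it into a Gram matrix, and hand that to a precomputed-kernel SVM. A sketch with illustrative parameters, not the paper's code; note that an exponentiated EMD is not guaranteed positive definite in general:

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.svm import SVC

def emd_kernel_gram(hists, bin_centers, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * EMD(h_i, h_j)) over 1-D histograms."""
    n = len(hists)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            d = wasserstein_distance(bin_centers, bin_centers, hists[i], hists[j])
            K[i, j] = K[j, i] = np.exp(-gamma * d)
    return K

# Usage: clf = SVC(kernel='precomputed').fit(emd_kernel_gram(train_hists, centers), y)
```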

  15. Molecular Hydrodynamics from Memory Kernels.

    PubMed

    Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin

    2016-04-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730
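
    The memory kernel in question is the one appearing in the generalized Langevin equation for the tagged particle's velocity; the algebraic tail quoted in the abstract attaches to it as below. This is the standard textbook form, shown for orientation rather than quoted from the paper:

```latex
m\,\dot v(t) = -\int_0^{t} K(t-s)\, v(s)\,\mathrm{d}s + R(t),
\qquad K(t) \sim t^{-3/2} \quad (t \to \infty)
```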

  16. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
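
    Silverman's rule of thumb, the starting point the abstract names, picks a Gaussian-kernel bandwidth from the sample size and a robust spread estimate. The sketch below implements the classical rule itself, not the authors' adaptation of it to kernel equating:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    spread = min(x.std(ddof=1), iqr / 1.34)
    return 0.9 * spread * x.size ** (-0.2)
```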

  17. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  18. Bayesian Kernel Mixtures for Counts

    PubMed Central

    Canale, Antonio; Dunson, David B.

    2011-01-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437

  20. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  1. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  2. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
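
    The label information enters through what is often called the ideal kernel, which is 1 exactly when two points share a label. The sketch below builds that matrix and blends it additively into a base kernel; the additive form is an illustrative guess at the simplest "linear function of the kernel matrix," not the paper's exact regularizer:

```python
import numpy as np

def ideal_kernel(labels):
    """Ideal kernel: K[i, j] = 1 if points i and j share a label, else 0."""
    y = np.asarray(labels)
    return (y[:, None] == y[None, :]).astype(float)

def ideally_regularized_kernel(K, labels, lam=0.5):
    """Illustrative blend of a base kernel with the label-derived ideal kernel."""
    return K + lam * ideal_kernel(labels)
```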

  3. Slowing Down: Age-Related Neurobiological Predictors of Processing Speed

    PubMed Central

    Eckert, Mark A.

    2011-01-01

    Processing speed, or the rate at which tasks can be performed, is a robust predictor of age-related cognitive decline and an indicator of independence among older adults. This review examines evidence for neurobiological predictors of age-related changes in processing speed, which is guided in part by our source based morphometry findings that unique patterns of frontal and cerebellar gray matter predict age-related variation in processing speed. These results, together with the extant literature on morphological predictors of age-related changes in processing speed, suggest that specific neural systems undergo declines and as a result slow processing speed. Future studies of processing speed – dependent neural systems will be important for identifying the etiologies for processing speed change and the development of interventions that mitigate gradual age-related declines in cognitive functioning and enhance healthy cognitive aging. PMID:21441995

  4. Amyloid Beta Peptide Slows Down Sensory-Induced Hippocampal Oscillations

    PubMed Central

    Peña-Ortega, Fernando; Bernal-Pedraza, Ramón

    2012-01-01

    Alzheimer's disease (AD) progresses with a deterioration of hippocampal function that is likely induced by amyloid beta (Aβ) oligomers. Hippocampal function is strongly dependent on theta rhythm, and disruptions in this rhythm have been related to the reduction of cognitive performance in AD. Accordingly, both AD patients and AD-transgenic mice show an increase in theta rhythm at rest but a reduction in cognitive-induced theta rhythm. We have previously found that monomers of the short sequence of Aβ (peptide 25–35) reduce sensory-induced theta oscillations. However, considering on the one hand that different Aβ sequences differentially affect hippocampal oscillations and on the other hand that Aβ oligomers seem to be responsible for the cognitive decline observed in AD, here we aimed to explore the effect of Aβ oligomers on sensory-induced theta rhythm. Our results show that intracisternal injection of Aβ1–42 oligomers, which has no significant effect on spontaneous hippocampal activity, disrupts the induction of theta rhythm upon sensory stimulation. Instead of increasing the power in the theta band, the hippocampus of Aβ-treated animals responds to sensory stimulation (tail pinch) with an increase in lower frequencies. These findings demonstrate that Aβ alters induced theta rhythm, providing an in vivo model to test for therapeutic approaches to overcome Aβ-induced hippocampal and cognitive dysfunctions. PMID:22611415

  5. Vitamin E slows down the progression of osteoarthritis

    PubMed Central

    LI, XI; DONG, ZHONGLI; ZHANG, FUHOU; DONG, JUNJIE; ZHANG, YUAN

    2016-01-01

    Osteoarthritis is a chronic degenerative joint disorder with the characteristics of articular cartilage destruction, subchondral bone alterations and synovitis. Clinical signs and symptoms of osteoarthritis include pain, stiffness, restricted motion and crepitus. It is the major cause of joint dysfunction in developed nations and has enormous social and economic consequences. Current treatments focus on symptomatic relief, however, they lack efficacy in controlling the progression of this disease, which is a leading cause of disability. Vitamin E is safe to use and may delay the progression of osteoarthritis by acting on several aspects of the disease. In this review, how vitamin E may promote the maintenance of skeletal muscle and the regulation of nucleic acid metabolism to delay osteoarthritis progression is explored. In addition, how vitamin E may maintain the function of sex organs and the stability of mast cells, thus conferring a greater resistance to the underlying disease process is also discussed. Finally, the protective effect of vitamin E on the subchondral vascular system, which decreases the reactive remodeling in osteoarthritis, is reviewed. PMID:27347011

  6. Does the Speed of Light Slow Down Over Time?

    ERIC Educational Resources Information Center

    Ebert, Ronald

    1997-01-01

    The speed of light is a fundamental characteristic of the universe. So many processes are related to and dependent upon it that, if creationist claims were true, the universe would be far different from the way it is now. The speed of light has never been shown to vary based on the direction from which it was measured. (PVD)

  7. Misplaced helix slows down ultrafast pressure-jump protein folding.

    PubMed

    Prigozhin, Maxim B; Liu, Yanxin; Wirth, Anna Jean; Kapoor, Shobhna; Winter, Roland; Schulten, Klaus; Gruebele, Martin

    2013-05-14

    Using a newly developed microsecond pressure-jump apparatus, we monitor the refolding kinetics of the helix-stabilized five-helix bundle protein λ*YA, the Y22W/Q33Y/G46,48A mutant of λ-repressor fragment 6-85, from 3 μs to 5 ms after a 1,200-bar P-drop. In addition to a microsecond phase, we observe a slower 1.4-ms phase during refolding to the native state. Unlike temperature denaturation, pressure denaturation produces a highly reversible helix-coil-rich state. This difference highlights the importance of the denatured initial condition in folding experiments and leads us to assign a compact nonnative helical trap as the reason for slower P-jump-induced refolding. To complement the experiments, we performed over 50 μs of all-atom molecular dynamics P-drop refolding simulations with four different force fields. Two of the force fields yield compact nonnative states with misplaced α-helix content within a few microseconds of the P-drop. Our overall conclusion from experiment and simulation is that the pressure-denatured state of λ*YA contains mainly residual helix and little β-sheet; following a fast P-drop, at least some λ*YA forms misplaced helical structure within microseconds. We hypothesize that nonnative helix at helix-turn interfaces traps the protein in compact nonnative conformations. These traps delay the folding of at least some of the population for 1.4 ms en route to the native state. Based on molecular dynamics, we predict specific mutations at the helix-turn interfaces that should speed up refolding from the pressure-denatured state, if this hypothesis is correct. PMID:23620522

  8. Sacrificial tamper slows down sample explosion in FLASH diffraction experiments.

    PubMed

    Hau-Riege, Stefan P; Boutet, Sébastien; Barty, Anton; Bajt, Sasa; Bogan, Michael J; Frank, Matthias; Andreasson, Jakob; Iwan, Bianca; Seibert, M Marvin; Hajdu, Janos; Sakdinawat, Anne; Schulz, Joachim; Treusch, Rolf; Chapman, Henry N

    2010-02-12

    Intense and ultrashort x-ray pulses from free-electron lasers open up the possibility for near-atomic resolution imaging without the need for crystallization. Such experiments require high photon fluences and pulses shorter than the time to destroy the sample. We describe results with a new femtosecond pump-probe diffraction technique employing coherent 0.1 keV x rays from the FLASH soft x-ray free-electron laser. We show that the lifetime of a nanostructured sample can be extended to several picoseconds by a tamper layer to dampen and quench the sample explosion, making <1 nm resolution imaging feasible. PMID:20366823

  9. Sacrificial Tamper Slows Down Sample Explosion in FLASH Diffraction Experiments

    NASA Astrophysics Data System (ADS)

    Hau-Riege, Stefan P.; Boutet, Sébastien; Barty, Anton; Bajt, Saša; Bogan, Michael J.; Frank, Matthias; Andreasson, Jakob; Iwan, Bianca; Seibert, M. Marvin; Hajdu, Janos; Sakdinawat, Anne; Schulz, Joachim; Treusch, Rolf; Chapman, Henry N.

    2010-02-01

    Intense and ultrashort x-ray pulses from free-electron lasers open up the possibility for near-atomic resolution imaging without the need for crystallization. Such experiments require high photon fluences and pulses shorter than the time to destroy the sample. We describe results with a new femtosecond pump-probe diffraction technique employing coherent 0.1 keV x rays from the FLASH soft x-ray free-electron laser. We show that the lifetime of a nanostructured sample can be extended to several picoseconds by a tamper layer to dampen and quench the sample explosion, making <1 nm resolution imaging feasible.

  10. Can Lionel Messi's brain slow down time passing?

    PubMed

    Jafari, Sajad; Smith, Leslie Samuel

    2016-01-01

    It seems that seeing others in slow motion is not something that belongs only to movie heroes. When Lionel Messi plays football, you can hardly see him do anything that other players cannot do. Then why is he really unstoppable? The answer may be that his opponents do not have enough time to do what they want, because in Messi's neural system time passes more slowly. In differential equations that model a single neuron, this speed-up can be generated by multiplying every equation by the same factor. Alternatively, interactions between neurons and the structure of neural networks may play this role. PMID:27010676
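
    The "same factor in every equation" remark is just a time rescaling: multiplying every right-hand side of a neuron model by a common constant replays identical dynamics on a faster internal clock. A sketch with the FitzHugh-Nagumo model, chosen here purely for illustration (the abstract does not specify a model):

```python
import numpy as np

def fhn_step(v, w, I=0.5, a=0.7, b=0.8, tau=12.5, rate=1.0):
    """FitzHugh-Nagumo derivatives; 'rate' rescales the neuron's internal clock."""
    dv = rate * (v - v ** 3 / 3.0 - w + I)
    dw = rate * ((v + a - b * w) / tau)
    return dv, dw

def simulate(rate=1.0, dt=0.01, steps=5000):
    v, w, trace = -1.0, 1.0, []
    for _ in range(steps):
        dv, dw = fhn_step(v, w, rate=rate)
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return np.array(trace)  # rate=2.0 traces the same orbit, twice as fast
```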

  11. Sacrificial tamper slows down sample explosion in FLASH diffraction experiments.

    PubMed

    Hau-Riege, Stefan P; Boutet, Sébastien; Barty, Anton; Bajt, Sasa; Bogan, Michael J; Frank, Matthias; Andreasson, Jakob; Iwan, Bianca; Seibert, M Marvin; Hajdu, Janos; Sakdinawat, Anne; Schulz, Joachim; Treusch, Rolf; Chapman, Henry N

    2010-02-12

    Intense and ultrashort x-ray pulses from free-electron lasers open up the possibility for near-atomic resolution imaging without the need for crystallization. Such experiments require high photon fluences and pulses shorter than the time to destroy the sample. We describe results with a new femtosecond pump-probe diffraction technique employing coherent 0.1 keV x rays from the FLASH soft x-ray free-electron laser. We show that the lifetime of a nanostructured sample can be extended to several picoseconds by a tamper layer to dampen and quench the sample explosion, making <1 nm resolution imaging feasible. PMID:20366823

  12. Kernel score statistic for dependent data.

    PubMed

    Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike

    2014-01-01

    The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing compared to tag single-nucleotide polymorphisms for very rare single-nucleotide polymorphisms with <1% minor allele frequency.
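
    In its simplest form, the statistic referred to here is a quadratic form of null-model residuals in a kernel built from the marker set; the abstract's generalization replaces the independence assumption behind those residuals with a familial covariance structure. A minimal sketch of the basic quadratic form only, not the authors' extended test:

```python
import numpy as np

def kernel_score_statistic(y, y_null, K):
    """Q = r' K r, with r the residuals from the covariates-only null model."""
    r = np.asarray(y, float) - np.asarray(y_null, float)
    return float(r @ K @ r)

# For a linear kernel over an n x p marker matrix G, K = G @ G.T is the usual choice.
```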

  13. Einstein Critical-Slowing-Down is Siegel CyberWar Denial-of-Access Queuing/Pinning/ Jamming/Aikido Via Siegel DIGIT-Physics BEC ``Intersection''-BECOME-UNION Barabasi Network/GRAPH-Physics BEC: Strutt/Rayleigh-Siegel Percolation GLOBALITY-to-LOCALITY Phase-Transition Critical-Phenomenon

    NASA Astrophysics Data System (ADS)

    Buick, Otto; Falcon, Pat; Alexander, G.; Siegel, Edward Carl-Ludwig

    2013-03-01

    Einstein[Dover(03)] critical-slowing-down(CSD)[Pais, Subtle in The Lord; Life & Sci. of Albert Einstein(81)] is Siegel CyberWar denial-of-access(DOA) operations-research queuing theory/pinning/jamming/.../Read [Aikido, Aikibojitsu & Natural-Law(90)]/Aikido(!!!) phase-transition critical-phenomenon via Siegel DIGIT-Physics (Newcomb[Am.J.Math. 4,39(1881)]-{Planck[(1901)]-Einstein[(1905)])-Poincare[Calcul Probabilités(12)-p.313]-Weyl [Goett.Nachr.(14); Math.Ann.77,313 (16)]-{Bose[(24)-Einstein[(25)]-Fermi[(27)]-Dirac[(1927)]}-``Benford''[Proc.Am.Phil.Soc. 78,4,551 (38)]-Kac[Maths.Stat.-Reasoning(55)]-Raimi[Sci.Am. 221,109 (69)...]-Jech[preprint, PSU(95)]-Hill[Proc.AMS 123,3,887(95)]-Browne[NYT(8/98)]-Antonoff-Smith-Siegel[AMS Joint-Mtg.,S.-D.(02)] algebraic-inversion to yield ONLY BOSE-EINSTEIN QUANTUM-statistics (BEQS) with ZERO-digit Bose-Einstein CONDENSATION(BEC) ``INTERSECTION''-BECOME-UNION to Barabasi[PRL 876,5632(01); Rev.Mod.Phys.74,47(02)...] Network /Net/GRAPH(!!!)-physics BEC: Strutt/Rayleigh(1881)-Polya(21)-``Anderson''(58)-Siegel[J.Non-crystalline-Sol.40,453(80)

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2014-01-01 2014-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  15. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2012-01-01 2012-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  16. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  17. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2011-01-01 2011-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  18. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...

  19. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  20. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  1. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...

  2. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  3. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  4. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  5. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a...

  6. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  7. KITTEN Lightweight Kernel 0.1 Beta

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general-purpose OS kernels.

  8. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design. PMID:22857535
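
    In its commonly cited double-kernel form, the KEM total energy of a molecule partitioned into n kernels is, schematically (conventions vary across the KEM literature, so this is a sketch rather than the authors' exact expression):

      E_{\mathrm{total}} \;\approx\; \sum_{a=1}^{n-1}\sum_{b=a+1}^{n} E_{ab} \;-\; (n-2)\sum_{a=1}^{n} E_{a}

    where E_a is the quantum-mechanically computed energy of single kernel a and E_ab that of the double kernel formed from kernels a and b; the subtraction removes the single-kernel energies counted multiple times by the double-kernel sum.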

  9. Variational Dirichlet Blur Kernel Estimation.

    PubMed

    Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K

    2015-12-01

    Blind image deconvolution involves two key objectives: 1) latent image and 2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegative constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is very competitive with state-of-the-art blind image restoration methods. PMID:26390458

  10. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in CN, φ, ψ two positive functions on Ω such that - log ψ, - log φ are plurisubharmonic, and z∈Ω a point at which - log φ is smooth and strictly plurisubharmonic. We show that as k-->∞, the Bergman kernels with respect to the weights φ^kψ have an asymptotic expansion for x,y near z, where φ̃(x,y) is an almost-analytic extension of φ(x) (so that φ̃(x,x)=φ(x)) and similarly for ψ. If in addition Ω is of finite type, φ,ψ behave reasonably at the boundary, and - log φ, - log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to the Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  11. TICK: Transparent Incremental Checkpointing at Kernel Level

    SciTech Connect

    Petrini, Fabrizio; Gioiosa, Roberto

    2004-10-25

    TICK is a software package implemented in Linux 2.6 that allows user processes to be saved and restored without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can later be thawed on another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module for Linux version 2.6.5.

  12. A kernel autoassociator approach to pattern classification.

    PubMed

    Zhang, Haihong; Huang, Weimin; Huang, Zhiyong; Zhang, Bailing

    2005-06-01

    Autoassociators are a special type of neural network which, by learning to reproduce a given set of patterns, grasp the underlying concept that is useful for pattern classification. In this paper, we present a novel nonlinear model referred to as kernel autoassociators based on kernel methods. While conventional non-linear autoassociation models emphasize searching for the non-linear representations of input patterns, a kernel autoassociator takes a kernel feature space as the nonlinear manifold, and places emphasis on the reconstruction of input patterns from the kernel feature space. Two methods are proposed to address the reconstruction problem, using linear and multivariate polynomial functions, respectively. We apply the proposed model to novelty detection with or without novelty examples and study it on the promoter detection and sonar target recognition problems. We also apply the model to multi-class classification problems including wine recognition, glass recognition, handwritten digit recognition, and face recognition. The experimental results show that, compared with conventional autoassociators and other recognition systems, kernel autoassociators can provide better or comparable performance for concept learning and recognition in various domains. PMID:15971928

  14. PET Image Reconstruction Using Kernel Method

    PubMed Central

    Wang, Guobao; Qi, Jinyi

    2014-01-01

    Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
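
    A minimal numpy sketch of a kernelized MLEM update of the general form described above, where P, K and y stand in for the system matrix, the kernel matrix built from prior features, and the measured counts:

      import numpy as np

      def kernelized_mlem(P, K, y, n_iter=50):
          # Image model: x = K @ a; data model: y ~ Poisson(P @ x).
          a = np.ones(K.shape[1])
          sens = K.T @ (P.T @ np.ones(len(y)))     # sensitivity (normalization) term
          for _ in range(n_iter):
              ybar = P @ (K @ a) + 1e-12           # forward projection
              a *= (K.T @ (P.T @ (y / ybar))) / sens
          return K @ a                             # reconstructed image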

  15. Adaptive kernels for multi-fiber reconstruction.

    PubMed

    Barmpoutis, Angelos; Jian, Bing; Vemuri, Baba C

    2009-01-01

    In this paper we present a novel method for multi-fiber reconstruction given a diffusion-weighted MRI dataset. There are several existing methods that employ various spherical deconvolution kernels for achieving this task. However the kernels in all of the existing methods rely on certain assumptions regarding the properties of the underlying fibers, which introduce inaccuracies and unnatural limitations in them. Our model is a nontrivial generalization of the spherical deconvolution model, which unlike the existing methods does not make use of a fixed-shape kernel. Instead, the shape of the kernel is estimated simultaneously with the rest of the unknown parameters by employing a general adaptive model that can theoretically approximate any spherical deconvolution kernel. The performance of our model is demonstrated using simulated and real diffusion-weighted MR datasets and compared quantitatively with several existing techniques in the literature. The results obtained indicate that our model has superior performance that is close to the theoretic limit of the best possible achievable result.

  16. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
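
    A minimal sketch of kernel-weighted analog forecasting with a plain Gaussian kernel on raw states (the dynamics-adapted ingredients of the paper, such as delay-coordinate maps and the vector-field dependence, are omitted):

      import numpy as np

      def analog_forecast(history, x0, lead, epsilon=1.0):
          # history: (T, d) historical trajectory; x0: (d,) current state;
          # lead: forecast horizon in samples; epsilon: kernel bandwidth.
          past = history[:-lead]                 # analogs whose futures are known
          d2 = np.sum((past - x0) ** 2, axis=1)
          w = np.exp(-d2 / epsilon)              # similarity-kernel weights
          w /= w.sum()
          return w @ history[lead:]              # weighted mean of successors

    Setting all weight on the single closest analog recovers Lorenz's original scheme; the weighted ensemble is what improves forecast skill.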

  17. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  18. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
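
    For orientation, one common convention for such models assigns each vertex a type x_i uniform on (0,1) and joins i and j independently with probability min(κ(x_i, x_j)/n, 1); the quadratic-time reference sampler below is the naive approach that the paper's near-linear-time algorithm avoids:

      import numpy as np

      def naive_kernel_graph(n, kappa, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.random(n)                      # vertex types
          edges = []
          for i in range(n):
              for j in range(i + 1, n):
                  if rng.random() < min(kappa(x[i], x[j]) / n, 1.0):
                      edges.append((i, j))
          return edges

      # Example with a hypothetical additive kernel; the kernel choice tunes
      # the degree distribution and assortativity of the resulting graph.
      edges = naive_kernel_graph(1000, lambda u, v: 2.0 * (u + v))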

  19. Kernel bandwidth estimation for nonparametric modeling.

    PubMed

    Bors, Adrian G; Nasios, Nikolaos

    2009-12-01

    Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.
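
    For context, a plain kernel density estimate with a rule-of-thumb bandwidth; this is a common non-Bayesian baseline, not the Bayesian estimator proposed in the paper:

      import numpy as np

      def gaussian_kde(x_eval, data, h):
          # 1-D Gaussian kernel density estimate with bandwidth h.
          u = (x_eval[:, None] - data[None, :]) / h
          return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

      data = np.random.default_rng(1).normal(size=500)
      h = 1.06 * data.std() * len(data) ** -0.2    # Silverman's rule of thumb
      density = gaussian_kde(np.linspace(-4, 4, 81), data, h)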

  20. Experimental study of turbulent flame kernel propagation

    SciTech Connect

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark-ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the variation of spark energy from shot to shot. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals after the laser ignition, between 10 μs and 2 ms. The data show that the flame kernel structure starts with a spherical shape and changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower-jet flames. The growth rate of the average flame kernel radius divides into two linear regimes; the first, during the first 100 μs, is almost three times faster than the later stage between 100 and 2000 μs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions.

  1. Volatile compound formation during argan kernel roasting.

    PubMed

    El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe

    2013-01-01

    Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.

  2. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, kernel mappings in MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of the MKL with EKM. Different from the existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM requires simpler computation and less storage, especially in testing. Finally, the experimental results show that RMEKLM is efficient and effective in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
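
    For reference, a minimal numpy sketch of the (unreduced) empirical kernel mapping that EKM-based methods build on; the Gauss-elimination reduction that defines RMEKLM itself is not shown, and the helper names are hypothetical:

      import numpy as np

      def empirical_kernel_map(X_train, k, eps=1e-10):
          # Eigendecompose the training Gram matrix, then map each point
          # through Lambda^{-1/2} Q^T so dot products reproduce the kernel.
          m = len(X_train)
          K = np.array([[k(a, b) for b in X_train] for a in X_train])
          lam, Q = np.linalg.eigh(K)
          keep = lam > eps                       # drop numerically null directions
          W = Q[:, keep] / np.sqrt(lam[keep])
          def phi(x):                            # explicit feature vector for x
              kx = np.array([k(x, b) for b in X_train])
              return W.T @ kx
          return phi

    For training points, phi(x_i) · phi(x_j) recovers K_ij, which is the isomorphism property the abstract refers to.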

  3. Utilizing Kernelized Advection Schemes in Ocean Models

    NASA Astrophysics Data System (ADS)

    Zadeh, N.; Balaji, V.

    2008-12-01

    There has been a recent effort in the ocean model community to use a set of generic FORTRAN library routines for advection of scalar tracers in the ocean. In a collaborative project called Hybrid Ocean Model Environment (HOME), vastly different advection schemes (space-differencing schemes for the advection equation) become available to modelers in the form of subroutine calls (kernels). In this talk we explore the possibility of utilizing ESMF data structures in wrapping these kernels so that they can be readily used in ESMF gridded components.

  4. Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates

    SciTech Connect

    Hanft, J.M.; Jones, R.J.

    1986-06-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to (¹⁴C)sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on (¹⁴C)sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.

  5. Accuracy of Reduced and Extended Thin-Wire Kernels

    SciTech Connect

    Burke, G J

    2008-11-24

    Results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function; results are shown for simple wire structures.

  6. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
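
    For orientation, a compact NIPALS-style sketch of kernel PLS score extraction from a centered Gram matrix K and response matrix Y; this is a generic rendition under the usual deflation scheme, not necessarily the paper's exact algorithm, and centering of test kernels plus the prediction step are omitted:

      import numpy as np

      def kernel_pls_scores(K, Y, n_components, n_iter=100, tol=1e-10):
          # K: (n, n) centered Gram matrix; Y: (n, q) response matrix.
          K, Y = K.copy(), Y.astype(float).copy()
          n = K.shape[0]
          T = np.zeros((n, n_components))
          for a in range(n_components):
              u = Y[:, :1].copy()
              for _ in range(n_iter):
                  t = K @ u
                  t /= np.linalg.norm(t)
                  u_new = Y @ (Y.T @ t)          # response-side weights
                  u_new /= np.linalg.norm(u_new)
                  if np.linalg.norm(u_new - u) < tol:
                      u = u_new
                      break
                  u = u_new
              T[:, a] = t[:, 0]
              D = np.eye(n) - t @ t.T            # deflate the extracted direction
              K = D @ K @ D
              Y -= t @ (t.T @ Y)
          return T                               # regress Y on T for prediction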

  7. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    SciTech Connect

    Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber

    2010-10-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  8. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... the separated half of a kernel with not more than one-eighth broken off....

  9. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference algorithm KTD(λ) for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
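
    A simplified kernel TD(0) value estimator in the same spirit; the paper's KTD(λ) adds eligibility traces and is embedded in a policy-improvement loop, and the kernel choice, step size and discount here are hypothetical:

      import numpy as np

      def make_kernel_td(kernel, eta=0.1, gamma=0.9):
          # Value estimate V(x) = sum_i w_i * kernel(c_i, x) over stored centers.
          centers, weights = [], []
          def value(x):
              return sum(w * kernel(c, x) for c, w in zip(centers, weights))
          def update(x, r, x_next):
              delta = r + gamma * value(x_next) - value(x)   # TD error
              centers.append(x)            # functional-gradient step: add a
              weights.append(eta * delta)  # kernel copy centered at x
              return delta
          return value, update

      rbf = lambda a, b: np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2))
      value, update = make_kernel_td(rbf)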

  10. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  11. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  12. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  13. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  14. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...

  15. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  16. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  17. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  18. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  19. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a..., packaging, transporting, or holding food, subject to the provisions of this section. (a) Tamarind...

  20. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  1. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  2. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  3. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  4. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  5. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  6. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  7. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...

  8. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  9. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...

  10. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...

  11. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...

  12. Chare kernel; A runtime support system for parallel computations

    SciTech Connect

    Shu, W. ); Kale, L.V. )

    1991-03-01

    This paper presents the chare kernel system, which supports parallel computations with irregular structure. The chare kernel is a collection of primitive functions that manage chares, manipulate messages, invoke atomic computations, and coordinate concurrent activities. Programs written in the chare kernel language can be executed on different parallel machines without change. Users writing such programs concern themselves with the creation of parallel actions but not with assigning them to specific processors. The authors describe the design and implementation of the chare kernel. Performance of chare kernel programs on two hypercube machines, the Intel iPSC/2 and the NCUBE, is also given.

  13. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR was used. The multidimensional optimization problem was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% in comparison with the best commonly used kernel, introduced by Floyd and Steinberg. Other kernels obtained allow a significant reduction in the computational complexity of the halftoning process without reducing its quality.
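
    For reference, the Floyd-Steinberg baseline against which the optimized kernels are compared diffuses the quantization error with weights 7/16, 3/16, 5/16 and 1/16; a direct Python sketch:

      import numpy as np

      def floyd_steinberg(img):
          # img: 2-D grayscale array scaled to [0, 1]; returns a binary halftone.
          f = img.astype(float).copy()
          h, w = f.shape
          out = np.zeros_like(f)
          for y in range(h):
              for x in range(w):
                  out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
                  err = f[y, x] - out[y, x]          # diffuse error forward
                  if x + 1 < w:
                      f[y, x + 1] += err * 7 / 16
                  if y + 1 < h:
                      if x > 0:
                          f[y + 1, x - 1] += err * 3 / 16
                      f[y + 1, x] += err * 5 / 16
                      if x + 1 < w:
                          f[y + 1, x + 1] += err * 1 / 16
          return out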

  14. Online kernel principal component analysis: a reduced-order model.

    PubMed

    Honeine, Paul

    2012-09-01

    Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most used data analysis and dimensionality reduction techniques, the principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axis. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, both on a synthetic data set and on images of handwritten digits, with comparison to classical kernel-PCA and iterative kernel-PCA.

  15. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    The data from real world usually have nonlinear geometric structure, which are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for the learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in the machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most of the geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension for out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, in which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. Especially, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  16. Quark-hadron duality: Pinched kernel approach

    NASA Astrophysics Data System (ADS)

    Dominguez, C. A.; Hernandez, L. A.; Schilcher, K.; Spiesberger, H.

    2016-08-01

    Hadronic spectral functions measured by the ALEPH collaboration in the vector and axial-vector channels are used to study potential quark-hadron duality violations (DV). This is done entirely in the framework of pinched-kernel finite energy sum rules (FESR), i.e. in a model-independent fashion. The kinematical range of the ALEPH data is effectively extended up to s = 10 GeV² by using an appropriate kernel, and assuming that in this region the spectral functions are given by perturbative QCD. Support for this assumption is obtained by using e⁺e⁻ annihilation data in the vector channel. Results in both channels show a good saturation of the pinched FESR, without further need of explicit models of DV.
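
    Schematically, a pinched finite energy sum rule equates weighted integrals of the hadronic and QCD spectral functions with a weight that vanishes at the integration endpoint; normalizations and the specific kernel vary, so this is a sketch rather than the paper's exact sum rule:

      \int_{0}^{s_0} ds\,\left(1-\frac{s}{s_0}\right)\rho_{\mathrm{had}}(s) \;\simeq\; \int_{0}^{s_0} ds\,\left(1-\frac{s}{s_0}\right)\rho_{\mathrm{QCD}}(s)

    The pinch factor (1 - s/s0) suppresses the region near the timelike axis where duality violations are expected to concentrate.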

  17. Wilson Dslash Kernel From Lattice QCD Optimization

    SciTech Connect

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high-performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work on optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on the regular Xeon architecture as well.
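
    The stencil being optimized is, in one common convention (up to overall normalization and sign conventions):

      D\!\!\!/\,\psi(x) \;=\; \sum_{\mu=1}^{4}\left[\,U_\mu(x)\,(1-\gamma_\mu)\,\psi(x+\hat\mu) \;+\; U_\mu^\dagger(x-\hat\mu)\,(1+\gamma_\mu)\,\psi(x-\hat\mu)\,\right]

    Each site update reads eight neighboring spinors and eight gauge links, so the kernel is typically memory-bandwidth bound, which is why the cache-blocking and vectorization techniques discussed pay off.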

  18. Searching and Indexing Genomic Databases via Kernelization

    PubMed Central

    Gagie, Travis; Puglisi, Simon J.

    2015-01-01

    The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper, we survey the 20-year history of this idea and discuss its relation to kernelization in parameterized complexity. PMID:25710001

  19. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately given a sufficient number of support vectors. Experimental results on a wide variety of real-world small-instance-size and large-instance-size applications in the context of binary classification, multi-class problems and regression are then reported to show that RKELM can perform at a competitive level of generalization performance with the SVM/LS-SVM at only a fraction of the computational effort incurred.
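
    A minimal numpy sketch of the random-subset construction described above, with an RBF kernel and a ridge-style regularizer; the paper's exact regularization form may differ:

      import numpy as np

      def rbf(A, B, gamma=1.0):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def rkelm_train(X, T, m, C=100.0, seed=0):
          # Randomly pick m mapping samples, then solve a regularized
          # least-squares problem for the output weights beta.
          rng = np.random.default_rng(seed)
          S = X[rng.choice(len(X), size=m, replace=False)]
          K = rbf(X, S)                                     # n x m kernel matrix
          beta = np.linalg.solve(K.T @ K + np.eye(m) / C, K.T @ T)
          return S, beta

      def rkelm_predict(Xq, S, beta):
          return rbf(Xq, S) @ beta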

  20. Semi-Supervised Kernel Mean Shift Clustering.

    PubMed

    Anand, Saket; Mittal, Sushil; Tuzel, Oncel; Meer, Peter

    2014-06-01

    Mean shift clustering is a powerful nonparametric technique that does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. However, being completely unsupervised, its performance suffers when the original distance metric fails to capture the underlying cluster structure. Despite recent advances in semi-supervised clustering methods, there has been little effort towards incorporating supervision into mean shift. We propose a semi-supervised framework for kernel mean shift clustering (SKMS) that uses only pairwise constraints to guide the clustering procedure. The points are first mapped to a high-dimensional kernel space where the constraints are imposed by a linear transformation of the mapped points. This is achieved by modifying the initial kernel matrix by minimizing a log det divergence-based objective function. We show the advantages of SKMS by evaluating its performance on various synthetic and real datasets while comparing with state-of-the-art semi-supervised clustering algorithms. PMID:26353281

  1. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-01

    The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are generally correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, coded as a sequence of flower numbers per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to the real dataset of rose inflorescence shoots (n=1460) obtained from a mapping population of 98 F1 hybrids. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL that was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.

  3. The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz

    2016-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
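
    The self-calibrating combinations can be obtained as the left null space of the matrix A that propagates pupil-plane phase errors into Fourier phases; a minimal numpy sketch, with A a hypothetical phase-transfer matrix:

      import numpy as np

      def kernel_phases(A, measured_phases):
          # Rows of Knull span the left null space of A, so Knull @ A = 0 and
          # small pupil aberrations (entering as A @ phi) cancel to first order.
          U, s, Vt = np.linalg.svd(A)
          rank = int((s > s.max() * 1e-10).sum())
          Knull = U[:, rank:].T
          return Knull @ measured_phases           # kernel-phase observables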

  4. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  5. Multiple kernel learning for sparse representation-based classification.

    PubMed

    Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama

    2014-07-01

    In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training method to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226

  6. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps.

    PubMed

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard; Hansen, Lars Kai

    2011-04-01

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVMs), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification models. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli. We show that the performance of linear models is reduced for certain scan labelings/categorizations in this data set, while the nonlinear models provide more flexibility. We show that the sensitivity map can be used to visualize nonlinear versions of kernel logistic regression, the kernel Fisher discriminant, and the SVM, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging.
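
    A minimal sketch of the idea (not the authors' exact estimator): for an RBF-kernel SVM, a sensitivity map can be taken as the mean squared analytic gradient of the decision function with respect to each input feature. The toy "fMRI" data and parameter choices are assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def sensitivity_map(clf, X, gamma):
        """Mean squared gradient of an RBF-SVM decision function per feature.

        For f(x) = sum_i a_i exp(-gamma ||x - x_i||^2) + b, the gradient is
        sum_i a_i k_i * (-2 * gamma) * (x - x_i), available in closed form.
        """
        SV, a = clf.support_vectors_, clf.dual_coef_.ravel()
        grads = []
        for x in X:
            diff = x - SV                                  # (n_sv, n_features)
            k = np.exp(-gamma * np.sum(diff ** 2, axis=1))
            grads.append((a * k) @ (-2.0 * gamma * diff))
        return (np.array(grads) ** 2).mean(axis=0)

    # toy "fMRI" data: 100 scans x 50 voxels, signal carried by voxels 0-4
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 50))
    y = (X[:, :5].sum(axis=1) > 0).astype(int)
    gamma = 1.0 / X.shape[1]
    clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)
    print(sensitivity_map(clf, X, gamma).argsort()[-5:])   # most influential voxels
    ```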

  7. Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.

    SciTech Connect

    CHIBANI, OMAR

    1999-05-12

    Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.

  8. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
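
    One member of such a family can be sketched as a Neumann-type kernel on the co-citation matrix, where a single parameter moves the measure from relatedness toward global importance. The parameterization and the tiny citation matrix below are illustrative assumptions.

    ```python
    import numpy as np

    def neumann_kernel(A, gamma):
        """Neumann-type kernel on a citation graph (illustrative).

        A[i, j] = 1 if document i cites document j.  With the co-citation
        matrix M = A^T A, K = M (I - gamma M)^{-1} sums co-citation walks of
        all lengths.  gamma = 0 gives pure relatedness (co-citation counts);
        gamma approaching 1/rho(M) is dominated by the principal eigenvector,
        i.e. HITS-like global importance.
        """
        M = A.T @ A
        rho = np.max(np.abs(np.linalg.eigvalsh(M)))    # spectral radius
        assert 0 <= gamma < 1.0 / rho, "series diverges otherwise"
        return M @ np.linalg.inv(np.eye(M.shape[0]) - gamma * M)

    # tiny four-document citation network
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)
    K_related = neumann_kernel(A, gamma=0.0)   # relatedness end of the family
    K_global = neumann_kernel(A, gamma=0.3)    # biased toward global importance
    ```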

  9. Embedded real-time operating system micro kernel design

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here specifies the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results show that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.

  10. Robust visual tracking via speedup multiple kernel ridge regression

    NASA Astrophysics Data System (ADS)

    Qian, Cheng; Breckon, Toby P.; Li, Hui

    2015-09-01

    Most tracking methods attempt to build feature spaces to represent the appearance of a target. However, limited by the complex structure of the feature distribution, feature spaces constructed in a linear manner cannot characterize the nonlinear structure well. We propose an appearance model based on kernel ridge regression for visual tracking. Dense sampling is performed around the target image patch to collect training samples. To obtain a kernel space well suited to describing the target appearance, multiple kernel learning is introduced into the selection of kernels: instead of a single kernel, a linear combination of kernels is learned from the training samples to create the kernel space. Exploiting the circulant property of a kernel matrix, a fast iterative interpolation algorithm is developed to seek the coefficients assigned to these kernels so as to give an optimal combination. After the regression function is learned, all candidate image patches gathered are taken as input to the function, and the candidate with the maximal response is regarded as the target image patch. Extensive experimental results demonstrate that the proposed method outperforms other state-of-the-art tracking methods.
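
    The circulant-property speedup the abstract relies on can be sketched for a single Gaussian kernel; the multiple-kernel combination and the interpolation step for the coefficients are omitted, and the patch sizes and regularization value are assumptions.

    ```python
    import numpy as np

    def gaussian_correlation(x, z, sigma=0.5):
        """Gaussian kernel between x and every cyclic shift of z, via the FFT.

        This single FFT pass is the circulant-matrix property that makes
        kernel ridge regression fast over densely sampled cyclic shifts.
        """
        xz = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))))
        d2 = (x ** 2).sum() + (z ** 2).sum() - 2.0 * xz
        return np.exp(-np.maximum(d2, 0) / (sigma ** 2 * x.size))

    def train(x, y, lam=1e-4):
        # Closed-form kernel ridge regression coefficients, Fourier domain.
        return np.fft.fft2(y) / (np.fft.fft2(gaussian_correlation(x, x)) + lam)

    def detect(alphaf, x, z):
        # Regression response over all cyclic shifts; the peak locates the target.
        return np.real(np.fft.ifft2(alphaf * np.fft.fft2(gaussian_correlation(z, x))))

    # train on a 32x32 patch with a Gaussian regression target peaked at (0, 0)
    rng = np.random.default_rng(0)
    x = rng.normal(size=(32, 32))
    yy, xx = np.mgrid[0:32, 0:32]
    y = np.exp(-(np.minimum(xx, 32 - xx) ** 2 + np.minimum(yy, 32 - yy) ** 2) / 8.0)
    resp = detect(train(x, y), x, x)        # argmax of resp should be (0, 0)
    ```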

  11. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We argue that a virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Imposing Gaussian noise (or other types of noise) on the original training samples is a simple and feasible way to obtain virtual face samples that capture possible variations of the original samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.

  12. LFK. Livermore FORTRAN Kernel Computer Test

    SciTech Connect

    McMahon, F.H.

    1990-05-01

    LFK, the Livermore FORTRAN Kernels, is a computer performance test that measures a realistic floating-point performance range for FORTRAN applications. Informally known as the Livermore Loops test, the LFK test may be used as a computer performance test, as a test of compiler accuracy (via checksums) and efficiency, or as a hardware endurance test. The LFK test, which focuses on FORTRAN as used in computational physics, measures the joint performance of the computer CPU, the compiler, and the computational structures in units of Megaflops/sec or Mflops. A C language version of subroutine KERNEL is also included which executes 24 samples of C numerical computation. The 24 kernels are a hydrodynamics code fragment, a fragment from an incomplete Cholesky conjugate gradient code, the standard inner product function of linear algebra, a fragment from a banded linear equations routine, a segment of a tridiagonal elimination routine, an example of a general linear recurrence equation, an equation of state fragment, part of an alternating direction implicit integration code, an integrate predictor code, a difference predictor code, a first sum, a first difference, a fragment from a two-dimensional particle-in-cell code, a part of a one-dimensional particle-in-cell code, an example of how casually FORTRAN can be written, a Monte Carlo search loop, an example of an implicit conditional computation, a fragment of a two-dimensional explicit hydrodynamics code, a general linear recurrence equation, part of a discrete ordinates transport program, a simple matrix calculation, a segment of a Planckian distribution procedure, a two-dimensional implicit hydrodynamics fragment, and determination of the location of the first minimum in an array.

  13. Oil point pressure of Indian almond kernels

    NASA Astrophysics Data System (ADS)

    Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.

    2012-07-01

    The effects of preprocessing conditions such as moisture content, heating temperature, heating time and particle size on the oil point pressure of Indian almond kernels were investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by all of the above-mentioned parameters. Oil point pressure decreased with increasing heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content increased the oil point pressure of coarse particles, whereas it reduced that of fine particles.

  14. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V. )

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  15. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Corn hardness as is an important property for dry and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...

  16. Linear and kernel methods for multi- and hypervariate change detection

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan A.; Canty, Morton J.

    2010-10-01

    The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery as well as for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written.
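
    A minimal sketch of the dual (Q-mode) kernel PCA described above, assuming an RBF kernel: sub-sampled training pixels define the eigenbasis, and the remaining (test) pixels are projected onto it, mirroring the sub-sampling strategy for large images. All names and parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def kernel_pca(X_train, X_test, n_components=3, gamma=0.1):
        """Kernel PCA via the Gram matrix only (the dual / Q-mode formulation)."""
        K = rbf_kernel(X_train, X_train, gamma=gamma)
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
        vals, vecs = np.linalg.eigh(H @ K @ H)
        idx = np.argsort(vals)[::-1][:n_components]
        alphas = vecs[:, idx] / np.sqrt(vals[idx])     # normalized dual eigenvectors
        # project the (possibly much larger) test set onto the same eigenbasis
        K_test = rbf_kernel(X_test, X_train, gamma=gamma)
        ones = np.ones((K_test.shape[0], n)) / n
        return (K_test - ones @ K) @ H @ alphas

    # sub-sampled training pixels define the basis; all other pixels are projected
    rng = np.random.default_rng(0)
    train_pix = rng.normal(size=(200, 6))
    test_pix = rng.normal(size=(1000, 6))
    scores = kernel_pca(train_pix, test_pix)
    ```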

  17. Fructan metabolism in developing wheat (Triticum aestivum L.) kernels.

    PubMed

    Verspreet, Joran; Cimini, Sara; Vergauwen, Rudy; Dornez, Emmie; Locato, Vittoria; Le Roy, Katrien; De Gara, Laura; Van den Ende, Wim; Delcour, Jan A; Courtin, Christophe M

    2013-12-01

    Although fructans play a crucial role in wheat kernel development, their metabolism during kernel maturation is far from being understood. In this study, all major fructan-metabolizing enzymes together with fructan content, fructan degree of polymerization and the presence of fructan oligosaccharides were examined in developing wheat kernels (Triticum aestivum L. var. Homeros) from anthesis until maturity. Fructan accumulation occurred mainly in the first 2 weeks after anthesis, and a maximal fructan concentration of 2.5 ± 0.3 mg fructan per kernel was reached at 16 days after anthesis (DAA). Fructan synthesis was catalyzed by 1-SST (sucrose:sucrose 1-fructosyltransferase) and 6-SFT (sucrose:fructan 6-fructosyltransferase), and to a lesser extent by 1-FFT (fructan:fructan 1-fructosyltransferase). Despite the presence of 6G-kestotriose in wheat kernel extracts, the measured 6G-FFT (fructan:fructan 6G-fructosyltransferase) activity levels were low. During kernel filling, which lasted from 2 to 6 weeks after anthesis, kernel fructan content decreased from 2.5 ± 0.3 to 1.31 ± 0.12 mg fructan per kernel (42 DAA) and the average fructan degree of polymerization decreased from 7.3 ± 0.4 (14 DAA) to 4.4 ± 0.1 (42 DAA). FEH (fructan exohydrolase) reached maximal activity between 20 and 28 DAA. No fructan-metabolizing enzyme activities were registered during the final phase of kernel maturation, and fructan content and structure remained unchanged. This study provides insight into the complex metabolism of fructans during wheat kernel development and relates fructan turnover to the general phases of kernel development.

  18. Aligning Biomolecular Networks Using Modular Graph Kernels

    NASA Astrophysics Data System (ADS)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  19. Bergman kernel, balanced metrics and black holes

    NASA Astrophysics Data System (ADS)

    Klevtsov, Semyon

    In this thesis we explore the connections between Kähler geometry and Landau levels on compact manifolds. We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory. The physics interpretation of this result is as an expansion of the projector of wavefunctions on the lowest Landau level, in the special case that the magnetic field is proportional to the Kähler form. This is a geometric expansion, somewhat similar to the DeWitt-Seeley-Gilkey short time expansion for the heat kernel, but in this case describing the long time limit, without depending on supersymmetry. We also generalize this expansion to supersymmetric quantum mechanics and more general magnetic fields, and explore its applications. These include the quantum Hall effect in curved space, the balanced metrics and Kähler gravity. In particular, we conjecture that for a probe in a BPS black hole in type II strings compactified on Calabi-Yau manifolds, the moduli space metric is the balanced metric.

  20. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges, and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971

  1. Pareto-path multitask multiple kernel learning.

    PubMed

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches. PMID:25532155

  2. Scientific Computing Kernels on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  4. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances calibration accuracy against model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines, and the model complexity is controlled by minimizing the L1 norm of the kernel coefficient vector. In the context of support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.

  5. Transcriptome analysis of Ginkgo biloba kernels

    PubMed Central

    He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an

    2015-01-01

    Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08 Gb of clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein databases. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data can remarkably expand the existing transcriptome resources of Ginkgo, and provide a valuable platform to reveal more about the developmental and metabolic mechanisms of this species. PMID:26500663

  6. Analysis of maize (Zea mays) kernel density and volume using micro-computed tomography and single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...

  7. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  8. Evidence-based kernels: fundamental units of behavioral influence.

    PubMed

    Embry, Dennis D; Biglan, Anthony

    2008-09-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior.

  9. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  10. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of...

  11. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of...

  12. Evidence-Based Kernels: Fundamental Units of Behavioral Influence

    ERIC Educational Resources Information Center

    Embry, Dennis D.; Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…

  13. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  14. Sugar uptake into kernels of tunicate tassel-seed maize

    SciTech Connect

    Thomas, P.A.; Felker, F.C.; Crawford, C.G. )

    1990-05-01

    A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. [14C]Fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.

  15. Introduction to Kernel Methods: Classification of Multivariate Data

    NASA Astrophysics Data System (ADS)

    Fauvel, M.

    2016-05-01

    In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example is given to illustrate the main idea of kernel methods. Emphasis is then placed on the Support Vector Machine: structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
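
    In the spirit of the chapter's closing demonstration, a short example comparing linear and non-linear SVM classification on simulated "hyperspectral" data (all sizes and parameters are illustrative):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # simulated "hyperspectral" pixels: 200 samples, 30 bands, two classes
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 30)),
                   rng.normal(0.7, 1.2, (100, 30))])
    y = np.repeat([0, 1], 100)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

    linear = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)    # linear SVM
    nonlin = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)  # non-linear SVM
    print("linear SVM accuracy:", linear.score(X_te, y_te))
    print("RBF SVM accuracy:  ", nonlin.score(X_te, y_te))
    ```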

  16. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  17. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  18. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  19. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  20. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  1. Accumulation of storage products in oat during kernel development.

    PubMed

    Banaś, A; Dahlqvist, A; Debski, H; Gummeson, P O; Stymne, S

    2000-12-01

    Lipids, proteins and starch are the main storage products in oat seeds. As a first step in elucidating the regulatory mechanisms behind the deposition of these compounds, two different oat varieties, 'Freja' and 'Matilda', were analysed during kernel development. In both cultivars, the majority of the lipids accumulated at a very early stage of development, but Matilda accumulated about twice the amount of lipids compared to Freja. Accumulation of proteins and starch also started early in kernel development but, in contrast to lipids, continued over a considerably longer period. The high-oil variety Matilda also accumulated higher amounts of proteins than Freja. The starch content in Freja kernels was higher than in Matilda kernels, and the difference was most pronounced during the early stage of development, when oil synthesis was most active. Oleosin accumulation continued during the whole period of kernel development.

  2. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
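
    A minimal sketch of the kernelized ML-EM update described above, with the image parameterized as x = K·alpha; the toy system matrix, anatomical feature vectors, and iteration count are placeholders, not the paper's setup.

    ```python
    import numpy as np

    def kernel_mlem(y, A, K, n_iter=100):
        """Kernelized ML-EM sketch: reconstruct x = K @ alpha from sinogram y.

        The multiplicative MLEM update is applied to the kernel coefficients
        alpha rather than to the image, so the anatomical prior encoded in K
        regularizes the reconstruction without any explicit penalty term.
        """
        alpha = np.ones(K.shape[1])
        sens = K.T @ (A.T @ np.ones(y.shape[0]))        # sensitivity term
        for _ in range(n_iter):
            proj = A @ (K @ alpha)                      # forward projection
            back = K.T @ (A.T @ (y / np.maximum(proj, 1e-12)))
            alpha *= back / np.maximum(sens, 1e-12)
        return K @ alpha                                # reconstructed image

    # toy problem: 40 detector bins, 25 voxels, kernel from anatomical features
    rng = np.random.default_rng(0)
    A = rng.uniform(size=(40, 25))
    feats = rng.normal(size=(25, 3))                    # stand-in anatomical features
    d2 = ((feats[:, None] - feats[None, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2)                               # Gaussian kernel matrix
    y = rng.poisson(A @ rng.uniform(1, 5, size=25)).astype(float)
    x_hat = kernel_mlem(y, A, K)
    ```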

  3. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains similar features to a theoretical damping kernel but not to a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  4. OSKI: A Library of Automatically Tuned Sparse Matrix Kernels

    SciTech Connect

    Vuduc, R; Demmel, J W; Yelick, K A

    2005-07-19

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.

  5. Feasibility of near infrared spectroscopy for analyzing corn kernel damage and viability of soybean and corn kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...

  6. A visualization tool for the kernel-driven model with improved ability in data analysis and kernel assessment

    NASA Astrophysics Data System (ADS)

    Dong, Yadong; Jiao, Ziti; Zhang, Hu; Bai, Dongni; Zhang, Xiaoning; Li, Yang; He, Dandan

    2016-10-01

    The semi-empirical, kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model has been widely used in many areas of remote sensing. As the kernel-driven model develops, there is a need to further assess the performance of newly developed kernels. Visualization tools can facilitate the analysis of model results and the assessment of newly developed kernels. However, the current version of the kernel-driven model does not contain a visualization function. In this study, a user-friendly visualization tool, named MaKeMAT, was developed specifically for the kernel-driven model. The POLDER-3 and CAR BRDF datasets were used to demonstrate the applicability of MaKeMAT. Visualization of the input multi-angle measurements enhances understanding of the data and allows selection of measurements with good representativeness. Visualization of the modeling results facilitates the assessment of newly developed kernels. The study shows that the visualization tool MaKeMAT can promote the widespread application of the kernel-driven model.

  7. Privacy preserving RBF kernel support vector machine.

    PubMed

    Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2014-01-01

    Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they neither consider the characteristics of biomedical data nor make full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data. PMID:25013805

  8. Point-Kernel Shielding Code System.

    1982-02-17

    Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
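
    As a rough illustration of the point-kernel approach such codes implement, the sketch below evaluates the textbook expression for an isotropic point source behind a shield: exponential attenuation, an infinite-medium buildup factor, and inverse-square geometry. The linear buildup form and all numerical values are assumptions, not data from the code's internal library.

    ```python
    import numpy as np

    def point_kernel_dose(S, mu, r, buildup=lambda mu_r: 1.0 + mu_r):
        """Point-kernel photon dose quantity (illustrative).

        S       source strength (photons/s, folded with a flux-to-dose factor)
        mu      linear attenuation coefficient of the shield (1/cm)
        r       source-to-detector distance (cm)
        buildup infinite-medium buildup factor as a function of mu*r; the
                linear form here is a toy stand-in for tabulated factors.
        """
        mu_r = mu * r
        return S * buildup(mu_r) * np.exp(-mu_r) / (4.0 * np.pi * r ** 2)

    # e.g. a detector 50 cm away, behind about 2 mean free paths of shielding
    print(point_kernel_dose(S=1e9, mu=0.04, r=50.0))
    ```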

  9. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are performed for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to individual scalar processors in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
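
    For illustration, a NumPy sketch of the same node-wise decomposition, vectorized over (node, particle) pairs; assigning each node to a GPU thread, as the paper does in CUDA-C, parallelizes the outer dimension. Names and parameters are illustrative.

    ```python
    import numpy as np

    def kde_grid(points, nodes, h=0.3):
        """Gaussian kernel density estimate evaluated at a set of node points.

        Vectorized over (node, particle) pairs -- the same one-thread-per-node
        decomposition that the paper assigns to the GPU's scalar processors.
        """
        d2 = ((nodes[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
        norm = len(points) * 2.0 * np.pi * h ** 2      # 2-D Gaussian normalization
        return np.exp(-0.5 * d2 / h ** 2).sum(axis=1) / norm

    # bivariate-normal particles evaluated on a 32 x 32 grid of nodes
    rng = np.random.default_rng(1)
    pts = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 0.5]], size=2000)
    g = np.linspace(-3, 3, 32)
    nodes = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    density = kde_grid(pts, nodes)
    ```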

  10. The flare kernel in the impulsive phase

    NASA Technical Reports Server (NTRS)

    Dejager, C.

    1986-01-01

    The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.

  11. Labeled Graph Kernel for Behavior Analysis.

    PubMed

    Zhao, Ruiqi; Martinez, Aleix M

    2016-08-01

    Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.

  12. Equivalence of kernel machine regression and kernel distance covariance for multidimensional phenotype association studies.

    PubMed

    Hua, Wen-Yu; Ghosh, Debashis

    2015-09-01

    Associating genetic markers with a multidimensional phenotype is an important yet challenging problem. In this work, we establish the equivalence between two popular methods: kernel-machine regression (KMR) and kernel distance covariance (KDC). KMR is a semiparametric regression framework that models covariate effects parametrically and genetic markers non-parametrically, while KDC represents a class of methods that includes distance covariance (DC) and the Hilbert-Schmidt independence criterion (HSIC), which are nonparametric tests of independence. We show that the equivalence between the score test of KMR and the KDC statistic under certain conditions can lead to a novel generalization of the KDC test that incorporates covariates. Our contributions are 3-fold: (1) establishing the equivalence between KMR and KDC; (2) showing that the principles of KMR can be applied to the interpretation of KDC; (3) developing a broader class of KDC statistics, where the class members are statistics corresponding to different kernel combinations. Finally, we perform simulation studies and an analysis of real data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. The ADNI analysis suggests that SNPs of FLJ16124 exhibit pairwise interaction effects that are strongly correlated with the changes of brain region volumes. PMID:25939365
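
    One concrete member of the KDC family is the (biased) HSIC statistic; a short sketch under assumed Gaussian kernels, with toy data in which one phenotype column depends on one marker. Kernel choices and bandwidths are assumptions.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def hsic(X, Y, gamma_x=1.0, gamma_y=1.0):
        """Biased HSIC estimate, one member of the KDC family of statistics.

        HSIC = trace(K H L H) / (n - 1)^2 with H the centering matrix; it is
        zero in expectation iff X and Y are independent (for universal kernels).
        """
        n = X.shape[0]
        K = rbf_kernel(X, X, gamma=gamma_x)     # kernel on genetic markers
        L = rbf_kernel(Y, Y, gamma=gamma_y)     # kernel on phenotypes
        H = np.eye(n) - np.ones((n, n)) / n
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    # toy data: the second phenotype column is driven by the first marker
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    Y = np.column_stack([rng.normal(size=200),
                         X[:, 0] + 0.1 * rng.normal(size=200)])
    print(hsic(X, Y))
    ```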

  14. Probability-confidence-kernel-based localized multiple kernel learning with lp norm.

    PubMed

    Han, Yina; Liu, Guizhong

    2012-06-01

    Localized multiple kernel learning (LMKL) is an attractive strategy for combining multiple heterogeneous features in terms of their discriminative power for each individual sample. However, models that fit excessively to a specific sample obstruct extension to unseen data, while a more general form is often insufficient for characterizing diverse localities. Hence, both learning sample-specific local models for each training datum and extending the learned models to unseen test data should be equally addressed in designing an LMKL algorithm. In this paper, for an integrative solution, we propose a probability confidence kernel (PCK), which measures per-sample similarity with respect to a probabilistic-prediction-based class attribute: the class attribute similarity complements the spatial-similarity-based base kernels for more reasonable locality characterization, and the predefined form of the involved class probability density function facilitates the extension to the whole input space and ensures its statistical meaning. Incorporating PCK into a support-vector-machine-based LMKL framework, we propose a new PCK-LMKL with an arbitrary l(p)-norm constraint implied in the definition of PCKs, where both the parameters in PCK and the final classifier can be efficiently optimized in a joint manner. Evaluations of PCK-LMKL on both benchmark machine learning data sets (ten University of California Irvine (UCI) data sets) and challenging computer vision data sets (15-scene data set and Caltech-101 data set) have shown it to achieve state-of-the-art performance.

  15. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.

  16. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters. PMID:25794377

  17. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry, with potentially devastating consequences for corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak shifted toward longer wavelengths in the blue region for the highly contaminated kernels and toward shorter wavelengths for the clean kernels. Highly contaminated kernels were also found to have a lower fluorescence peak magnitude compared with the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), for the multiple linear regression model was 0.72. The multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and >or=100 ng g(-1) (parts per billion), were significantly different from each other at the alpha = 0.01 level. Classification accuracy under a two-class schema ranged from 0.84 to

  18. Bridging the gap between the KERNEL and RT-11

    SciTech Connect

    Hendra, R.G.

    1981-06-01

    A software package is proposed to allow users of the PL-11 language, and the LSI-11 KERNEL in general, to use their PL-11 programs under RT-11. Further, some general purpose extensions to the KERNEL are proposed that facilitate some number conversions and string manipulations. A Floating Point Package of procedures to allow full use of the hardware floating point capability of the LSI-11 computers is proposed. Extensions to the KERNEL that allow a user to read, write, and delete disc files in the manner of RT-11 are also proposed. A device directory listing routine is also included.

  19. Spectrophotometric method for determination of phosphine residues in cashew kernels.

    PubMed

    Rangaswamy, J R

    1988-01-01

    A spectrophotometric method reported for determination of phosphine (PH3) residues in wheat has been extended for determination of these residues in cashew kernels. Unlike the spectrum for wheat, the spectrum of PH3 residue-AgNO3 chromophore from cashew kernels does not show an absorption maximum at 400 nm; nevertheless, reading the absorbance at 400 nm afforded good recoveries of 90-98%. No interference occurred from crop materials, and crop controls showed low absorbance; the method can be applied for determinations as low as 0.01 ppm PH3 residue in cashew kernels.

  20. Initial-state splitting kernels in cold nuclear matter

    NASA Astrophysics Data System (ADS)

    Ovanesyan, Grigory; Ringer, Felix; Vitev, Ivan

    2016-09-01

    We derive medium-induced splitting kernels for energetic partons that undergo interactions in dense QCD matter before a hard-scattering event at large momentum transfer Q2. Working in the framework of the effective theory SCETG, we compute the splitting kernels beyond the soft gluon approximation. We present numerical studies that compare our new results with previous findings. We expect the full medium-induced splitting kernels to be most relevant for the extension of initial-state cold nuclear matter energy loss phenomenology in both p+A and A+A collisions.

  1. Kernel simplex growing algorithm for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao

    2014-01-01

    In order to effectively extract endmembers from hyperspectral imagery where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to a kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scattering effects can be ignored. To avoid determining the complex nonlinear mapping explicitly, a kernel function is used to extend the NSGA to a kernel NSGA (KNSGA). Experimental results on simulated and real data show that the proposed KNSGA approach outperforms SGA and NSGA.

  2. Multitasking kernel for the C and Fortran programming languages

    SciTech Connect

    Brooks, E.D. III

    1984-09-01

    A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.

  3. Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.

    1999-05-12

    Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.

  4. Kernel-based Linux emulation for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

    CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss CNKemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.

  5. Inheritance of Kernel Color in Corn: Explanations and Investigations.

    ERIC Educational Resources Information Center

    Ford, Rosemary H.

    2000-01-01

    Offers a new perspective on a traditional problem in genetics, kernel color in corn, including information about genetic regulation, metabolic pathways, and the evolution of genes. (Contains 15 references.) (ASK)

  6. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
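
    A generic recreation of this pipeline is straightforward with scikit-learn; the synthetic features below merely stand in for the study's 7 color and 10 morphological measurements, and the two-hidden-layer network approximates the four-layer back-propagation network.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        # 700 kernels x 17 features, 7 classes with shifted means (stand-in data)
        y = np.repeat(np.arange(7), 100)
        X = rng.normal(size=(700, 17)) + 0.5 * y[:, None]

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        model = make_pipeline(
            LinearDiscriminantAnalysis(n_components=6),
            MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000))
        print("test accuracy:", model.fit(Xtr, ytr).score(Xte, yte))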

  7. Isolation and purification of D-mannose from palm kernel.

    PubMed

    Zhang, Tao; Pan, Ziguo; Qian, Chao; Chen, Xinzhi

    2009-09-01

    An economically viable procedure for the isolation and purification of D-mannose from palm kernel was developed in this research. The palm kernel was catalytically hydrolyzed with sulfuric acid at 100 degrees C and then fermented by mannan-degrading enzymes. The solution after fermentation underwent filtration in a silica gel column, desalination by ion-exchange resin, and crystallization in ethanol to produce pure D-mannose in a total yield of 48.4% (based on the weight of the palm kernel). Different enzymes were investigated, and the results indicated that endo-beta-mannanase was the best enzyme to promote the hydrolysis of the oligosaccharides isolated from the palm kernel. The pure D-mannose sample was characterized by FTIR, (1)H NMR, and (13)C NMR spectra.

  8. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefits in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations. PMID:25594982
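
    A real-valued kernel LMS sketch conveys the core of the algorithm; the quaternion version replaces the Gaussian kernel and the gradient computation with their quaternion (HR-calculus) counterparts. The step size and kernel width are arbitrary choices.

        import numpy as np

        def klms(X, d, eta=0.5, gamma=1.0):
            # Online kernel LMS: predict, compute the error, then store the
            # current sample as a new kernel center with coefficient eta*error.
            centers, alphas = [], []
            preds = np.zeros(len(d))
            for n in range(len(d)):
                k = np.array([np.exp(-gamma * np.sum((X[n] - c) ** 2))
                              for c in centers])
                preds[n] = k @ np.array(alphas) if centers else 0.0
                alphas.append(eta * (d[n] - preds[n]))
                centers.append(X[n])
            return preds

        rng = np.random.default_rng(3)
        X = rng.normal(size=(200, 2))
        d = np.sin(X[:, 0]) * X[:, 1]            # a nonlinear target
        print("final abs error:", abs(d[-1] - klms(X, d)[-1]))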

  9. The Dynamic Kernel Scheduler-Part 1

    NASA Astrophysics Data System (ADS)

    Adelmann, Andreas; Locans, Uldis; Suter, Andreas

    2016-10-01

    Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However, developing software that uses these hardware accelerators introduces additional challenges for the developer. These challenges may include exposing increased parallelism, handling different hardware designs, and using multiple development frameworks in order to utilise devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL, and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated into OPAL (Object-oriented Parallel Accelerator Library) in order to speed up a parallel FFT based Poisson solver and Monte Carlo simulations for particle-matter interaction used for proton therapy degrader modelling. DKS was also used together with Minuit2 for parameter fitting, where χ2 and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of the DKS, first results, and plans for the future will be shown in this paper.

  10. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before the evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning in the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory--B3LYP/3-21G*, implemented using the kernel energy method to make the computations practical and efficient. We find that the necessary conditions that would characterize a practicable protoribosome--namely (i) energetic structural stability and (ii) energetically stable attachment to substrates--are both well satisfied.
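
    For context, the kernel energy method reconstructs the energy of a large molecule from calculations on small fragments ("kernels"). In its usual double-kernel form (our paraphrase, stated as an assumption rather than a quotation of this paper), the total energy of n kernels is assembled as

        E_total ≈ Σ_{m=1}^{n-1} Σ_{m'=m+1}^{n} E_{mm'} − (n − 2) Σ_{m=1}^{n} E_m

    where E_m is the energy of kernel m and E_{mm'} that of the fused pair mm'; the second term removes the single-kernel energies over-counted by the pair sum.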

  11. Local Kernel for Brains Classification in Schizophrenia

    NASA Astrophysics Data System (ADS)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Matching is then obtained by introducing a local kernel for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since up to 75% classification accuracy has been obtained with this technique, improving to 85% when the subjects are stratified by sex.
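
    The local kernel over unordered feature sets can be sketched as a normalized sum of pairwise descriptor matches; the paper's weighting of discriminative feature groups is omitted here, and the Gaussian match width is an arbitrary choice.

        import numpy as np

        def local_match_kernel(A, B, gamma=0.5):
            # A, B: images as unordered sets of local descriptors (rows),
            # e.g. SIFT vectors; the kernel is the mean pairwise Gaussian match.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2).mean()

        rng = np.random.default_rng(4)
        img1 = rng.normal(size=(30, 128))        # 30 SIFT-like descriptors
        img2 = rng.normal(size=(25, 128))
        print("kernel value:", local_match_kernel(img1, img2))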

  12. Kernel MAD Algorithm for Relative Radiometric Normalization

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Tang, Ping; Hu, Changmiao

    2016-06-01

    The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the nonlinear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA version of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as it describes the nonlinear relationships between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.

  13. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

    Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, and metabolic networks. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue for achieving good performance. We successfully test the model on four toy problems and on a real-world network. We also compare our model with Evolutionary Spectral Clustering, a state-of-the-art algorithm for community detection in evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.

  14. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques for perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence to the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
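
    Schematically, and with the caveat that the real memory kernels are time-dependent quantities rather than numbers, the two resummations being compared can be captured in a toy calculation where K2 and K4 denote the second- and fourth-order contributions; the [1/1] Padé and exponential forms below are our illustrative reading of the standard prescriptions, both of which reproduce K2 + K4 when expanded to low order.

        import math

        def pade_resummed(K2, K4):
            # [1/1] Pade form; develops a pole (divergence) as K4/K2 -> 1
            return K2 / (1.0 - K4 / K2)

        def exponential_resummed(K2, K4):
            # Exponential (Landau-Zener-type) form; same low-order expansion,
            # but free of the Pade pole
            return K2 * math.exp(K4 / K2)

        print(pade_resummed(1.0, 0.3), exponential_resummed(1.0, 0.3))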

  15. The Weighted Super Bergman Kernels Over the Supermatrix Spaces

    NASA Astrophysics Data System (ADS)

    Feng, Zhiming

    2015-12-01

    The purpose of this paper is threefold. Firstly, using Howe duality, we obtain integral formulas for the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions for the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations between integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.

  16. Kernel approximation for solving few-body integral equations

    NASA Astrophysics Data System (ADS)

    Christie, I.; Eyre, D.

    1986-06-01

    This paper investigates an approximate method for solving integral equations that arise in few-body problems. The method is to replace the kernel by a degenerate kernel defined on a finite dimensional subspace of piecewise Lagrange polynomials. Numerical accuracy of the method is tested by solving the two-body Lippmann-Schwinger equation with non-separable potentials, and the three-body Amado-Lovelace equation with separable two-body potentials.
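
    The kernel-replacement idea can be illustrated on a one-dimensional Fredholm equation of the second kind; the sketch below uses a quadrature-grid (finite-rank) stand-in rather than the paper's piecewise Lagrange subspace, and the kernel and right-hand side are invented for the example.

        import numpy as np

        # Solve u(x) = f(x) + int_0^1 K(x,y) u(y) dy on a grid:
        # the integral operator becomes the matrix K * W, so (I - K W) u = f.
        n = 40
        x = np.linspace(0.0, 1.0, n)
        w = np.full(n, 1.0 / n)                       # simple rectangle rule
        K = np.exp(-np.abs(x[:, None] - x[None, :]))  # smooth model kernel
        f = np.sin(np.pi * x)

        u = np.linalg.solve(np.eye(n) - K * w[None, :], f)
        print("u(0.5) approx:", u[n // 2])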

  17. Enzymatic treatment of peanut kernels to reduce allergen levels.

    PubMed

    Yu, Jianmei; Ahmedna, Mohamed; Goktepe, Ipek; Cheng, Hsiaopo; Maleki, Soheila

    2011-08-01

    This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernels as affected by processing conditions. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicators of process effectiveness. Enzymatic treatment effectively reduced Ara h 1 and Ara h 2 in roasted peanut kernels by up to 100% under optimal conditions. For instance, treatment of roasted peanut kernels with α-chymotrypsin and trypsin for 1-3 h significantly increased the solubility of peanut protein while reducing Ara h 1 and Ara h 2 in peanut kernel extracts by 100% and 98%, respectively, based on ELISA readings. Ara h 1 and Ara h 2 levels in peanut protein extracts were inversely correlated with protein solubility in roasted peanut. Blanching of kernels enhanced the effectiveness of enzyme treatment in roasted peanuts but not in raw peanuts. The optimal enzyme concentration was determined by response surface methodology to be in the range of 0.1-0.2%. No consistent results were obtained for raw peanut kernels, since Ara h 1 and Ara h 2 increased in peanut protein extracts under some treatment conditions and decreased in others. PMID:25214091

  18. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  19. A Gabor-Block-Based Kernel Discriminative Common Vector Approach Using Cosine Kernels for Human Face Recognition

    PubMed Central

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include the cosine kernel function in the discriminating method. The KDCV with the cosine kernels is then applied on the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and the FERET database demonstrate the effectiveness of this new approach. PMID:23365559

  20. Volcano clustering determination: Bivariate Gauss vs. Fisher kernels

    NASA Astrophysics Data System (ADS)

    Cañón-Tapia, Edgardo

    2013-05-01

    Underlying many studies of volcano clustering is the implicit assumption that vent distribution can be studied by using kernels originally devised for distributions on plane surfaces. Nevertheless, an important change in topology in the volcanic context is related to the distortion that is introduced when attempting to represent features found on the surface of a sphere that are being projected into a plane. This work explores the extent to which different topologies of the kernel used to study the spatial distribution of vents can introduce significant changes in the obtained density functions. To this end, a planar (Gauss) and a spherical (Fisher) kernel are mutually compared. The role of the smoothing factor in these two kernels is also explored in some detail. The results indicate that the topology of the kernel is not extremely influential, and that either type of kernel can be used to characterize a plane or a spherical distribution with exactly the same detail (provided that a suitable smoothing factor is selected in each case). It is also shown that there is a limitation on the resolution of the Fisher kernel relative to the typical separation between data that can be accurately described, because data sets with separations lower than 500 km are considered as a single cluster using this method. In contrast, the Gauss kernel can provide adequate resolutions for vent distributions at a wider range of separations. In addition, this study also shows that the numerical value of the smoothing factor (or bandwidth) of both the Gauss and Fisher kernels has no unique or direct relationship with the relevant separation among data. In order to establish the relevant distance, it is necessary to take into consideration the value of the respective smoothing factor together with a level of statistical significance at which the contributions to the probability density function will be analyzed. Based on such reference level, it is possible to create a hierarchy of

  1. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
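
    A minimal sketch of the core recipe, assuming a plain Euclidean Gram matrix in place of Gabor features: raise the inner products to a fractional power (which may break positive semidefiniteness) and keep only the eigenvectors with positive eigenvalues.

        import numpy as np

        def frac_poly_gram(X, d=0.8):
            # Sign-preserving fractional power of the Gram matrix; this need
            # not be positive semidefinite, hence the filtering below.
            G = X @ X.T
            return np.sign(G) * np.abs(G) ** d

        rng = np.random.default_rng(5)
        X = rng.normal(size=(50, 10))
        K = frac_poly_gram(X)

        # Center the Gram matrix, then keep only positive-eigenvalue directions
        # so that the extracted kernel PCA features are real-valued.
        n = len(K)
        J = np.eye(n) - np.ones((n, n)) / n
        w, V = np.linalg.eigh(J @ K @ J)
        keep = w > 1e-10
        features = V[:, keep] * np.sqrt(w[keep])
        print("retained components:", keep.sum())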

  2. Thermal-to-visible face recognition using multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.

    2014-06-01

    Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem, due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions or blocks using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and utilized to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), which is an MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier for each of the subjects in the gallery. We also apply equal kernel weights (non-sparse) and obtain one-vs-all SVM models for the same subjects in the gallery. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting using a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.

  3. Protein fold recognition using geometric kernel data fusion

    PubMed Central

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-01-01

    Motivation: Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. Results: We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. Availability and implementation: The MATLAB code for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/ Contact: pooyapaydar@gmail.com or yves
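
    One of the geometry-inspired means referred to above is the matrix geometric mean of two symmetric positive definite kernel matrices; a sketch follows, with a small regularization constant added as a numerical-safety assumption.

        import numpy as np
        from scipy.linalg import sqrtm

        def geometric_mean_kernel(K1, K2, eps=1e-8):
            # K1 # K2 = K1^(1/2) (K1^(-1/2) K2 K1^(-1/2))^(1/2) K1^(1/2),
            # a non-linear alternative to a convex combination of kernels.
            K1 = K1 + eps * np.eye(len(K1))
            R = sqrtm(K1)
            Rinv = np.linalg.inv(R)
            return np.real(R @ sqrtm(Rinv @ K2 @ Rinv) @ R)

        rng = np.random.default_rng(6)
        A, B = rng.normal(size=(2, 20, 20))
        K1, K2 = A @ A.T + np.eye(20), B @ B.T + np.eye(20)
        # The geometric mean is symmetric in its arguments:
        print(np.allclose(geometric_mean_kernel(K1, K2),
                          geometric_mean_kernel(K2, K1), atol=1e-6))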

  4. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolites in plants. AQS often occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  5. Lesser grain borers, Rhyzopertha dominica, select rough rice kernels with cracked hulls for reproduction.

    PubMed

    Kavallieratos, Nickolas G; Athanassiou, Christos G; Arthur, Frank H; Throne, James E

    2012-01-01

    Tests were conducted to determine whether the lesser grain borer, Rhyzopertha dominica (F.) (Coleoptera: Bostrychidae), selects rough rice (Oryza sativa L. (Poales: Poaceae)) kernels with cracked hulls for reproduction when these kernels are mixed with intact kernels. Differing amounts of kernels with cracked hulls (0, 5, 10, and 20%) of the varieties Francis and Wells were mixed with intact kernels, and the number of adult progeny emerging from intact kernels and from kernels with cracked hulls was determined. The Wells variety had been previously classified as tolerant to R. dominica, while the Francis variety was classified as moderately susceptible. Few F1 progeny were produced in Wells regardless of the percentage of kernels with cracked hulls, few of the kernels with cracked hulls had emergence holes, and little frass was produced from feeding damage. At 10 and 20% kernels with cracked hulls, progeny production, the number of emergence holes in kernels with cracked hulls, and the amount of frass were greater in Francis than in Wells. The proportion of progeny emerging from kernels with cracked hulls increased as the proportion of kernels with cracked hulls increased. The results indicate that R. dominica selects kernels with cracked hulls for reproduction.

  6. Travel-time sensitivity kernels in long-range propagation.

    PubMed

    Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A

    2009-11-01

    Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.

  7. Characterization of the desiccation of wheat kernels by multivariate imaging.

    PubMed

    Jaillais, B; Perrin, E; Mangavel, C; Bertrand, D

    2011-06-01

    Variations in the quality of wheat kernels can be an important problem in the cereal industry. In particular, desiccation conditions play an essential role in both the technological characteristics of the kernel and its ability to sprout. In planta desiccation constitutes a key stage in determining the functional properties of seeds. The impact of desiccation on the endosperm texture of the seed is presented in this work. A simple imaging system had previously been developed to acquire multivariate images in order to characterize the heterogeneity of food materials. A special algorithm based on principal component analysis (PCA) was developed to process the acquired multivariate images. Wheat grains were collected at physiological maturity and subjected to two types of drying conditions that induced different kinetics of water loss. A data set containing 24 images (702 × 524 pixels) corresponding to the different desiccation stages of wheat kernels was acquired at different wavelengths and then analyzed. A comparison of the images of kernel sections highlighted changes in kernel texture as a function of the drying conditions. Slow drying led to a floury texture, whereas fast drying caused a glassy texture. The automated imaging system thus developed is sufficiently rapid and economical to enable the characterization of grain texture in large collections as a function of time and water content.

  8. Boundary conditions for gas flow problems from anisotropic scattering kernels

    NASA Astrophysics Data System (ADS)

    To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline

    2015-10-01

    The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity and temperature, and the discontinuities including velocity slip and temperature jump at the wall, are obtained. Two scattering kernels, the Dadzie and Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.

  9. [Utilizable value of wild economic plant resource--acorn kernel].

    PubMed

    He, R; Wang, K; Wang, Y; Xiong, T

    2000-04-01

    Peking white breeding hens were selected. The true metabolizable energy (TME) method was used to evaluate the available nutritive value of acorn kernel, with maize and rice as controls. The results showed that the contents of gross energy (GE), apparent metabolizable energy (AME), true metabolizable energy (TME) and crude protein (CP) in the acorn kernel were 16.53 MJ.kg-1, 11.13 MJ.kg-1, 11.66 MJ.kg-1 and 10.63%, respectively. The apparent availability and true availability of crude protein were 45.55% and 49.83%. The gross contents of the 17 amino acids, and of the essential and semiessential amino acids, were 9.23% and 4.84%. The true availability of amino acids and the content of true available amino acids were 60.85% and 6.09%. The contents of tannin and hydrocyanic acid in acorn kernel were 4.55% and 0.98%. The available nutritive value of acorn kernel is similar to or slightly lower than that of maize, but slightly higher than that of rice. Acorn kernel is a wild economic plant resource worth exploiting and utilizing, but it contains relatively high levels of tannin and hydrocyanic acid. PMID:11767593

  10. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  11. Aleurone cell identity is suppressed following connation in maize kernels.

    PubMed

    Geisler-Lee, Jane; Gallie, Daniel R

    2005-09-01

    Expression of the cytokinin-synthesizing isopentenyl transferase enzyme under the control of the Arabidopsis (Arabidopsis thaliana) SAG12 senescence-inducible promoter reverses the normal abortion of the lower floret from a maize (Zea mays) spikelet. Following pollination, the upper and lower floret pistils fuse, producing a connated kernel with two genetically distinct embryos and the endosperms fused along their abgerminal face. Therefore, ectopic synthesis of cytokinin was used to position two independent endosperms within a connated kernel to determine how the fused endosperm would affect the development of the two aleurone layers along the fusion plane. Examination of the connated kernel revealed that aleurone cells were present for only a short distance along the fusion plane whereas starchy endosperm cells were present along most of the remainder of the fusion plane, suggesting that aleurone development is suppressed when positioned between independent starchy endosperms. Sporadic aleurone cells along the fusion plane were observed and may have arisen from late or imperfect fusion of the endosperms of the connated kernel, supporting the observation that a peripheral position at the surface of the endosperm and not proximity to maternal tissues such as the testa and pericarp are important for aleurone development. Aleurone mosaicism was observed in the crown region of nonconnated SAG12-isopentenyl transferase kernels, suggesting that cytokinin can also affect aleurone development.

  12. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and the usefulness of our approach.

  13. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    PubMed Central

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques make it possible to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970

  14. Kernel Manifold Alignment for Domain Adaptation.

    PubMed

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensors characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of a same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across-domains and data synthesis. To authors' knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational

  16. Scalar heat kernel with boundary in the worldline formalism

    NASA Astrophysics Data System (ADS)

    Bastianelli, Fiorenzo; Corradini, Olindo; Pisani, Pablo A. G.; Schubert, Christian

    2008-10-01

    The worldline formalism has in recent years emerged as a powerful tool for the computation of effective actions and heat kernels. However, implementing nontrivial boundary conditions in this formalism has turned out to be a difficult problem. Recently, such a generalization was developed for the case of a scalar field on the half-space R_+ × R^(D-1), based on an extension of the associated worldline path integral to the full R^D using image charges. We present here an improved version of this formalism which allows us to write down non-recursive master formulas for the n-point contribution to the heat kernel trace of a scalar field on the half-space with Dirichlet or Neumann boundary conditions. These master formulas are suitable for computerization. We demonstrate the efficiency of the formalism by a calculation of two new heat-kernel coefficients for the half-space, a_4 and a_(9/2).

  17. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expressions is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function performs well in terms of emotion recognition accuracy. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
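
    The weighting idea can be sketched as follows, with the per-subregion weights assumed to be precomputed from subregion recognition rates; the grouping into three blocks is purely illustrative.

        import numpy as np

        def weighted_gaussian_kernel(x, z, weights, sigma=1.0):
            # x, z: lists of per-subregion feature vectors; each subregion's
            # squared distance is scaled by its recognition-rate-derived weight.
            d2 = sum(w * np.sum((xs - zs) ** 2)
                     for w, xs, zs in zip(weights, x, z))
            return np.exp(-d2 / (2.0 * sigma ** 2))

        rng = np.random.default_rng(7)
        x = [rng.normal(size=10) for _ in range(3)]   # e.g. eyes, nose, mouth
        z = [rng.normal(size=10) for _ in range(3)]
        print(weighted_gaussian_kernel(x, z, weights=[0.5, 0.3, 0.2]))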

  18. Improved Online Support Vector Machines Spam Filtering Using String Kernels

    NASA Astrophysics Data System (ADS)

    Amayri, Ola; Bouguila, Nizar

    A major bottleneck in electronic communications is the enormous dissemination of spam emails. Developing suitable filters that can adequately capture those emails while achieving a high performance rate has become a main concern. Support vector machines (SVMs) have made a large contribution to the development of spam email filtering. Based on SVMs, the crucial problems in email classification are the feature mapping of input emails and the choice of kernels. In this paper, we present a thorough investigation of several distance-based kernels, propose the use of string kernels, and demonstrate their efficiency in blocking spam emails. We detail feature mapping variants in text classification (TC) that yield improved performance for standard SVMs in the filtering task. Furthermore, to cope with real-time scenarios we propose an online active framework for spam filtering.
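
    As one concrete member of the string kernel family (the paper investigates several variants), the k-spectrum kernel scores two texts by their shared k-grams and can be plugged into an SVM as a precomputed kernel.

        from collections import Counter

        def spectrum_kernel(s, t, k=3):
            # Inner product of the k-gram count vectors of the two strings.
            cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
            ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
            return sum(cs[g] * ct[g] for g in cs.keys() & ct.keys())

        print(spectrum_kernel("cheap meds online now", "buy cheap meds online"))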

  19. Identification of nonlinear optical systems using adaptive kernel methods

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Changjiang; Zhang, Haoran; Feng, Genliang; Xu, Xiuling

    2005-12-01

    An identification approach for nonlinear optical dynamic systems, based on adaptive kernel methods that are a modified version of the least squares support vector machine (LS-SVM), is presented in order to obtain the reference dynamic model for solving real-time applications such as adaptive signal processing of optical systems. The feasibility of this approach is demonstrated with computer simulation through identifying a Bragg acousto-optic bistable system. Unlike artificial neural networks, the adaptive kernel methods possess prominent advantages: overfitting is unlikely to occur because the structural risk minimization criterion is employed, and the globally optimal solution can be uniquely obtained because training is performed by solving a set of linear equations. The adaptive kernel methods also remain effective for nonlinear optical systems when system parameters vary. This method is robust with respect to noise, and it constitutes another powerful tool for the identification of nonlinear optical systems.
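
    The property that training reduces to one linear solve can be seen in a generic LS-SVM regression sketch (Python, Gaussian kernel; the paper's adaptive modifications are not reproduced here):

        import numpy as np

        def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
            # Standard LS-SVM regression: the dual problem is the single
            # linear system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y],
            # hence a unique global solution.
            n = len(y)
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-d2 / (2.0 * sigma ** 2))
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
            return sol[1:], sol[0]          # dual weights a, bias b

        def lssvm_predict(Xtrain, a, b, Xq, sigma=1.0):
            d2 = ((Xq[:, None, :] - Xtrain[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2)) @ a + b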

  20. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
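
    The geometric idea, independent smoothing lengths along and perpendicular to a preferential axis, can be sketched with a Gaussian-shaped kernel (Python; production SPH kernels are compactly supported splines, so this is illustrative only):

        import numpy as np

        def spheroidal_kernel(dx, h_perp, h_axis, axis=2):
            # dx: (n, 3) particle separations. The smoothing length along
            # `axis` (h_axis) evolves independently of the smoothing
            # length in the perpendicular plane (h_perp), giving the
            # kernel a spheroidal rather than spherical shape.
            dx = np.atleast_2d(dx)
            h = np.array([h_perp, h_perp, h_perp], dtype=float)
            h[axis] = h_axis
            q2 = ((dx / h) ** 2).sum(axis=1)
            norm = 1.0 / (np.pi ** 1.5 * h_perp * h_perp * h_axis)
            return norm * np.exp(-q2)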

  1. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  2. Recurrent kernel machines: computing with infinite echo state networks.

    PubMed

    Hermans, Michiel; Schrauwen, Benjamin

    2012-01-01

    Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.

  3. Compression loading behaviour of sunflower seeds and kernels

    NASA Astrophysics Data System (ADS)

    Selvam, Thasaiya A.; Manikantan, Musuvadi R.; Chand, Tarsem; Sharma, Rajiv; Seerangurayar, Thirupathi

    2014-10-01

    The present study was carried out to investigate the compression loading behaviour of five Indian sunflower varieties (NIRMAL-196, NIRMAL-303, CO-2, KBSH-41, and PSH- 996) under four different moisture levels (6-18% d.b). The initial cracking force, mean rupture force, and rupture energy were measured as a function of moisture content. The observed results showed that the initial cracking force decreased linearly with an increase in moisture content for all varieties. The mean rupture force also decreased linearly with an increase in moisture content. However, the rupture energy was found to be increasing linearly for seed and kernel with moisture content. NIRMAL-196 and PSH-996 had maximum and minimum values of all the attributes studied for both seed and kernel, respectively. The values of all the studied attributes were higher for seed than kernel of all the varieties at all moisture levels. There was a significant effect of moisture and variety on compression loading behaviour.

  5. Broadband Waveform Sensitivity Kernels for Large-Scale Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Stähler, S. C.; van Driel, M.; Hosseini, K.; Auer, L.; Sigloch, K.

    2015-12-01

    Seismic sensitivity kernels, i.e. the basis for mapping misfit functionals to structural parameters in seismic inversions, have received much attention in recent years. Their computation has been conducted via ray-theory based approaches (Dahlen et al., 2000) or fully numerical solutions based on the adjoint-state formulation (e.g. Tromp et al., 2005). The core problem is the exorbitant computational cost due to the large number of source-receiver pairs, each of which requires solutions to the forward problem. This is exacerbated in the high-frequency regime where numerical solutions become prohibitively expensive. We present a methodology to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (abstract ID# 77891, www.axisem.info), and thus on spherically symmetric models. As a consequence of this method's numerical efficiency even in high-frequency regimes, kernels can be computed in a time- and frequency-dependent manner, thus providing the full generic mapping from perturbed waveform to perturbed structure. Such waveform kernels can then be used for a variety of misfit functions and structural parameters and can be refiltered into bandpasses without recomputing any wavefields. A core component of the kernel method presented here is the mapping from numerical wavefields to inversion meshes. This is achieved by a Monte-Carlo approach, allowing for convergent and controllable accuracy on arbitrarily shaped tetrahedral and hexahedral meshes. We test and validate this accuracy by comparing to reference traveltimes, show the projection onto various locally adaptive inversion meshes, and discuss computational efficiency for ongoing tomographic applications involving millions of observed body-wave data at periods of 2-30 s.

  6. Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.

    2010-04-01

    Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on spectral signatures. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field inoculated corn kernels were used in the study. Contaminated and control kernels under long wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had a slightly better performance, with accuracy equal to 87% or 88% when 20 ppb or 100 ppb was used as the classification threshold, respectively.

  7. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with the Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
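
    The recommendation pipeline can be sketched as ordinary multi-label meta-learning (Python/scikit-learn; the meta-features, labels, and base learner below are placeholders, not the paper's exact choices):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.multioutput import MultiOutputClassifier

        rng = np.random.default_rng(0)
        # Hypothetical meta-knowledge base: one row of data-set
        # characteristics, one binary label per candidate kernel
        # marking it "applicable".
        meta_X = rng.random((132, 5))                # 5 meta-features per data set
        meta_Y = rng.integers(0, 2, size=(132, 11))  # 11 candidate kernels

        recommender = MultiOutputClassifier(RandomForestClassifier(random_state=0))
        recommender.fit(meta_X, meta_Y)

        new_features = rng.random((1, 5))            # characteristics of a new data set
        print(recommender.predict(new_features))     # recommended-kernel mask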

  8. The Effects of Kernel Feeding by Halyomorpha halys (Hemiptera: Pentatomidae) on Commercial Hazelnuts.

    PubMed

    Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M

    2014-10-01

    Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development.

  9. Heat kernel smoothing using Laplace-Beltrami eigenfunctions.

    PubMed

    Seo, Seongho; Chung, Moo K; Vorperian, Houri K

    2010-01-01

    We present a novel surface smoothing framework using the Laplace-Beltrami eigenfunctions. The Green's function of an isotropic diffusion equation on a manifold is constructed as a linear combination of the Laplace-Beltrami eigenfunctions. The Green's function is then used in constructing heat kernel smoothing. Unlike many previous approaches, diffusion is analytically represented as a series expansion, avoiding numerical instability and inaccuracy issues. This proposed framework is illustrated with mandible surfaces, and is compared to a widely used iterative kernel smoothing technique in computational anatomy. The MATLAB source code is freely available at http://brainimaging.waisman.wisc.edu/chung/lb.
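
    The series form is compact enough to state in a few lines (Python sketch; assumes the eigenfunctions in `eigvecs` are orthonormal with respect to the mesh inner product):

        import numpy as np

        def heat_kernel_smooth(f, eigvals, eigvecs, t):
            # f: (n,) signal on mesh vertices; eigvals: (k,) Laplace-Beltrami
            # eigenvalues; eigvecs: (n, k) eigenfunctions sampled at vertices.
            # Series form of heat kernel smoothing:
            #   f_t = sum_j exp(-lambda_j * t) <f, psi_j> psi_j
            coeffs = eigvecs.T @ f                        # <f, psi_j>
            return eigvecs @ (np.exp(-eigvals * t) * coeffs)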

  10. Optical remote sensor for peanut kernel abortion classification.

    PubMed

    Ozana, Nisan; Buchsbaum, Stav; Bishitz, Yael; Beiderman, Yevgeny; Schmilovitch, Zeev; Schwarz, Ariel; Shemer, Amir; Keshet, Joseph; Zalevsky, Zeev

    2016-05-20

    In this paper, we propose a simple, inexpensive optical device for remote measurement of various agricultural parameters. The sensor is based on temporal tracking of backreflected secondary speckle patterns generated when illuminating a plant with a laser and while applying periodic acoustic-based pressure stimulation. By analyzing different parameters using a support-vector-machine-based algorithm, peanut kernel abortion can be detected remotely. This paper presents experimental tests which are the first step toward an implementation of a noncontact device for the detection of agricultural parameters such as kernel abortion. PMID:27411126

  11. An information theoretic approach of designing sparse kernel adaptive filters.

    PubMed

    Liu, Weifeng; Park, Il; Principe, José C

    2009-12-01

    This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented. PMID:19923047

  12. Source identity and kernel functions for Inozemtsev-type systems

    NASA Astrophysics Data System (ADS)

    Langmann, Edwin; Takemura, Kouichi

    2012-08-01

    The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BC_N trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.

  13. FUV Continuum in Flare Kernels Observed by IRIS

    NASA Astrophysics Data System (ADS)

    Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna

    2016-05-01

    Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross sectional area are provided by cotemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.

  15. Iris Image Blur Detection with Multiple Kernel Learning

    NASA Astrophysics Data System (ADS)

    Pan, Lili; Xie, Mei; Mao, Ling

    In this letter, we analyze the influence of motion and out-of-focus blur on both frequency spectrum and cepstrum of an iris image. Based on their characteristics, we define two new discriminative blur features represented by Energy Spectral Density Distribution (ESDD) and Singular Cepstrum Histogram (SCH). To merge the two features for blur detection, a merging kernel which is a linear combination of two kernels is proposed when employing Support Vector Machine. Extensive experiments demonstrate the validity of our method by showing the improved blur detection performance on both synthetic and real datasets.
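
    A linear combination of two valid kernels is itself a valid kernel, which is all the merging step needs; a minimal sketch (Python; the RBF/linear stand-ins and the mixing weight are illustrative, not the letter's ESDD/SCH kernels):

        import numpy as np
        from sklearn.svm import SVC

        def merged_kernel(k1, k2, beta=0.5):
            # Convex combination of two positive-definite kernels is
            # itself positive definite; beta is a hypothetical weight.
            def kernel(X, Y):
                return beta * k1(X, Y) + (1.0 - beta) * k2(X, Y)
            return kernel

        # Stand-ins for kernels on the two blur-feature vectors:
        rbf = lambda X, Y: np.exp(-0.5 * ((X[:, None] - Y[None, :]) ** 2).sum(-1))
        lin = lambda X, Y: X @ Y.T
        clf = SVC(kernel=merged_kernel(rbf, lin, beta=0.7))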

  16. Higher-order Lipatov kernels and the QCD Pomeron

    SciTech Connect

    White, A.R.

    1994-08-12

    Three closely related topics are covered: the derivation of O(g^4) Lipatov kernels in pure glue QCD; the significance of quarks for the physical Pomeron in QCD; and the possible inter-relation of Pomeron dynamics with Electroweak symmetry breaking.

  17. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    SciTech Connect

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
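
    The search-based strategy itself is simple; in miniature (Python, with a toy tunable parameter standing in for real choices such as cache blocking, loop unrolling, or SIMDization):

        import timeit
        import numpy as np

        def blocked_sum(u, block):
            # Hypothetical tunable kernel: reduce the array in blocks
            # of a given size.
            return sum(u[i:i + block].sum() for i in range(0, len(u), block))

        # Enumerate the parameter space, benchmark each variant, keep
        # the fastest -- a code generator scales this same loop to the
        # much larger spaces explored in the paper.
        u = np.random.rand(1_000_000)
        candidates = [2 ** k for k in range(10, 21)]
        timings = {b: timeit.timeit(lambda: blocked_sum(u, b), number=3)
                   for b in candidates}
        print("fastest block size:", min(timings, key=timings.get))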

  18. Metabolite identification through multiple kernel learning on fragmentation trees

    PubMed Central

    Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

    2014-01-01

    Motivation: Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Results: Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. Contact: huibin.shen@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931979

  19. Enzymatic treatment of peanut kernels to reduce allergen levels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernel by processing conditions, such as, pretreatment with heat and proteolysis at different enzyme concentrations and treatment times. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicator...

  20. Popping the Kernel Modeling the States of Matter

    ERIC Educational Resources Information Center

    Hitt, Austin; White, Orvil; Hanson, Debbie

    2005-01-01

    This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…

  1. Music emotion detection using hierarchical sparse kernel machines.

    PubMed

    Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching

    2014-01-01

    For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify if a music clip possesses happiness emotion or not. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features are extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being a significant value of an emotion. The significant values of eight main emotional classes are utilized in this paper. To calculate the significant value of an emotion, we construct its 2-class SVM with calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we construct a 2-class relevance vector machine (RVM) with happiness as the target side and other emotions as the background side of the RVM. The first-level decision vector is used as the feature with the conventional radial basis function kernel. The happiness verification threshold is built on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system has a good performance on verifying if a music clip reveals happiness emotion.

  2. High-Speed Tracking with Kernelized Correlation Filters.

    PubMed

    Henriques, João F; Caseiro, Rui; Martins, Pedro; Batista, Jorge

    2015-03-01

    The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies: any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF) that, unlike other kernel algorithms, has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call the dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a benchmark of 50 videos, despite running at hundreds of frames per second and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source. PMID:26353263
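
    The Fourier-domain trick can be shown in a 1-D sketch (Python; the published KCF operates on 2-D multi-channel patches, and shift/conjugation conventions are simplified here):

        import numpy as np

        def gaussian_correlation(x, z, sigma=0.5):
            # Kernel values between x and every cyclic shift of z,
            # computed with one FFT-based cross-correlation.
            c = np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(z)).real
            d2 = (x @ x + z @ z - 2.0 * c).clip(min=0)
            return np.exp(-d2 / sigma ** 2)

        def kcf_train(x, y, lam=1e-4):
            # Circulant structure turns kernel ridge regression into one
            # elementwise division: alpha_hat = y_hat / (k_hat^{xx} + lam).
            k = gaussian_correlation(x, x)
            return np.fft.fft(y) / (np.fft.fft(k) + lam)

        def kcf_detect(alpha_hat, x, z):
            # Response over all cyclic shifts of the test signal z.
            k = gaussian_correlation(x, z)
            return np.fft.ifft(alpha_hat * np.fft.fft(k)).real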

  3. Notes on a storage manager for the Clouds kernel

    NASA Technical Reports Server (NTRS)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  4. Uniqueness Result in the Cauchy Dirichlet Problem via Mehler Kernel

    NASA Astrophysics Data System (ADS)

    Dhungana, Bishnu P.

    2014-09-01

    Using the Mehler kernel, a uniqueness theorem in the Cauchy Dirichlet problem for the Hermite heat equation with homogeneous Dirichlet boundary conditions on a class P of bounded functions U(x, t) with certain growth on U_x(x, t) is established.

  6. PERI - auto-tuning memory-intensive kernels for multicore

    NASA Astrophysics Data System (ADS)

    Williams, S.; Datta, K.; Carter, J.; Oliker, L.; Shalf, J.; Yelick, K.; Bailey, D.

    2008-07-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4× improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  7. Microwave moisture meter for in-shell peanut kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A microwave moisture meter built with off-the-shelf components was developed, calibrated and tested in the laboratory and in the field for nondestructive and instantaneous in-shell peanut kernel moisture content determination from dielectric measurements on unshelled peanut pod samples. The meter ...

  8. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2008-03-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
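
    For reference, the theorem's standard statement (a textbook form, not quoted from the report): if a linear functional E annihilates all polynomials of degree at most n, then for f ∈ C^{n+1}[a, b],

        \[
          E(f) = \int_a^b K(t)\, f^{(n+1)}(t)\, dt,
          \qquad
          K(t) = \frac{1}{n!}\, E_x\!\left[ (x - t)_+^{\,n} \right],
        \]

    so |E(f)| is bounded by max |f^{(n+1)}| times ∫ |K(t)| dt, which is the kind of simple, accurate error estimate the report applies to noise-reduction filters.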

  9. Stereotype Measurement and the "Kernel of Truth" Hypothesis.

    ERIC Educational Resources Information Center

    Gordon, Randall A.

    1989-01-01

    Describes a stereotype measurement suitable for classroom demonstration. Illustrates C. McCauley and C. L. Stitt's diagnostic ratio measure and examines the validity of the "kernel of truth" hypothesis. Uses this as a starting point for class discussion. Reports results and gives suggestions for discussion of related concepts. (Author/NL)

  10. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  11. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  12. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  13. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  14. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  15. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...

  17. Matrix kernels for MEG and EEG source localization and imaging

    SciTech Connect

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1994-12-31

    The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.

  18. The Stokes problem for the ellipsoid using ellipsoidal kernels

    NASA Technical Reports Server (NTRS)

    Zhu, Z.

    1981-01-01

    A brief review of Stokes' problem for the ellipsoid as a reference surface is given. Another solution of the problem using an ellipsoidal kernel, which represents an iterative form of Stokes' integral, is suggested with a relative error of the order of the flattening. Rapp's method is studied in detail, and procedures for improving its convergence are discussed.

  19. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
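
    The two discretization methods differ only in whether the density is sampled at cell centers or integrated over cells; a 1-D sketch (Python) makes the contrast at small σ easy to reproduce:

        import numpy as np
        from math import erf

        def gaussian_cdf(x, sigma):
            return 0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0))))

        def kernel_cell_center(radius, sigma):
            # Sample the Gaussian density at each cell center.
            x = np.arange(-radius, radius + 1, dtype=float)
            k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
            return k / k.sum()

        def kernel_cell_integration(radius, sigma):
            # Integrate the Gaussian over each unit cell via its CDF,
            # which is what reduces discretization error at small sigma.
            edges = np.arange(-radius - 0.5, radius + 1.5)
            k = np.diff([gaussian_cdf(e, sigma) for e in edges])
            return k / k.sum()

        print(kernel_cell_center(3, 0.2).round(4))
        print(kernel_cell_integration(3, 0.2).round(4))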

  20. Genome Mapping of Kernel Characteristics in Hard Red Spring Wheat Breeding Lines

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Kernel characteristics, particularly kernel weight, kernel size, and grain protein content, are important components of grain yield and quality in wheat. Development of high performing wheat cultivars, with high grain yield and quality, is a major focus in wheat breeding programs worldwide. Here, we...

  1. Low Cost Real-Time Sorting of in Shell Pistachio Nuts from Kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A high speed sorter for separating pistachio nuts with (in shell) and without (kernels) shells is reported. Testing indicates 95% accuracy in removing kernels from the in shell stream with no false positive results out of 1000 kernels tested. Testing with 1000 each of in shell, shell halves, and ker...

  2. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
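
    The baseline scheme that ASKS/fASKS improve upon can be sketched as a fixed-kernel, iterative subtraction (Python; the kernel itself and the iteration count are placeholders):

        import numpy as np
        from scipy.signal import fftconvolve

        def sks_correct(projection, kernel, n_iter=5):
            # Stationary scatter-kernel-superposition baseline: treat
            # the current primary estimate as the scatter source,
            # convolve with a fixed kernel to predict scatter, subtract
            # from the measurement. ASKS/fASKS instead adapt the kernel
            # shape to local object thickness; only the conventional
            # fixed-kernel scheme is shown here.
            primary = projection.copy()
            for _ in range(n_iter):
                scatter = fftconvolve(primary, kernel, mode="same")
                primary = np.clip(projection - scatter, 0.0, None)
            return primary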

  3. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating

  4. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435

  5. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    PubMed

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

    This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

  6. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  8. Exercise: the lifelong supplement for healthy ageing and slowing down the onset of frailty.

    PubMed

    Viña, Jose; Rodriguez-Mañas, Leocadio; Salvador-Pascual, Andrea; Tarazona-Santabalbina, Francisco José; Gomez-Cabrera, Mari Carmen

    2016-04-15

    The beneficial effects of exercise have been well recognized for over half a century. Dr Jeremy Morris's pioneering studies in the fifties showed a striking difference in cardiovascular disease between the drivers and conductors on the double-decker buses in London. These studies sparked off a vast amount of research on the effects of exercise on health, and the general consensus is that exercise contributes to improved outcomes and treatment for several diseases including osteoporosis, diabetes, depression and atherosclerosis. Evidence of the beneficial effects of exercise is reviewed here. One way of highlighting the impact of exercise on disease is to consider it from the perspective of good practice. However, the intensity, duration, frequency (dosage) and contraindications of the exercise should be taken into consideration to individually tailor the exercise programme. An important case of the beneficial effect of exercise is that of ageing. Ageing is characterized by a loss of homeostatic mechanisms, on many occasions leading to the development of frailty, and hence frailty is one of the major geriatric syndromes and exercise is very useful to mitigate, or at least delay, it. Since exercise is so effective in reducing frailty, we would like to propose that exercise be considered as a supplement to other treatments. People all over the world have been taking nutritional supplements in the hopes of improving their health. We would like to think of exercise as a physiological supplement not only for treating diseases, but also for improving healthy ageing. PMID:26872560

  9. Problematic assumptions have slowed down depression research: why symptoms, not syndromes are the way forward

    PubMed Central

    Fried, Eiko I.

    2015-01-01

    Major depression (MD) is a highly heterogeneous diagnostic category. Diverse symptoms such as sad mood, anhedonia, and fatigue are routinely added to an unweighted sum-score, and cutoffs are used to distinguish between depressed participants and healthy controls. Researchers then investigate outcome variables like MD risk factors, biomarkers, and treatment response in such samples. These practices presuppose that (1) depression is a discrete condition, and that (2) symptoms are interchangeable indicators of this latent disorder. Here I review these two assumptions, elucidate their historical roots, show how deeply engrained they are in psychological and psychiatric research, and document that they contrast with evidence. Depression is not a consistent syndrome with clearly demarcated boundaries, and depression symptoms are not interchangeable indicators of an underlying disorder. Current research practices lump individuals with very different problems into one category, which has contributed to the remarkably slow progress in key research domains such as the development of efficacious antidepressants or the identification of biomarkers for depression. The recently proposed network framework offers an alternative to the problematic assumptions. MD is not understood as a distinct condition, but as heterogeneous symptom cluster that substantially overlaps with other syndromes such as anxiety disorders. MD is not framed as an underlying disease with a number of equivalent indicators, but as a network of symptoms that have direct causal influence on each other: insomnia can cause fatigue which then triggers concentration and psychomotor problems. This approach offers new opportunities for constructing an empirically based classification system and has broad implications for future research. PMID:25852621

  10. Macitentan slows down the dermal fibrotic process in systemic sclerosis: in vitro findings.

    PubMed

    Corallo, C; Pecetti, G; Iglarz, M; Volpi, N; Franci, D; Montella, A; D' Onofrio, F; Nuti, R; Giordano, N

    2013-01-01

    Systemic sclerosis (or scleroderma) is an autoimmune disease characterized by skin and internal organ fibrosis, caused by microvascular dysfunction. The microvascular damage seems to be a consequence of an endothelial autoimmune response, followed by activation of the inflammatory cascade and massive deposition of collagen. Endothelin-1 (ET-1) contributes to the inflammatory and fibrotic processes by increasing the concentration of pro-inflammatory and pro-fibrotic cytokines, and it is considered one of the most relevant mediators of vascular damage in scleroderma. It is indeed found in very high concentration in the serum of sclerodermic patients. Moreover, in these pathological conditions there is an increased expression of ET-1 receptors (ETA and ETB), which mediate the detrimental action of ET-1, and often a change in the ETA/ETB ratio. The aim of the present study is to evaluate the in vitro effect of macitentan, an orally active tissue-targeting dual endothelin receptor antagonist, and its major metabolite (ACT-132577) on alpha smooth muscle actin (αSMA) expression, evaluated on dermal fibroblasts from healthy subjects and on dermal fibroblasts from lesional and non-lesional skin from sclerodermic patients. The combination of macitentan and its major metabolite reduced the levels of αSMA after 48 h in sclerodermic fibroblasts from lesional skin. No relevant changes in αSMA levels were found in fibroblasts from non-lesional skin, whose behavior is similar to that of dermal fibroblasts from healthy subjects.

  11. Driving through the Great Recession: Why does motor vehicle fatality decrease when the economy slows down?

    PubMed

    He, Monica M

    2016-04-01

    The relationship between short-term macroeconomic growth and temporary mortality increases remains strongest for motor vehicle (MV) crashes. In this paper, I investigate the mechanisms that explain falling MV fatality rates during the recent Great Recession. Using U.S. state-level panel data from 2003 to 2013, I first estimate the relationship between unemployment and the MV fatality rate and then decompose it into risk and exposure factors for different types of MV crashes. Results reveal a significant 2.9 percent decrease in the MV fatality rate for each percentage point increase in the unemployment rate. This relationship is almost entirely explained by changes in the risk of driving rather than exposure to the amount of driving, and is particularly robust for crashes involving large commercial trucks, multiple vehicles, and speeding cars. These findings provide evidence suggesting that traffic patterns directly related to economic activity lead to a higher risk of MV fatalities when the economy improves.
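
    To make the headline estimate concrete, here is a back-of-the-envelope calculation (my illustration, not the paper's): applying the estimated semi-elasticity of -2.9% per percentage point of unemployment to a hypothetical three-point rise in the unemployment rate gives

    ```latex
    % Fatality-rate ratio after a +3 pp unemployment shock,
    % compounding the -2.9% per-point estimate:
    \[
      (1 - 0.029)^{3} \approx 0.915,
    \]
    % i.e., a motor vehicle fatality rate roughly 8.5% below its baseline.
    ```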

  12. Experimental Therapies and Ongoing Clinical Trials to Slow Down Progression of ADPKD

    PubMed Central

    Irazabal, Maria V.; Torres, Vicente E.

    2014-01-01

    The improvement of imaging techniques over the years has contributed to the understanding of the natural history of autosomal dominant polycystic kidney disease (ADPKD) and facilitated the observation of its structural progression. Advances in molecular biology and genetics have made possible a greater understanding of the genetic, molecular, and cellular pathophysiologic mechanisms responsible for its development and have laid the foundation for the development of potential new therapies. Therapies targeting genetic mechanisms in ADPKD have inherent limitations. As a result, most experimental therapies at the present time are aimed at delaying the growth of the cysts and the associated interstitial inflammation and fibrosis by targeting tubular epithelial cell proliferation and fluid secretion by the cystic epithelium. Several interventions affecting many of the signaling pathways disrupted in ADPKD have been effective in animal models, and some are currently being tested in clinical trials. PMID:23971644

  13. Dispersal evolution in the presence of Allee effects can speed up or slow down invasions.

    PubMed

    Shaw, Allison K; Kokko, Hanna

    2015-05-01

    Successful invasions by sexually reproducing species depend on the ability of individuals to mate. Finding mates can be particularly challenging at low densities (a mate-finding Allee effect), a factor that is only implicitly accounted for by most invasion models, which typically assume asexual populations. Existing theory on single-sex populations suggests that dispersal evolution in the presence of a mate-finding Allee effect slows invasions. Here we develop a two-sex model to determine how mating system, strength of an Allee effect, and dispersal evolution influence invasion speed. We show that mating system differences can dramatically alter the spread rate. We also find a broader spectrum of outcomes than earlier work suggests. Allowing dispersal to evolve in a spreading context can sometimes alleviate the mate-finding Allee effect and slow the rate of spread. However, we demonstrate the opposite when resource competition among females remains high: evolution then acts to speed up the spread rate, despite simultaneously exacerbating the Allee effect. Our results highlight the importance of the timing of mating relative to dispersal and the strength of resource competition for consideration in future empirical studies.

  14. Dynamical traps lead to the slowing down of intramolecular vibrational energy flow

    PubMed Central

    Manikandan, Paranjothy; Keshavamurthy, Srihari

    2014-01-01

    The phenomenon of intramolecular vibrational energy redistribution (IVR) is at the heart of chemical reaction dynamics. Statistical rate theories, assuming instantaneous IVR, predict exponential decay of the population, with the properties of the transition state essentially determining the mechanism. However, there is growing evidence that IVR competes with the reaction timescales, resulting in deviations from the exponential rate law. Dynamics cannot be ignored in such cases for understanding the reaction mechanisms. Significant insights in this context have come from the state space model of IVR, which predicts power-law behavior for the rates, with the power-law exponent, an effective state space dimensionality, serving as a measure of the nature and extent of the IVR dynamics. However, whether the effective IVR dimensionality can vary with time, and whether the mechanism for the variation is of purely quantum or classical origin, are issues that remain unresolved. Such multiple power-law scalings can lead to surprising mode specificity in the system, even above the threshold for facile IVR. In this work, choosing the well-studied thiophosgene molecule as an example, we establish the anisotropic and anomalous nature of the quantum IVR dynamics and show that multiple power-law scalings do manifest in the system. More importantly, we show that the mechanism of the observed multiple power-law scaling has classical origins, due to a combination of trapping near resonance junctions in the network of classical nonlinear resonances at short to intermediate times and the influence of weak higher-order resonances at relatively longer times. PMID:25246538

  15. Slow Down! The Importance of Repetition, Planning, and Recycling in Language Teaching.

    ERIC Educational Resources Information Center

    Brown, Steven

    This paper argues that there is converging evidence for the pedagogical value of planning, repeating, and recycling activities in the language classroom. The paper is divided into three parts. Part one reviews field research done on this topic in Britain and finds some support for the proposition that planning fosters more complex language use and…

  16. Surface modification of an Mg-1Ca alloy to slow down its biocorrosion by chitosan.

    PubMed

    Gu, X N; Zheng, Y F; Lan, Q X; Cheng, Y; Zhang, Z X; Xi, T F; Zhang, D Y

    2009-08-01

    The surface morphologies before and after immersion corrosion testing of various chitosan-coated Mg-1Ca alloy samples were studied to investigate the effect of chitosan dip coating on slowing down biocorrosion. The results showed that the corrosion resistance of the Mg-Ca alloy increased after coating with chitosan, and depended on both the chitosan molecular weight and the number of coating layers. The Mg-Ca alloy coated with six layers of chitosan with a molecular weight of 2.7 × 10^5 had a smooth and intact surface morphology and exhibited the highest corrosion resistance in simulated body fluid.

  17. Herbivory and competition slow down invasion of a tall grass along a productivity gradient.

    PubMed

    Kuijper, D P J; Nijhoff, D J; Bakker, J P

    2004-11-01

    Competition models including competition for light predict that small plant species preferred by herbivores will be outshaded by taller, unpreferred plant species with increasing productivity. When the tall plant species is little grazed by the herbivores, it can easily invade and dominate short vegetation. The tall-growing grass Elymus athericus dominates the highly productive stages of a salt-marsh succession on Schiermonnikoog and is not preferred by the herbivores occurring there, hares and geese. We studied how interspecific competition and herbivory affected the performance of this species during early establishment with increasing productivity. Seedlings were planted in the field in a full factorial design, manipulating both interspecific competition and herbivory. The experiment was replicated along a natural productivity gradient. Competition reduced above-ground biomass production and decreased the number of ramets that were produced, but did not affect survival of seedlings. The negative effects of competition on seedling performance increased with increasing productivity. In contrast to our expectations, herbivory strongly reduced seedling survival, especially at the unproductive sites, and had only small effects on seedling growth. The present study shows that unpreferred tall-growing species cannot easily invade vegetation composed of short, preferred species. Grazing by (intermediate-sized) herbivores can prevent establishment at unproductive sites, and increased competition can prevent a rapid invasion of highly productive sites. Herbivores can have a long-lasting impact on vegetation succession by preventing the establishment of tall-growing species, such as E. athericus, in a window of opportunity at young, unproductive successional stages.

  18. Criticality in the slowed-down boiling crisis at zero gravity.

    PubMed

    Charignon, T; Lloveras, P; Chatain, D; Truskinovsky, L; Vives, E; Beysens, D; Nikolayev, V S

    2015-05-01

    Boiling crisis is the transition between nucleate and film boiling. It occurs at a threshold value of the heat flux from the heater called the CHF (critical heat flux). Usually, boiling crisis studies are hindered by the high CHF and the short transition duration (below 1 ms). Here we report on experiments in hydrogen near its liquid-vapor critical point, in which the CHF is low and the dynamics slow enough to be resolved. As under such conditions the surface tension is very small, the experiments are carried out in reduced gravity to preserve the conventional bubble geometry. Weightlessness is created artificially in two-phase hydrogen by compensating gravity with magnetic forces. We were able to reveal the fractal structure of the contour of the percolating cluster of the dry areas at the heater that precedes the boiling crisis. We provide a direct statistical analysis of dry spot areas that confirms the boiling crisis at zero gravity as a scale-free phenomenon. We observed that, in agreement with theoretical predictions, the saturated boiling CHF tends to zero (within the precision of our thermal control system) in zero gravity, which suggests that the boiling crisis may be observed at any heat flux provided the experiment lasts long enough. PMID:26066249

  20. Critical Slowing Down in Time-to-Extinction: An Example of Critical Phenomena in Ecology

    NASA Technical Reports Server (NTRS)

    Gandhi, Amar; Levin, Simon; Orszag, Steven

    1998-01-01

    We study a model for two competing species that explicitly accounts for effects due to discreteness, stochasticity and the spatial extension of populations. The two species are equally preferred by the environment and do better when surrounded by others of the same species. We observe that the final outcome depends on the initial densities (uniformly distributed in space) of the two species. The observed phase transition is a continuous one, and key macroscopic quantities like the correlation length of clusters and the time-to-extinction diverge at a critical point. Away from the critical point, the dynamics can be described by a mean-field approximation. Close to the critical point, however, there is a crossover to power-law behavior because of the gross mismatch between the largest and smallest scales in the system. We have developed a theory based on surface effects, which is in good agreement with the observed behavior. The coarse-grained reaction-diffusion system obtained from the mean-field dynamics agrees well with the particle system.

  1. Cluster Concept Dynamics Leading to Creative Ideas Without Critical Slowing Down

    NASA Astrophysics Data System (ADS)

    Goldenberg, Y.; Solomon, S.; Mazursky, D.

    We present algorithmic procedures for systematically generating ideas and solutions to problems which are perceived as creative. Our method consists of identifying and characterizing the most creative ideas among a vast pool. We show that they fall within a few large classes (archetypes) which share the same conceptual structure (Macros). We prescribe well-defined abstract algorithms which can act deterministically on arbitrary given objects. Each algorithm generates ideas with the conceptual structure characteristic of one of the Macros. The resulting new ideas turn out to be perceived as highly creative. We support our claims by experiments in which senior advertising professionals graded advertisement ideas produced by our method according to their creativity. The marks (grade 4.6±0.2 on a 1-7 scale) obtained by laymen applying our algorithms (after being instructed for only two hours) were significantly better than the marks obtained by advertising professionals using standard methods (grade 3.6±0.2). The method, which is currently taught in the USA, Europe, and Israel and used by advertising agencies in Britain and Israel, has received formal international recognition.

  2. Moving Clocks Do Not Always Appear to Slow Down: Don't Neglect the Doppler Effect

    NASA Astrophysics Data System (ADS)

    Wang, Frank

    2013-03-01

    In popular accounts of the time dilation effect in Einstein's special relativity, one often encounters the statement that moving clocks run slow. For instance, in the acclaimed PBS program "NOVA," Professor Brian Greene says, "[I]f I walk toward that guy… he'll perceive my watch ticking slower." Also, in his earlier piece for The New York Times, he writes that "if from your perspective someone is moving, you will see time elapsing slower for him than it does for you. Everything he does … will appear in slow motion." We need to be careful with this kind of description, because sometimes authors neglect to consider the finite time of signal exchange between the two individuals when they observe each other. This article points out that when two individuals approach each other, everything will actually appear in fast motion, a manifestation of the relativistic Doppler effect.
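
    The quantitative statement behind this is the longitudinal relativistic Doppler factor, a standard textbook relation (my addition, not quoted from the article). For two observers approaching at speed β = v/c, each receives the other's signals blueshifted:

    ```latex
    \[
      f_{\mathrm{obs}} \;=\; f_{\mathrm{src}}\sqrt{\frac{1+\beta}{1-\beta}}
      \;>\; f_{\mathrm{src}} \qquad (\beta > 0,\ \text{approach}),
    \]
    % so the watched clock appears sped up; only after subtracting the
    % shrinking light travel time does the pure time dilation factor
    % \sqrt{1-\beta^{2}} remain.
    ```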

  3. Slow down, you move too fast: emotional intelligence remains an "elusive" intelligence.

    PubMed

    Zeidner, M; Matthews, G; Roberts, R D

    2001-09-01

    Commentators on the R. D. Roberts, M. Zeidner, and G. Matthews (2001) article on the measurement of emotional intelligence (EI) made various pertinent observations that confirm the growing interest in this topic. This rejoinder finds general agreement on some key issues: learning from the history of ability testing, developing more sophisticated structural models of ability, studying emotional abilities across the life span, and establishing predictive and construct validity. However, scoring methods for tests of EI remain problematic. This rejoinder acknowledges recent improvements in convergence between different scoring methods but discusses further difficulties related to (a) neglect of group differences in normative social behaviors, (b) segregation of separate domains of knowledge linked to cognitive and emotional intelligences, (c) potential confounding of competence with learned skills and cultural factors, and (d) lack of specification of adaptive functions of EI. Empirical studies have not yet established that the Multi-Factor Emotional Intelligence Scale and related tests assess a broad EI factor of real-world significance.

  4. Moving Clocks Do Not Always Appear to Slow down: Don't Neglect the Doppler Effect

    ERIC Educational Resources Information Center

    Wang, Frank

    2013-01-01

    In popular accounts of the time dilation effect in Einstein's special relativity, one often encounters the statement that moving clocks run slow. For instance, in the acclaimed PBS program "NOVA," Professor Brian Greene says, "[I]f I walk toward that guy... he'll perceive my watch ticking slower." Also in his earlier piece for The New York Times,…

  5. Relativistic and Slowing Down: The Flow in the Hotspots of Powerful Radio Galaxies and Quasars

    NASA Technical Reports Server (NTRS)

    Kazanas, D.

    2003-01-01

    The 'hotspots' of powerful radio galaxies (the compact, high brightness regions where the jet flow collides with the intergalactic medium (IGM)) have been imaged in radio, optical, and recently X-ray frequencies. We propose a scheme that unifies their, at first sight, disparate broad band (radio to X-ray) spectral properties. This scheme involves a relativistic flow upstream of the hotspot that decelerates to the sub-relativistic speed of its inferred advance through the IGM and is viewed at different angles to its direction of motion, as suggested by two independent orientation estimators (the presence or not of broad emission lines in the optical spectra and the core-to-extended radio luminosity). Besides providing an account of how the hotspot spectral properties vary with jet orientation, this scheme also suggests that the large-scale jets remain relativistic all the way to the hotspots.

  6. REAC technology and hyaluron synthase 2, an interesting network to slow down stem cell senescence

    PubMed Central

    Maioli, Margherita; Rinaldi, Salvatore; Pigliaru, Gianfranco; Santaniello, Sara; Basoli, Valentina; Castagna, Alessandro; Fontani, Vania; Ventura, Carlo

    2016-01-01

    Hyaluronic acid (HA) plays a fundamental role in cell polarity and hydrodynamic processes, affording significant modulation of proliferation, migration, morphogenesis and senescence, with deep implications for the ability of stem cells to execute their differentiating plans. The Radio Electric Asymmetric Conveyer (REAC) technology aims to optimize ion fluxes at the molecular level in order to drive the molecular mechanisms governing cellular asymmetry and polarization. Here, we show that treatment with 4-methylumbelliferone (4-MU), a potent repressor of type 2 HA synthase and endogenous HA synthesis, dramatically antagonized the ability of REAC to recover the gene and protein expression of Bmi1, Oct4, Sox2, and Nanog in ADhMSCs that had been made senescent by prolonged culture up to the 30th passage. In senescent ADhMSCs, 4-MU also counteracted the ability of REAC to rescue the gene expression of TERT and the associated resumption of telomerase activity. Hence, the anti-senescence action of REAC is largely dependent upon the availability of endogenous HA synthesis. Endogenous HA and HA-binding proteins together with REAC technology form an interesting network that acts on the modulation of cell polarity and the intracellular environment. This suggests that REAC technology is effective at the level of the intracellular niche of stem cell regulation. PMID:27339908

  7. Hypercapnia slows down proliferation and apoptosis of human bone marrow promyeloblasts.

    PubMed

    Hamad, Mouna; Irhimeh, Mohammad R; Abbas, Ali

    2016-09-01

    Stem cells are being applied in increasingly diverse fields of research and therapy; as such, growing and culturing them in scalable quantities would be a huge advantage for all concerned. Gas mixtures containing 5% CO2 are typical for the in vitro culturing of cells. The effect of varying the CO2 concentration on promyeloblast KG-1a cells was investigated in this paper. KG-1a cells are characterized by high expression of the CD34 surface antigen, which is an important clinical surface marker for human hematopoietic stem cell (HSC) transplantation. KG-1a cells were cultured at three CO2 concentrations (1, 5 and 15%). Cells were batch-cultured and analyzed daily for viability, size, morphology, proliferation, and apoptosis using flow cytometry. No considerable differences were noted in KG-1a cell morphological properties at any of the three CO2 levels, as the cells retained their myeloblast appearance. The calculated population doubling time increased with increasing CO2 concentration. Enhanced cell proliferation was seen in cells cultured in hypercapnic conditions, in contrast to significantly decreased proliferation in hypocapnic populations. Flow cytometry analysis revealed that apoptosis was significantly (p = 0.0032) delayed in hypercapnic cultures, in parallel to accelerated apoptosis in hypocapnic ones. These results, which to the best of our knowledge are novel, suggest that elevated levels of CO2 are favorable for the enhanced proliferation of bone marrow (BM) progenitor cells such as HSCs.

  8. Species richness declines and biotic homogenisation have slowed down for NW-European pollinators and plants

    PubMed Central

    Carvalheiro, Luísa Gigante; Kunin, William E; Keil, Petr; Aguirre-Gutiérrez, Jesus; Ellis, Willem Nicolaas; Fox, Richard; Groom, Quentin; Hennekens, Stephan; Landuyt, Wouter; Maes, Dirk; Meutter, Frank; Michez, Denis; Rasmont, Pierre; Ode, Baudewijn; Potts, Simon Geoffrey; Reemer, Menno; Roberts, Stuart Paul Masson; Schaminée, Joop; WallisDeVries, Michiel F; Biesmeijer, Jacobus Christiaan

    2013-01-01

    Concern about biodiversity loss has led to increased public investment in conservation. Although there is a widespread perception that such initiatives have been unsuccessful, there are few quantitative tests of this perception. Here, we evaluate whether rates of biodiversity change have altered in recent decades in three European countries (Great Britain, the Netherlands and Belgium) for plants and flower-visiting insects. We compared four 20-year periods, contrasting periods of rapid land-use intensification and natural habitat loss (1930–1990) with a period of increased conservation investment (post-1990). We found that extensive species richness loss and biotic homogenisation occurred before 1990, whereas these negative trends became substantially less accentuated during recent decades, being partially reversed for certain taxa (e.g. bees in Great Britain and the Netherlands). These results highlight the potential to maintain or even restore current species assemblages (which despite past extinctions are still of great conservation value), at least in regions where large-scale land-use intensification and natural habitat loss have ceased. PMID:23692632

  9. Strongly confined fluids: Diverging time scales and slowing down of equilibration

    NASA Astrophysics Data System (ADS)

    Schilling, Rolf

    2016-06-01

    The Newtonian dynamics of strongly confined fluids exhibits a rich behavior. Its confined and unconfined degrees of freedom decouple for confinement length L → 0. In that case, and for a slit geometry, the intermediate scattering functions S_{μν}(q,t) simplify, resulting for (μ,ν) ≠ (0,0) in a Knudsen-gas-like behavior of the confined degrees of freedom, and otherwise in S_∥(q,t), describing the structural relaxation of the unconfined ones. Taking the coupling into account, we prove that the energy fluctuations relax exponentially. For smooth potentials the relaxation times diverge as L^{-3} and L^{-4}, respectively, for the confined and unconfined degrees of freedom. The strength of the L^{-3} divergence can be calculated analytically; it depends on the pair potential and the two-dimensional pair distribution function. Experimental setups are suggested to test these predictions.

  10. Being "Lazy" and Slowing Down: Toward Decolonizing Time, Our Body, and Pedagogy

    ERIC Educational Resources Information Center

    Shahjahan, Riyad A.

    2015-01-01

    In recent years, scholars have critiqued norms of neoliberal higher education (HE) by calling for embodied and anti-oppressive teaching and learning. Implicit in these accounts, but lacking elaboration, is a concern with reformulating the notion of "time" and temporalities of academic life. Employing a coloniality perspective, this…

  11. Application of calcium carbonate slows down organic amendments mineralization in reclaimed soils

    NASA Astrophysics Data System (ADS)

    Zornoza, Raúl; Faz, Ángel; Acosta, José A.; Martínez-Martínez, Silvia; Ángeles Muñoz, M.

    2014-05-01

    A field experiment was set up in the Cartagena-La Unión Mining District, SE Spain, aimed at evaluating the short-term effects of pig slurry (PS) amendment, alone and together with marble waste (MW), on organic matter mineralization, microbial activity and the stabilization of heavy metals in two tailing ponds. These structures pose an environmental risk owing to their high metal contents, low organic matter and nutrients, and absence of vegetation. Carbon mineralization, exchangeable metals and microbiological properties were monitored for 67 days. The application of amendments led to a rapid decrease in exchangeable metal concentrations, except for Cu, with decreases of up to 98%, 75% and 97% for Cd, Pb and Zn, respectively. The combined addition of MW+PS was the treatment with the greatest reduction in metal concentrations. The addition of PS caused a significant increase in respiration rates, although in MW+PS plots respiration was lower than in PS plots. The mineralised C from the pig slurry was low, approximately 25-30% and 4-12% for the PS and MW+PS treatments, respectively. Soluble carbon (Csol), microbial biomass carbon (MBC) and β-galactosidase and β-glucosidase activities increased after the application of the organic amendment. However, after 3 days these parameters started a decreasing trend, reaching values similar to the control from approximately day 25 for Csol and MBC. The PS treatment promoted the highest values of enzyme activities, which remained high over time. Arylesterase activity increased in the MW+PS treatment. Thus, the remediation techniques used improved the soil microbiological status and reduced metal availability. The combined application of PS+MW reduced the degradability of the organic compounds. Keywords: organic wastes, mine soil stabilization, carbon mineralization, microbial activity.

  12. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit said... driven upon or over such crossing until due caution has been taken to ascertain that the course is clear....

  13. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit said... driven upon or over such crossing until due caution has been taken to ascertain that the course is clear....

  14. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit said... driven upon or over such crossing until due caution has been taken to ascertain that the course is clear....

  15. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit said... driven upon or over such crossing until due caution has been taken to ascertain that the course is clear....

  16. 49 CFR 392.11 - Railroad grade crossings; slowing down required.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit said... driven upon or over such crossing until due caution has been taken to ascertain that the course is clear....

  17. Previous physical exercise slows down the complications from experimental diabetes in the calcaneal tendon

    PubMed Central

    Bezerra, Márcio Almeida; da Silva Nery, Cybelle; de Castro Silveira, Patrícia Verçoza; de Mesquita, Gabriel Nunes; de Gomes Figueiredo, Thainá; Teixeira, Magno Felipe Holanda Barboza Inácio; de Moraes, Silvia Regina Arruda

    2016-01-01

    Summary. Background: the complications caused by diabetes increase fragility in the muscle-tendon system, resulting in degeneration and easier rupture. To avoid this issue, therapies that increase the metabolism of glucose by the body, such as physical activity, have been used after the confirmation of diabetes. We evaluated the biomechanical behavior of the calcaneal tendon and the metabolic parameters in rats with induced experimental diabetes submitted to pre- and post-induction exercise. Methods: 54 male Wistar rats were randomly divided into four groups: Control Group (CG), Swimming Group (SG), Diabetic Group (DG), and Diabetic Swimming Group (DSG). The trained groups were submitted to a swimming exercise protocol, while the unexercised groups remained restricted to their cages. Metabolic and biomechanical parameters were assessed. Results: the clinical parameters of the DSG showed no change due to the exercise protocol. The tendon analysis of the DSG showed increased values for the elastic modulus (p<0.01) and maximum tension (p<0.001) and a lower value for the transverse area (p<0.001) when compared to the SG; however, it showed no difference when compared to the DG. Conclusion: the homogeneous values presented by the tendons of the DG and DSG show that physical exercise applied pre- and post-induction wasn't enough to promote a protective effect against the tendinopathy process, but it did prevent the progression of degeneration. PMID:27331036

  18. Criticality in the slowed-down boiling crisis at zero gravity

    NASA Astrophysics Data System (ADS)

    Charignon, T.; Lloveras, P.; Chatain, D.; Truskinovsky, L.; Vives, E.; Beysens, D.; Nikolayev, V. S.

    2015-05-01

    Boiling crisis is the transition between nucleate and film boiling. It occurs at a threshold value of the heat flux from the heater called the CHF (critical heat flux). Usually, boiling crisis studies are hindered by the high CHF and the short transition duration (below 1 ms). Here we report on experiments in hydrogen near its liquid-vapor critical point, in which the CHF is low and the dynamics slow enough to be resolved. As under such conditions the surface tension is very small, the experiments are carried out in reduced gravity to preserve the conventional bubble geometry. Weightlessness is created artificially in two-phase hydrogen by compensating gravity with magnetic forces. We were able to reveal the fractal structure of the contour of the percolating cluster of the dry areas at the heater that precedes the boiling crisis. We provide a direct statistical analysis of dry spot areas that confirms the boiling crisis at zero gravity as a scale-free phenomenon. We observed that, in agreement with theoretical predictions, the saturated boiling CHF tends to zero (within the precision of our thermal control system) in zero gravity, which suggests that the boiling crisis may be observed at any heat flux provided the experiment lasts long enough.

  19. Glassy properties and viscous slowing down: An analysis of the correlation between nonergodicity factor and fragility.

    PubMed

    Niss, Kristine; Dalle-Ferrier, Cécile; Giordano, Valentina M; Monaco, Giulio; Frick, Bernhard; Alba-Simionesco, Christiane

    2008-11-21

    We present an extensive analysis of the proposed relationship [T. Scopigno et al., Science 302, 849 (2003)] between the fragility of glass-forming liquids and the nonergodicity factor as measured by inelastic x-ray scattering. We test the robustness of the correlation through the investigation of the relative change under pressure of the speed of sound, nonergodicity factor, and broadening of the acoustic excitations of a molecular glass former, cumene, and of a polymer, polyisobutylene. For polyisobutylene, we also perform a similar study by varying its molecular weight. Moreover, we have included new results on liquids presenting an exceptionally high fragility index m under ambient conditions. We show that the linear relation proposed by Scopigno et al. [Science 302, 849 (2003)] between fragility, measured in the liquid state, and the slope α of the inverse nonergodicity factor as a function of T/Tg, measured in the glassy state, is not verified when the database is enlarged. In particular, while there is still a trend in the suggested direction at atmospheric pressure, its consistency is not maintained when pressure is introduced as an extra control parameter modifying the fragility: whatever the variation in the isobaric fragility, the inverse nonergodicity factor increases or remains constant within the error bars, and one observes a systematic increase in the slope α when the temperature is scaled by Tg(P). To avoid any particular aspects that might cause the relation to fail, we have replaced the fragility by other related properties often invoked for the understanding of its concept, e.g., thermodynamic fragility. Moreover, we find, as previously proposed by two of us [K. Niss and C. Alba-Simionesco, Phys. Rev. B 74, 024205 (2006)], that the nonergodicity factor evaluated at the glass transition qualitatively reflects the effect of density on the relaxation time, even though in this case no clear quantitative correlations appear. PMID:19026072
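
    In the notation used here, the correlation under test can be written schematically (my paraphrase of the Scopigno et al. proposal, not a formula quoted from this paper) as a linear temperature dependence of the inverse nonergodicity factor in the glass, with the slope α assumed proportional to the fragility m:

    ```latex
    \[
      f^{-1}(T) \;\simeq\; 1 + \alpha\,\frac{T}{T_g} \quad (T < T_g),
      \qquad m \;\propto\; \alpha .
    \]
    ```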

  20. Magnetic field protects plants against high light by slowing down production of singlet oxygen.

    PubMed

    Hakala-Yatkin, Marja; Sarvikas, Päivi; Paturi, Petriina; Mäntysaari, Mika; Mattila, Heta; Tyystjärvi, Taina; Nedbal, Ladislav; Tyystjärvi, Esa

    2011-05-01

    Recombination of the primary radical pair of photosystem II (PSII) of photosynthesis may produce the triplet state of the primary donor of PSII. Triplet formation is potentially harmful because chlorophyll triplets can react with molecular oxygen to produce the reactive singlet oxygen (¹O₂). The yield of ¹O₂ is expected to be directly proportional to the triplet yield, and the triplet yield of charge recombination can be lowered with a magnetic field of 100-300 mT. In this study, we illuminated intact pumpkin leaves with strong light in the presence and absence of a magnetic field and found that the magnetic field protects against photoinhibition of PSII. The result suggests that radical pair recombination is responsible for a significant part of ¹O₂ production in the chloroplast. The magnetic field effect vanished if leaves were illuminated in the presence of lincomycin, an inhibitor of chloroplast protein synthesis, or if isolated thylakoid membranes were exposed to light. These data, in turn, indicate that ¹O₂ produced by the recombination of the primary charge pair is not directly involved in the photoinactivation of PSII but instead damages PSII by inhibiting the repair of photoinhibited PSII. We also found that an Arabidopsis thaliana mutant lacking α-tocopherol, a scavenger of ¹O₂, is more sensitive to photoinhibition than the wild type in the absence but not in the presence of lincomycin, confirming that the target of ¹O₂ is the repair mechanism.

  1. Effective face recognition using bag of features with additive kernels

    NASA Astrophysics Data System (ADS)

    Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu

    2016-01-01

    In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied approach is to use the whole face image to build a subspace via dimensionality reduction. Differing from the methods above, we consider face recognition as an image classification problem: the face images of the same person are considered to fall into the same category, and each category and each face image can both be represented by a simple pyramid histogram. Dense spatial scale-invariant feature transform features and the bag-of-features method are used to build the categories and face representations. To make the method more efficient, a linear support vector machine solver, Pegasos, is used for classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method can achieve very high recognition accuracy on the ORL, YALE, and FERET databases.
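
    The pipeline described, bag-of-features histograms pushed through an additive-kernel feature map and then a linear SVM, can be sketched in a few lines. This is a minimal illustration with synthetic histograms: scikit-learn's AdditiveChi2Sampler and hinge-loss SGDClassifier stand in for the paper's dense-SIFT vocabulary and Pegasos solver.

    ```python
    import numpy as np
    from sklearn.kernel_approximation import AdditiveChi2Sampler
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)

    # Stand-ins for pyramid histograms of quantized dense-SIFT words:
    # 200 "face images" of 10 subjects, 300-bin histograms.
    X = rng.random((200, 300))
    X /= X.sum(axis=1, keepdims=True)   # L1-normalize like a histogram
    y = rng.integers(0, 10, size=200)   # subject labels

    # Explicit feature map approximating the additive chi-squared kernel,
    # followed by a Pegasos-style (hinge-loss, SGD) linear SVM.
    clf = make_pipeline(
        AdditiveChi2Sampler(sample_steps=2),
        SGDClassifier(loss="hinge", alpha=1e-4, max_iter=1000, random_state=0),
    )
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```

    The point of the explicit feature map is speed: training stays linear in the number of samples, rather than quadratic as with a precomputed nonlinear kernel matrix.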

  2. Some physical properties of ginkgo nuts and kernels

    NASA Astrophysics Data System (ADS)

    Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.

    2013-12-01

    Some data on the physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (±2.00) (wet basis) are presented in this paper. These consist of estimates of the mean length, width, thickness, geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity. The coefficient of static friction for nuts and kernels was determined using plywood, glass, rubber, and galvanized steel sheet. Such data are essential in food engineering, especially for the design and development of machines and equipment for processing and handling agricultural products.
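
    For reference, the shape descriptors listed above are conventionally computed from the three principal dimensions L (length), W (width) and T (thickness); these relations are the standard ones in postharvest engineering, added here for clarity rather than quoted from the paper:

    ```latex
    \[
      D_g = (LWT)^{1/3}, \qquad
      \phi = \frac{D_g}{L}, \qquad
      R_a = \frac{W}{L}, \qquad
      \varepsilon = 1 - \frac{\rho_b}{\rho_t},
    \]
    % geometric mean diameter, sphericity, aspect ratio, and porosity
    % (from bulk density rho_b and true density rho_t), respectively.
    ```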

  3. Reproducing kernel particle method for free and forced vibration analysis

    NASA Astrophysics Data System (ADS)

    Zhou, J. X.; Zhang, H. Y.; Zhang, L.

    2005-01-01

    A reproducing kernel particle method (RKPM) is presented to analyze the natural frequencies of Euler-Bernoulli beams as well as Kirchhoff plates. In addition, RKPM is used to predict the forced vibration responses of buried pipelines due to longitudinal travelling waves. Two different approaches, Lagrange multipliers and the transformation method, are employed to enforce essential boundary conditions. Based on the reproducing kernel approximation, the domain of interest is discretized by a set of particles without the employment of a structured mesh, which constitutes an advantage over the finite element method. Meanwhile, RKPM also exhibits advantages over the classical Rayleigh-Ritz method and its counterparts. Numerical results presented here demonstrate the effectiveness of this novel approach for both free and forced vibration analysis.
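
    The defining property of the reproducing kernel approximation is that kernel-corrected shape functions reproduce a chosen polynomial basis exactly on a scattered set of particles. The following one-dimensional sketch (my own illustration under a linear basis and cubic-spline window, not the paper's implementation) builds the corrected shape functions and verifies constant and linear reproduction:

    ```python
    import numpy as np

    def cubic_spline_window(r):
        """Cubic B-spline kernel with support |r| <= 1 (a common RKPM choice)."""
        r = np.abs(r)
        return np.where(r <= 0.5, 2/3 - 4*r**2 + 4*r**3,
               np.where(r <= 1.0, 4/3 - 4*r + 4*r**2 - (4/3)*r**3, 0.0))

    def rkpm_shape_functions(x, nodes, a):
        """1D RKPM shape functions with linear basis H = [1, (x - x_i)/a].

        Solving M(x) c = H(0) enforces sum(psi) = 1 and
        sum(psi * nodes) = x, i.e. exact constant and linear reproduction.
        """
        d = (x - nodes) / a
        w = cubic_spline_window(d)
        H = np.vstack([np.ones_like(d), d])           # basis at each particle
        M = (H * w) @ H.T                             # 2x2 moment matrix
        c = np.linalg.solve(M, np.array([1.0, 0.0]))  # correction coefficients
        return (c @ H) * w                            # corrected kernel values

    nodes = np.linspace(0.0, 1.0, 11)   # particles; no structured mesh needed
    psi = rkpm_shape_functions(0.37, nodes, a=0.25)
    print(psi.sum())      # -> 1.0   (partition of unity)
    print(psi @ nodes)    # -> 0.37  (linear field reproduced exactly)
    ```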

  4. Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.

    PubMed

    Wang, Yanhua; Ying, Leslie

    2014-01-01

    Compressed sensing (CS) is a promising approach to accelerating dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. Recent developments in non-linear or kernel-based sparse representations have been shown to outperform linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and a fixed-point iteration method. Simulation results show that the proposed method outperforms the conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
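
    A toy version of the core step, KPCA on per-voxel temporal profiles followed by a pre-image reconstruction, can be sketched as follows. This is only an illustration of the decomposition, with synthetic profiles and scikit-learn's built-in inverse map standing in for the authors' variable-splitting pre-image solver:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(1)

    # Synthetic "dynamic MRI": each row is the temporal intensity profile
    # of one spatial location (1000 voxels, 40 time frames).
    t = np.linspace(0.0, 1.0, 40)
    freqs = 1.0 + 2.0 * rng.random((1000, 1))
    profiles = np.sin(2 * np.pi * freqs * t) + 0.05 * rng.standard_normal((1000, 40))

    # KPCA captures the nonlinear low-dimensional structure of the profiles;
    # fit_inverse_transform=True learns the map back from feature space
    # (the pre-image problem) so profiles can be reconstructed.
    kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1.0,
                     fit_inverse_transform=True)
    codes = kpca.fit_transform(profiles)
    recon = kpca.inverse_transform(codes)

    print("relative reconstruction error:",
          np.linalg.norm(recon - profiles) / np.linalg.norm(profiles))
    ```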

  5. Hydroxocobalamin treatment of acute cyanide poisoning from apricot kernels.

    PubMed

    Cigolini, Davide; Ricci, Giogio; Zannoni, Massimo; Codogni, Rosalia; De Luca, Manuela; Perfetti, Paola; Rocca, Giampaolo

    2011-05-24

    Clinical experience with hydroxocobalamin in acute cyanide poisoning via ingestion remains limited. This case concerns a 35-year-old mentally ill woman who consumed more than 20 apricot kernels. Published literature suggests each kernel would have contained cyanide concentrations ranging from 0.122 to 4.09 mg/g (average 2.92 mg/g). On arrival, the woman appeared asymptomatic with a raised pulse rate and slight metabolic acidosis. Forty minutes after admission (approximately 70 min postingestion), the patient experienced headache, nausea and dyspnoea, and was hypotensive, hypoxic and tachypnoeic. Following treatment with amyl nitrite and sodium thiosulphate, her methaemoglobin level was 10%. This prompted the administration of oxygen, which evoked a slight improvement in her vital signs. Hydroxocobalamin was then administered. After 24 h, she was completely asymptomatic with normalised blood pressure and other haemodynamic parameters. This case reinforces the safety and effectiveness of hydroxocobalamin in acute cyanide poisoning by ingestion.

  7. Realistic dispersion kernels applied to cohabitation reaction dispersion equations

    NASA Astrophysics Data System (ADS)

    Isern, Neus; Fort, Joaquim; Pérez-Losada, Joaquim

    2008-10-01

    We develop front spreading models for several jump distance probability distributions (dispersion kernels). We derive expressions for a cohabitation model (cohabitation of parents and children) and a non-cohabitation model, and apply them to the Neolithic using data from real human populations. The speeds that we obtain are consistent with observations of the Neolithic transition. The correction due to the cohabitation effect is up to 38%.
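
    For orientation, the classical single-species result that cohabitation models modify is the front speed of a discrete-time reaction-dispersion process with net reproduction R_0 per generation time T and jump-distance kernel φ (a standard relation, stated here for context rather than taken from the paper):

    ```latex
    \[
      c \;=\; \min_{\lambda > 0}\,
      \frac{1}{\lambda T}\,
      \ln\!\left[ R_0 \int \phi(\Delta)\, e^{\lambda \Delta}\, d\Delta \right],
    \]
    % the integral is the moment generating function of the dispersal
    % kernel; heavier-tailed kernels raise the minimizing speed.
    ```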

  8. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    SciTech Connect

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more, where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  9. Multilevel image recognition using discriminative patches and kernel covariance descriptor

    NASA Astrophysics Data System (ADS)

    Lu, Le; Yao, Jianhua; Turkbey, Evrim; Summers, Ronald M.

    2014-03-01

    Computer-aided diagnosis of medical images has emerged as an important tool to objectively improve the performance, accuracy and consistency of the clinical workflow. Computerizing medical image diagnostic recognition involves three fundamental problems: where to look (i.e., where the region of interest is within the whole image/volume), image feature description/encoding, and similarity metrics for classification or matching. In this paper, we describe the motivation, implementation and performance evaluation of task-driven iterative, discriminative image patch mining; a covariance matrix based descriptor built from intensity, gradient and spatial layout; and a log-Euclidean distance kernel for support vector machines, to address these three aspects respectively. To cope with the often visually ambiguous image patterns of the region of interest in medical diagnosis, discovery of multilabel selective discriminative patches is desired. The covariance of several image statistics summarizes their second-order interactions within an image patch and has proved to be an effective image descriptor, with low dimensionality compared with joint statistics and fast computation regardless of the patch size. We extensively evaluate two extended Gaussian kernels, using the affine-invariant Riemannian metric or the log-Euclidean metric, with support vector machines (SVM) on two medical image classification problems: degenerative disc disease (DDD) detection on cortical shell unwrapped CT maps and colitis detection on CT key images. The proposed approach is validated with promising quantitative results on these challenging tasks. Our experimental findings and discussion also unveil some interesting insights into covariance feature composition with or without spatial layout for classification and retrieval, and into different kernel constructions for SVM. This should also shed some light on future work using covariance features and kernel classification for medical image analysis.
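
    The two central ingredients, the covariance descriptor of per-pixel features and the extended Gaussian kernel under the log-Euclidean metric, are compact enough to sketch directly. This is my own minimal version on a synthetic patch, not the paper's code; scipy's matrix logarithm provides the log-Euclidean map:

    ```python
    import numpy as np
    from scipy.linalg import logm

    def covariance_descriptor(patch):
        """Covariance of per-pixel features [intensity, |gx|, |gy|, x, y]."""
        gy, gx = np.gradient(patch.astype(float))
        ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        F = np.stack([patch.ravel(), np.abs(gx).ravel(), np.abs(gy).ravel(),
                      xs.ravel(), ys.ravel()]).astype(float)
        return np.cov(F) + 1e-6 * np.eye(F.shape[0])   # regularize to SPD

    def log_euclidean_rbf(C1, C2, gamma=0.1):
        """Extended Gaussian kernel exp(-gamma * ||log C1 - log C2||_F^2)."""
        d = np.linalg.norm(logm(C1).real - logm(C2).real, ord="fro")
        return np.exp(-gamma * d**2)

    rng = np.random.default_rng(2)
    p1, p2 = rng.random((16, 16)), rng.random((16, 16))
    C1, C2 = covariance_descriptor(p1), covariance_descriptor(p2)
    print(log_euclidean_rbf(C1, C2))
    ```

    Because the log-Euclidean map embeds SPD matrices in a flat space, a Gram matrix assembled pointwise from this kernel is positive semidefinite and can be passed to an SVM with a precomputed kernel.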

  10. Cassane diterpenes from the seed kernels of Caesalpinia sappan.

    PubMed

    Nguyen, Hai Xuan; Nguyen, Nhan Trung; Dang, Phu Hoang; Thi Ho, Phuoc; Nguyen, Mai Thanh Thi; Van Can, Mao; Dibwe, Dya Fita; Ueda, Jun-Ya; Awale, Suresh

    2016-02-01

    Eight structurally diverse cassane diterpenes named tomocins A-H were isolated from the seed kernels of Vietnamese Caesalpinia sappan Linn. Their structures were determined by extensive NMR and CD spectroscopic analysis. Among the isolated compounds, tomocin A, phanginin A, F, and H exhibited mild preferential cytotoxicity against PANC-1 human pancreatic cancer cells under nutrition-deprived condition without causing toxicity in normal nutrient-rich conditions.

  11. Instantaneous Bethe-Salpeter kernel for the lightest pseudoscalar mesons

    NASA Astrophysics Data System (ADS)

    Lucha, Wolfgang; Schöberl, Franz F.

    2016-05-01

    Starting from a phenomenologically successful, numerical solution of the Dyson-Schwinger equation that governs the quark propagator, we reconstruct in detail the interaction kernel that has to enter the instantaneous approximation to the Bethe-Salpeter equation to allow us to describe the lightest pseudoscalar mesons as quark-antiquark bound states exhibiting the (almost) masslessness necessary for them to be interpretable as the (pseudo) Goldstone bosons related to the spontaneous chiral symmetry breaking of quantum chromodynamics.

  12. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  13. Mapping quantitative trait loci for kernel composition in almond

    PubMed Central

    2012-01-01

    Background: Almond breeding is increasingly taking kernel quality into account as a breeding objective. Information on the parameters to be considered in evaluating almond quality, such as protein and oil content, as well as oleic acid and tocopherol concentration, has recently been compiled. The genetic control of these traits has not yet been studied in almond, although this information would improve the efficiency of almond breeding programs. Results: A map with 56 simple sequence repeat or microsatellite (SSR) markers was constructed for an almond population showing a wide range of variability in the chemical components of the almond kernel. A total of 12 putative quantitative trait loci (QTL) controlling these chemical traits were detected in this analysis, corresponding to seven genomic regions of the eight almond linkage groups (LG). Some QTL were clustered in the same region or shared the same molecular markers, in agreement with the correlations already found between the chemical traits. The logarithm of the odds (LOD) values for any given trait ranged from 2.12 to 4.87, explaining from 11.0 to 33.1% of the phenotypic variance of the trait. Conclusions: The results produced in this study offer the opportunity to include the new genetic information in almond breeding programs. Increases in the positive traits of kernel quality may be sought simultaneously whenever they are genetically independent, even if they are negatively correlated. We have provided the first genetic framework for the chemical components of the almond kernel, with twelve QTL in agreement with the large number of genes controlling their metabolism. PMID:22720975
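
    For readers outside genetics, the LOD score used above has a standard definition (added for clarity, not specific to this study): it compares the likelihood of a QTL at a given map position against the no-QTL null,

    ```latex
    \[
      \mathrm{LOD} \;=\; \log_{10}
      \frac{L(\text{QTL at the locus})}{L(\text{no QTL})},
    \]
    % so LOD = 3 means the data are 10^3 times more likely under the QTL
    % hypothesis; the reported 2.12-4.87 range brackets common
    % declaration thresholds.
    ```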

  14. Equilibrium studies of copper ion adsorption onto palm kernel fibre.

    PubMed

    Ofomaja, Augustine E

    2010-07-01

    The equilibrium sorption of copper ions from aqueous solution using a new adsorbent, palm kernel fibre, has been studied. Palm kernel fibre is obtained in large amounts as a waste product of palm oil production. Batch equilibrium studies were carried out, and system variables such as solution pH, sorbent dose, and sorption temperature were varied. The equilibrium sorption data were then analyzed using the Langmuir, Freundlich, Dubinin-Radushkevich (D-R) and Temkin isotherms. The fit of these isotherm models to the equilibrium sorption data was determined using the linear coefficient of determination, r², and the non-linear chi-square, χ², error analysis. The results revealed that sorption was pH dependent and increased with increasing solution pH above the pH_PZC of the palm kernel fibre, with an optimum dose of 10 g/dm³. The equilibrium data were found to fit the Langmuir isotherm model best, with a monolayer capacity of 3.17 × 10^-4 mol/g at 339 K. The sorption equilibrium constant, K_a, increased with increasing temperature, indicating that the bond strength between sorbate and sorbent increased with temperature and that sorption was endothermic. This was confirmed by the increase in the values of the Temkin isotherm constant, B_1, with increasing temperature. The Dubinin-Radushkevich (D-R) isotherm parameter, the free energy E, was in the range of 15.7-16.7 kJ/mol, suggesting that the sorption mechanism was ion exchange. Desorption studies showed that a high percentage of the copper was desorbed from the adsorbent using acid solutions (HCl, HNO₃ and CH₃COOH), and the desorption percentage increased with acid concentration. The thermodynamics of the copper ions/palm kernel fibre system indicate that the process is spontaneous and endothermic. PMID:20346574
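
    To show how such isotherm parameters are typically extracted, here is a generic nonlinear least-squares fit of the Langmuir form q_e = q_m·K_a·C_e/(1 + K_a·C_e); the data points are made up for illustration and are not the paper's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, qm, Ka):
        """Langmuir isotherm: monolayer capacity qm, equilibrium constant Ka."""
        return qm * Ka * Ce / (1.0 + Ka * Ce)

    # Hypothetical equilibrium data: concentration (mol/dm^3), uptake (mol/g).
    Ce = np.array([0.5e-4, 1e-4, 2e-4, 4e-4, 8e-4, 16e-4])
    qe = np.array([0.7e-4, 1.1e-4, 1.7e-4, 2.2e-4, 2.7e-4, 3.0e-4])

    (qm, Ka), _ = curve_fit(langmuir, Ce, qe, p0=[3e-4, 1e4])
    residuals = qe - langmuir(Ce, qm, Ka)
    r2 = 1.0 - residuals.var() / qe.var()
    print(f"qm = {qm:.2e} mol/g, Ka = {Ka:.2e}, r^2 = {r2:.3f}")
    ```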

  15. Deproteinated palm kernel cake-derived oligosaccharides: A preliminary study

    NASA Astrophysics Data System (ADS)

    Fan, Suet Pin; Chia, Chin Hua; Fang, Zhen; Zakaria, Sarani; Chee, Kah Leong

    2014-09-01

    A preliminary study of the microwave-assisted hydrolysis of deproteinated palm kernel cake (DPKC) with succinic acid to produce oligosaccharides was performed. Three important factors, i.e., temperature, acid concentration and reaction time, were selected for the hydrolysis processes. Results showed that the highest yield of DPKC-derived oligosaccharides was obtained at 170 °C, 0.2 N succinic acid and 20 min of reaction time.

  16. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    SciTech Connect

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  18. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven image mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. In that high-dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational cost. In this paper, we present the theory of Mercer kernels, describe its use in image mining, discuss a new method to generate Mercer kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods to the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth sciences.
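
    As an illustration of the kernel trick described above, the following minimal Python sketch (not from the paper; data and parameters are invented) builds a Gaussian RBF Mercer kernel matrix, centers it in feature space, and extracts a two-dimensional embedding, i.e., linear PCA carried out implicitly in the nonlinear feature space:

      import numpy as np

      def rbf_kernel(X, Y, sigma=1.0):
          # Gaussian RBF kernel: a symmetric positive semidefinite Mercer kernel
          sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-sq / (2.0 * sigma ** 2))

      rng = np.random.default_rng(0)
      X = rng.normal(size=(50, 4))      # stand-in for image feature vectors

      # Center the Gram matrix, i.e. center the data in feature space
      K = rbf_kernel(X, X)
      n = K.shape[0]
      J = np.eye(n) - np.ones((n, n)) / n
      Kc = J @ K @ J

      # Eigendecomposition of the centered Gram matrix = linear PCA in the
      # nonlinear feature space; keep the top two directions as an embedding
      vals, vecs = np.linalg.eigh(Kc)
      embedding = vecs[:, -2:] * np.sqrt(np.clip(vals[-2:], 0.0, None))
      print(embedding.shape)            # (50, 2)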

  19. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high-performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is implemented entirely in user space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  20. Biodiesel from Siberian apricot (Prunus sibirica L.) seed kernel oil.

    PubMed

    Wang, Libing; Yu, Haiyan

    2012-05-01

    In this paper, Siberian apricot (Prunus sibirica L.) seed kernel oil was investigated for the first time as a promising non-conventional feedstock for the preparation of biodiesel. Siberian apricot seed kernel has a high oil content (50.18 ± 3.92%), and the oil has a low acid value (0.46 mg g⁻¹) and low water content (0.17%). The fatty acid composition of the Siberian apricot seed kernel oil includes a high percentage of oleic acid (65.23 ± 4.97%) and linoleic acid (28.92 ± 4.62%). The measured fuel properties of the Siberian apricot biodiesel, except cetane number and oxidative stability, conformed to the EN 14214-08, ASTM D6751-10 and GB/T 20828-07 standards, and the cold flow properties in particular were excellent (cold filter plugging point of -14 °C). The addition of 500 ppm tert-butylhydroquinone (TBHQ) resulted in a higher induction period (7.7 h), compliant with all three biodiesel standards. PMID:22440572

  1. Hyperspectral-imaging-based techniques applied to wheat kernels characterization

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Cesare, Daniela; Bonifazi, Giuseppe

    2012-05-01

    Single kernels of durum wheat have been analyzed by hyperspectral imaging (HSI). Such an approach is based on an integrated hardware and software architecture able to digitally capture and handle spectra as an image sequence, as they result along a pre-defined alignment on a suitably illuminated sample surface. The study investigated the possibility of applying HSI techniques for the classification of different types of wheat kernels: vitreous, yellow berry and fusarium-damaged. Reflectance spectra of selected wheat kernels of the three typologies were acquired by a laboratory device equipped with an HSI system working in the near-infrared range (1000-1700 nm). The hypercubes were analyzed applying principal component analysis (PCA) to reduce the high dimensionality of the data and to select some effective wavelengths. Partial least squares discriminant analysis (PLS-DA) was applied for the classification of the three wheat typologies. The study demonstrated that good classification results were obtained not only considering the entire investigated wavelength range, but also selecting only four optimal wavelengths (1104, 1384, 1454 and 1650 nm) out of 121. The developed procedures based on HSI can be utilized for quality control purposes or for the definition of innovative sorting logics for wheat.
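
    A rough Python sketch of the PCA-plus-PLS-DA pipeline the abstract describes, using scikit-learn on synthetic stand-in spectra (the labels, band count and component numbers are placeholders, not the study's):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      X = rng.normal(size=(90, 121))    # 90 kernels x 121 NIR bands (synthetic)
      y = np.repeat([0, 1, 2], 30)      # vitreous / yellow berry / fusarium-damaged

      # Step 1: PCA to reduce the high dimensionality of the hypercube spectra
      Xr = PCA(n_components=10).fit_transform(X)

      # Step 2: PLS-DA, i.e. PLS regression onto one-hot class indicators,
      # assigning each kernel to the class with the largest predicted response
      Y = np.eye(3)[y]
      pls = PLSRegression(n_components=3).fit(Xr, Y)
      pred = pls.predict(Xr).argmax(axis=1)
      print("training accuracy:", (pred == y).mean())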

  2. Reduced-size kernel models for nonlinear hybrid system identification.

    PubMed

    Le, Van Luong; Bloch, Gérard; Lauer, Fabien

    2011-12-01

    This brief paper focuses on the identification of nonlinear hybrid dynamical systems, i.e., systems switching between multiple nonlinear dynamical behaviors. Thus the aim is to learn an ensemble of submodels from a single set of input-output data in a regression setting with no prior knowledge on the grouping of the data points into similar behaviors. To be able to approximate arbitrary nonlinearities, kernel submodels are considered. However, in order to maintain efficiency when applying the method to large data sets, a preprocessing step is required in order to fix the submodel sizes and limit the number of optimization variables. This brief paper proposes four approaches, respectively inspired by the fixed-size least-squares support vector machines, the feature vector selection method, the kernel principal component regression and a modification of the latter, in order to deal with this issue and build sparse kernel submodels. These are compared in numerical experiments, which show that the proposed approach achieves the simultaneous classification of data points and approximation of the nonlinear behaviors in an efficient and accurate manner.

  3. Fast metabolite identification with Input Output Kernel Regression

    PubMed Central

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a pre-image problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method decreases the running times of the training and test steps by several orders of magnitude relative to the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
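
    The two-phase structure of Input Output Kernel Regression can be sketched in a few lines of Python; everything below (data shapes, the plain linear output kernel, the regularization value) is an illustrative assumption, not the authors' implementation:

      import numpy as np

      def rbf(X, Y, s=1.0):
          d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d / (2 * s * s))

      rng = np.random.default_rng(2)
      X_train = rng.normal(size=(40, 8))     # spectrum features (invented)
      Y_train = rng.normal(size=(40, 16))    # molecule fingerprints (invented)
      database = np.vstack([Y_train, rng.normal(size=(200, 16))])

      # Phase 1: kernel ridge regression from the input kernel to the
      # output feature space (here simply a linear output kernel)
      Kx = rbf(X_train, X_train)
      lam = 1e-2
      A = np.linalg.solve(Kx + lam * np.eye(len(Kx)), Y_train)

      # Phase 2: pre-image step -- rank database candidates for a query
      x_query = rng.normal(size=(1, 8))
      y_hat = rbf(x_query, X_train) @ A      # predicted output features
      best = int(np.argmax(database @ y_hat.ravel()))
      print("best candidate index:", best)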

  4. Dimensionality reduction of hyperspectral images using kernel ICA

    NASA Astrophysics Data System (ADS)

    Khan, Asif; Kim, Intaek; Kong, Seong G.

    2009-05-01

    The computational burden due to the high dimensionality of hyperspectral images is an obstacle to their efficient analysis and processing. In this paper, we use Kernel Independent Component Analysis (KICA) for dimensionality reduction of hyperspectral images based on band selection. Commonly used ICA- and PCA-based dimensionality reduction methods do not consider nonlinear transformations and assume that the data have a non-Gaussian distribution. When the relation between source signals (pure materials) and observed hyperspectral images is nonlinear, these methods discard a great deal of information during the dimensionality reduction process. Recent research shows that kernel-based methods are effective for nonlinear transformations. KICA is a robust technique for blind source separation and works even on near-Gaussian data. We use KICA to select the minimum number of bands that contain the maximum information for detection in hyperspectral images. The reduction of bands is based on the evaluation of the weight matrix generated by KICA. From the selected smaller number of bands, we generate a new spectral image with reduced dimension and use it for hyperspectral image analysis. We use this technique as a preprocessing step in the detection and classification of poultry skin tumors. The hyperspectral image samples of chicken tumors used contain 65 spectral bands of fluorescence in the visible region of the spectrum. Experimental results show that KICA-based band selection has higher accuracy than fastICA-based band selection for dimensionality reduction and analysis of hyperspectral images.
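
    scikit-learn does not ship a kernel ICA implementation, so the sketch below substitutes FastICA purely to illustrate the weight-matrix-based band ranking idea; the cube dimensions and the number of retained bands are invented:

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(3)
      cube = rng.random((40, 40, 65))        # synthetic hyperspectral cube
      X = cube.reshape(-1, 65)               # pixels x bands

      ica = FastICA(n_components=5, random_state=0, max_iter=500).fit(X)
      W = ica.components_                    # unmixing weights: components x bands

      # Rank bands by their total absolute weight across components and
      # keep the most informative ones
      scores = np.abs(W).sum(axis=0)
      selected = np.sort(np.argsort(scores)[::-1][:10])
      reduced = X[:, selected].reshape(40, 40, -1)
      print("selected bands:", selected.tolist())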

  5. Noise Level Estimation for Model Selection in Kernel PCA Denoising.

    PubMed

    Varon, Carolina; Alzate, Carlos; Suykens, Johan A K

    2015-11-01

    One of the main challenges in unsupervised learning is to find suitable values for the model parameters. In kernel principal component analysis (kPCA), for example, these are the number of components, the kernel, and its parameters. This paper presents a model selection criterion based on distance distributions (MDDs). This criterion can be used to find the number of components and the σ² parameter of radial basis function kernels by means of spectral comparison between information and noise. The noise content is estimated from the statistical moments of the distribution of distances in the original dataset. This allows for a type of randomization of the dataset, without actually having to permute the data points or generate artificial datasets. After comparing the eigenvalues computed from the estimated noise with the ones from the input dataset, information is retained and maximized by a set of model parameters. In addition to the model selection criterion, this paper proposes a modification to the fixed-size method and uses the incomplete Cholesky factorization, both of which are used to solve kPCA in large-scale applications. These two approaches, together with the model selection MDD, were tested in toy examples and real life applications, and it is shown that they outperform other known algorithms. PMID:25608316
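
    A toy version of the information-versus-noise eigenvalue comparison can be written directly in NumPy. Note that the surrogate below simply permutes each feature column, whereas the paper's MDD criterion estimates the noise from distance-distribution moments without permutation; the sizes and σ values are arbitrary:

      import numpy as np

      def rbf_gram(X, s):
          d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-d / (2 * s * s))

      rng = np.random.default_rng(4)
      X = np.vstack([rng.normal(0, 1, (60, 5)), rng.normal(4, 1, (60, 5))])

      # Noise surrogate: permute each feature column independently,
      # which destroys the cluster structure but keeps the marginals
      Xn = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])

      for s in (0.5, 2.0, 8.0):
          ev = np.linalg.eigvalsh(rbf_gram(X, s))[::-1]
          evn = np.linalg.eigvalsh(rbf_gram(Xn, s))[::-1]
          n_info = int((ev[:20] > evn[:20].max()).sum())
          print(f"sigma={s}: {n_info} components above the noise level")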

  6. Predicting activity approach based on new atoms similarity kernel function.

    PubMed

    Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella

    2015-07-01

    Drug design is a high-cost and long-term process. To reduce the time and costs of drug discovery, new techniques are needed. The chemoinformatics field applies informational techniques and computer science methods, such as machine learning and graph theory, to discover the properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). Hence there is an increasing need for algorithms to analyze and classify graph data in order to predict the activity of molecules. Kernel methods provide a powerful framework which combines machine learning with graph theory techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First we encode each atom depending on its neighbors, then we use these codes to find relationships between the atoms, and finally we use the relations between atoms to measure the similarity between chemical compounds. The proposed approach was compared with many other classification methods, and the results show accuracy competitive with these methods.

  7. Initial Kernel Timing Using a Simple PIM Performance Model

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David

    2005-01-01

    This presentation will describe some initial results of paper-and-pencil studies of 4 or 5 application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: * Linked list traversal * Sum of leaf nodes on a tree * Bitonic sort * Vector sum * Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including what programming styles seem to work best, from the point-of-view of both expressiveness and performance.

  8. Hyperspectral anomaly detection using sparse kernel-based ensemble learning

    NASA Astrophysics Data System (ADS)

    Gurram, Prudhvi; Han, Timothy; Kwon, Heesung

    2011-06-01

    In this paper, sparse kernel-based ensemble learning for hyperspectral anomaly detection is proposed. The proposed technique aims to optimize an ensemble of kernel-based one-class classifiers, such as Support Vector Data Description (SVDD) classifiers, by estimating optimal sparse weights. In this method, hyperspectral signatures are first randomly sub-sampled into a large number of spectral feature subspaces. An enclosing hypersphere that defines the support of the spectral data, corresponding to the normalcy/background data, in the Reproducing Kernel Hilbert Space (RKHS) of each respective feature subspace is then estimated using regular SVDD. The enclosing hypersphere basically represents the spectral characteristics of the background data in the respective feature subspace. The joint hypersphere is learned by optimally combining the hyperspheres from the individual RKHSs, while imposing the l1 constraint on the combining weights. The joint hypersphere, representing the most optimal compact support of the local hyperspectral data in the joint feature subspaces, is then used to test each pixel in the hyperspectral image data to determine whether it belongs to the local background data or not. The outliers are considered to be targets. The performance comparison between the proposed technique and the regular SVDD is provided using the HYDICE hyperspectral images.
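
    A simplified rendition of the ensemble idea using scikit-learn's OneClassSVM over random spectral subspaces; the uniform averaging at the end stands in for the paper's l1-constrained optimal weights, and all data are synthetic:

      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(5)
      background = rng.normal(0, 1, (300, 60))       # background spectra, 60 bands
      test = np.vstack([rng.normal(0, 1, (95, 60)),
                        rng.normal(3, 1, (5, 60))])  # last 5 rows are anomalies

      scores = np.zeros(len(test))
      n_models = 20
      for _ in range(n_models):
          bands = rng.choice(60, size=15, replace=False)  # random feature subspace
          m = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
          m.fit(background[:, bands])
          scores += m.decision_function(test[:, bands])
      scores /= n_models    # uniform combination (stand-in for learned l1 weights)

      print("most anomalous rows:", np.argsort(scores)[:5])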

  9. Kernel Averaged Predictors for Spatio-Temporal Regression Models.

    PubMed

    Heaton, Matthew J; Gelfand, Alan E

    2012-12-01

    In applications where covariates and responses are observed across space and time, a common goal is to quantify the effect of a change in the covariates on the response while adequately accounting for the spatio-temporal structure of the observations. The most common approach for building such a model is to confine the relationship between a covariate and response variable to a single spatio-temporal location. However, oftentimes the relationship between the response and predictors may extend across space and time. In other words, the response may be affected by levels of predictors in spatio-temporal proximity to the response location. Here, a flexible modeling framework is proposed to capture such spatial and temporal lagged effects between a predictor and a response. Specifically, kernel functions are used to weight a spatio-temporal covariate surface in a regression model for the response. The kernels are assumed to be parametric and non-stationary with the data informing the parameter values of the kernel. The methodology is illustrated on simulated data as well as a physical data set of ozone concentrations to be explained by temperature. PMID:24010051
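
    The core construction, a Gaussian kernel weighting of a covariate surface entering a regression for the response, can be sketched as follows (the locations, bandwidth and linear response model are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(6)
      sites = rng.uniform(0, 10, (80, 2))                  # observation locations
      temp = np.sin(sites[:, 0]) + rng.normal(0, 0.1, 80)  # covariate surface

      def kernel_average(s0, bandwidth=1.0):
          # Gaussian-kernel weighted average of the covariate around s0
          w = np.exp(-((sites - s0) ** 2).sum(1) / (2 * bandwidth ** 2))
          return (w * temp).sum() / w.sum()

      # The response depends on the kernel-averaged covariate, not only on
      # the covariate value observed at the same location
      X = np.array([kernel_average(s) for s in sites])
      y = 2.0 * X + rng.normal(0, 0.1, 80)   # e.g. ozone ~ smoothed temperature
      slope, intercept = np.polyfit(X, y, 1)
      print(slope, intercept)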

  10. Open-cluster density profiles derived using a kernel estimator

    NASA Astrophysics Data System (ADS)

    Seleznev, Anton F.

    2016-03-01

    Surface and spatial radial density profiles in open clusters are derived using a kernel estimator method. Formulae are obtained for the contribution of every star into the spatial density profile. The evaluation of spatial density profiles is tested against open-cluster models from N-body experiments with N = 500. Surface density profiles are derived for seven open clusters (NGC 1502, 1960, 2287, 2516, 2682, 6819 and 6939) using Two-Micron All-Sky Survey data and for different limiting magnitudes. The selection of an optimal kernel half-width is discussed. It is shown that open-cluster radius estimates hardly depend on the kernel half-width. Hints of stellar mass segregation and structural features indicating cluster non-stationarity in the regular force field are found. A comparison with other investigations shows that the data on open-cluster sizes are often underestimated. The existence of an extended corona around the open cluster NGC 6939 was confirmed. A combined function composed of the King density profile for the cluster core and the uniform sphere for the cluster corona is shown to be a better approximation of the surface radial density profile. The King function alone does not reproduce surface density profiles of sample clusters properly. The number of stars, the cluster masses and the tidal radii in the Galactic gravitational field for the sample clusters are estimated. It is shown that NGC 6819 and 6939 are extended beyond their tidal surfaces.
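
    A minimal NumPy sketch of a kernel estimator for a cluster's surface radial density profile, using an Epanechnikov kernel; the normalization over annuli is schematic and the star positions are synthetic, so this is an illustration of the idea rather than the paper's estimator:

      import numpy as np

      rng = np.random.default_rng(7)
      # Synthetic projected star positions, denser toward the cluster centre
      r_true = np.abs(rng.normal(0, 1.5, 500))
      theta = rng.uniform(0, 2 * np.pi, 500)
      xy = np.column_stack([r_true * np.cos(theta), r_true * np.sin(theta)])

      def surface_density(r, h=0.4):
          # Every star contributes an Epanechnikov kernel of half-width h
          # centred on its projected radius; dividing by the annulus
          # circumference gives a (schematic) per-area density
          d = np.abs(np.hypot(xy[:, 0], xy[:, 1]) - r)
          k = np.where(d < h, 0.75 * (1 - (d / h) ** 2) / h, 0.0)
          return k.sum() / (2 * np.pi * max(r, h))

      radii = np.linspace(0.2, 5.0, 25)
      profile = [surface_density(r) for r in radii]
      print(profile[0], profile[-1])   # density falls with radius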

  11. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy based on a 20 ppb threshold utilizing the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.

  12. General-form 3-3-3 interpolation kernel and its simplified frequency-response derivation

    NASA Astrophysics Data System (ADS)

    Deng, Tian-Bo

    2016-11-01

    An interpolation kernel is required in a wide variety of signal processing applications such as image interpolation and timing adjustment in digital communications. This article presents a general-form interpolation kernel called 3-3-3 interpolation kernel and derives its frequency response in a closed-form by using a simple derivation method. This closed-form formula is preliminary to designing various 3-3-3 interpolation kernels subject to a set of design constraints. The 3-3-3 interpolation kernel is formed through utilising the third-degree piecewise polynomials, and it is an even-symmetric function. Thus, it will suffice to consider only its right-hand side when deriving its frequency response. Since the right-hand side of the interpolation kernel contains three piecewise polynomials of the third degree, i.e. the degrees of the three piecewise polynomials are (3,3,3), we call it the 3-3-3 interpolation kernel. Once the general-form frequency-response formula is derived, we can systematically formulate the design of various 3-3-3 interpolation kernels subject to a set of design constraints, which are targeted for different interpolation applications. Therefore, the closed-form frequency-response expression is preliminary to the optimal design of various 3-3-3 interpolation kernels. We will use an example to show the optimal design of a 3-3-3 interpolation kernel based on the closed-form frequency-response expression.
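
    To make the construction concrete, the sketch below evaluates an even-symmetric piecewise-cubic kernel with three third-degree pieces on the right-hand side and computes its frequency response numerically. The coefficients are the classic Keys cubic with the third piece set to zero, used purely for illustration; they are not a kernel designed in the article:

      import numpy as np

      # Rows hold (a0, a1, a2, a3) for a0 + a1*x + a2*x^2 + a3*x^3 on the
      # pieces [0,1), [1,2), [2,3) of the right-hand side of the kernel.
      C = np.array([[1.0,  0.0, -2.5,  1.5],
                    [2.0, -4.0,  2.5, -0.5],
                    [0.0,  0.0,  0.0,  0.0]])

      def kernel(x):
          ax = np.abs(x)                     # even symmetry
          out = np.zeros_like(ax)
          for i in range(3):
              m = (ax >= i) & (ax < i + 1)
              out[m] = np.polyval(C[i][::-1], ax[m])
          return out

      # Numerical frequency response of the continuous kernel
      x = np.linspace(-3.0, 3.0, 6001)
      h, dx = kernel(x), x[1] - x[0]
      freqs = np.linspace(0.0, 2.0, 201)     # in units of the sampling rate
      H = [(h * np.cos(2 * np.pi * f * x)).sum() * dx for f in freqs]
      print(H[0])                            # ~1 at DC for an interpolating kernel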

  13. Hypothesis of a daemon kernel of the Earth

    NASA Astrophysics Data System (ADS)

    Drobyshevski, E. M.

    2004-01-01

    The paper considers the fate of the electrically charged (Ze 10e) Planckian elementary black holes, namely, daemons, making up the dark matter of the Galactic disc, which, as follows from our measurements, were trapped by the Earth during 4.5 Gyr in an amount of approximately 10²⁴. Owing to their huge mass (about 2 × 10⁻⁸ kg), these particles settle down to the Earth's centre to form a kernel. Assuming that the excess flux of 10-20 TW over the heat-flux level produced by known sources, which is quoted by many researchers, is due to the energy liberated in the outer kernel layers in daemon-stimulated proton decay of Fe nuclei, we have come to the conclusion that the Earth's kernel is at present a fraction of a metre in size. The observed mantle flux of ³He (and the limiting ³He to ⁴He ratio of about 10⁻⁴ itself) can be provided if at least one ³He (or ³T) nucleus is emitted in a daemon-stimulated decay of 10²-10³ Fe nuclei. This could actually remove the only objection to the hot origin of the Earth and to its original melting. The high energy liberation at the centre of the Earth drives two-phase two-dimensional convection in its inner core (IC), with rolls oriented along the rotation axis. This provides an explanation for the numerous features in the IC structure revealed in recent years (anisotropy in the seismic wave propagation, the existence of small irregularities, the strong damping of the P and S waves, ambiguities in the measurements of the IC rotation rate, etc.). The energy release in the kernel grows continuously as the number of daemons in it increases. Therefore the global tectonic activity, which had died out after the initial differentiation and cooling off of the Earth, was reanimated 2 Gyr ago by the rearrangement and enhancement of convection in the mantle as a result of the increasing outward energy flow. It is pointed out that, as the kernel continues to grow, the tectonic activity will become intensified rather than die out, as was

  14. Functional diversity among seed dispersal kernels generated by carnivorous mammals.

    PubMed

    González-Varo, Juan P; López-Bao, José V; Guitián, José

    2013-05-01

    1. Knowledge of the spatial scale of the dispersal service provided by important seed dispersers (i.e. common and/or keystone species) is essential to our understanding of their role in plant ecology, ecosystem functioning and, ultimately, biodiversity conservation. 2. Carnivores are the main mammalian frugivores and seed dispersers in temperate climate regions. However, information on the seed dispersal distances they generate is still very limited. We focused on two common temperate carnivores differing in body size and spatial ecology, the red fox (Vulpes vulpes) and the European pine marten (Martes martes), to evaluate possible functional diversity in their seed dispersal kernels. 3. We measured dispersal distances using colour-coded seed mimics embedded in experimental fruits that were offered to the carnivores in feeding stations (simulating source trees). The exclusive colour code of each simulated tree allowed us to assign the exact origin of seed mimics found later in carnivore faeces. We further designed an explicit sampling strategy aiming to detect the longest dispersal events; as far as we know, this is the most robust sampling scheme followed for tracking carnivore-dispersed seeds. 4. We found marked functional heterogeneity between the two species in their seed dispersal kernels according to their home range size: multimodality and long-distance dispersal in the case of the fox, and unimodality and short-distance dispersal in the case of the marten (maximum distances = 2846 and 1233 m, respectively). As a consequence, emergent kernels at the guild level (overall and in two different years) were highly dependent on the relative contribution of each carnivore species. 5. Our results provide the first empirical evidence of functional diversity among seed dispersal kernels generated by carnivorous mammals. Moreover, they illustrate for the first time how seed dispersal kernels strongly depend on the relative contribution of different disperser species, thus on the

  15. Interaction between drought and chronic high temperature during kernel filling in wheat in a controlled environment.

    PubMed

    Wardlaw, Ian F

    2002-10-01

    Wheat plants (Triticum aestivum L. 'Lyallpur'), limited to a single culm, were grown at day/night temperatures of either 18/13 °C (moderate temperature) or 27/22 °C (chronic high temperature) from the time of anthesis. Plants were either non-droughted or subjected to two post-anthesis water stresses by withholding water from plants grown in different volumes of potting mix. In selected plants the demand for assimilates by the ear was reduced by removal of all but the five central spikelets. In non-droughted plants, it was confirmed that shading following anthesis (source limitation) reduced kernel dry weight at maturity, with a compensating increase in the dry weight of the remaining kernels when the total number of kernels was reduced (small sink). Reducing kernel number did not alter the effect of high temperature following anthesis on the dry weight of the remaining kernels at maturity, but reducing the number of kernels did result in a greater dry weight of the remaining kernels of droughted plants. However, the relationship between the response to drought and kernel number was confounded by a reduction in the extent of water stress associated with kernel removal. Data on the effect of water stress on kernel dry weight at maturity of plants with either the full complement or reduced numbers of kernels, and subjected to low and high temperatures following anthesis, indicate that the effect of drought on kernel dry weight may be reduced, in both absolute and relative terms, rather than enhanced, at high temperature. It is suggested that where high temperature and drought occur concurrently after anthesis there may be a degree of drought escape associated with chronic high temperature due to the reduction in the duration of kernel filling, even though the rate of water use may be enhanced by high temperature. PMID:12324270

  16. The Effects of Kernel Feeding by Halyomorpha halys (Hemiptera: Pentatomidae) on Commercial Hazelnuts.

    PubMed

    Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M

    2014-10-01

    Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels based on field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development. PMID:26309276

  17. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed.
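
    The LS-SVM dual problem is a single linear system once the kernel is fixed, which the sketch below exploits. Instead of the paper's semidefinite programming formulation, it scans a one-dimensional grid of convex kernel combinations, purely to illustrate the moving parts; the data, kernel widths and gamma value are invented:

      import numpy as np

      def rbf(X, Y, s):
          d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d / (2 * s * s))

      rng = np.random.default_rng(8)
      X = np.vstack([rng.normal(-1, 1, (40, 3)), rng.normal(1, 1, (40, 3))])
      y = np.array([-1.0] * 40 + [1.0] * 40)
      kernels = [rbf(X, X, s) for s in (0.5, 1.0, 4.0)]   # candidate kernels

      def lssvm_train(K, y, gamma):
          # Function-estimation form of the LS-SVM linear system with +/-1
          # targets: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
          n = len(y)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = K + np.eye(n) / gamma
          sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
          return sol[0], sol[1:]

      # Scan convex combinations of two of the kernels (stand-in for the SDP)
      best = None
      for w in np.linspace(0.0, 1.0, 11):
          K = w * kernels[0] + (1.0 - w) * kernels[2]
          b, alpha = lssvm_train(K, y, gamma=10.0)
          acc = (np.sign(K @ alpha + b) == y).mean()
          if best is None or acc > best[0]:
              best = (acc, w)
      print("best (accuracy, weight):", best)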

  18. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a two-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets have demonstrated the effectiveness of our proposed method.

  19. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters. Hence, suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters of the kernel Fukunaga-Koontz transform (KFKT) based classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
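
    A compact illustration of differential-evolution-driven kernel parameter selection using SciPy. Since reproducing KFKT here would be lengthy, the objective below scores an RBF-kernel SVM by cross-validation instead, so the criterion, bounds and dataset are all stand-ins for the paper's setup:

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=10, random_state=0)

      def objective(params):
          # Negative cross-validated accuracy of an RBF-kernel classifier;
          # the paper instead optimizes a KFKT discrimination criterion
          log_gamma, log_c = params
          clf = SVC(kernel="rbf", gamma=10 ** log_gamma, C=10 ** log_c)
          return -cross_val_score(clf, X, y, cv=3).mean()

      result = differential_evolution(objective, bounds=[(-4, 1), (-2, 3)],
                                      maxiter=20, seed=0, tol=1e-3)
      print("best log10(gamma), log10(C):", result.x, "accuracy:", -result.fun)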

  20. Antioxidant and antimicrobial activities of bitter and sweet apricot (Prunus armeniaca L.) kernels.

    PubMed

    Yiğit, D; Yiğit, N; Mavi, A

    2009-04-01

    The present study describes the in vitro antimicrobial and antioxidant activity of methanol and water extracts of sweet and bitter apricot (Prunus armeniaca L.) kernels. The antioxidant properties of apricot kernels were evaluated by determining radical scavenging power, lipid peroxidation inhibition activity and total phenol content, measured with a DPPH test, the thiocyanate method and the Folin method, respectively. In contrast to extracts of the bitter kernels, both the water and methanol extracts of sweet kernels have antioxidant potential. The highest percent inhibition of lipid peroxidation (69%) and the highest total phenolic content (7.9 ± 0.2 μg/mL) were detected in the methanol extract of sweet kernels (Hasanbey) and in the water extract of the same cultivar, respectively. The antimicrobial activities of the above extracts were also tested against human pathogenic microorganisms using a disc-diffusion method, and the minimal inhibitory concentration (MIC) values of each active extract were determined. The most effective antibacterial activity was observed in the methanol and water extracts of bitter kernels and in the methanol extract of sweet kernels against the Gram-positive bacterium Staphylococcus aureus. Additionally, the methanol extracts of the bitter kernels were very potent against the Gram-negative bacterium Escherichia coli (0.312 mg/mL MIC value). Significant anti-Candida activity was also observed with the methanol extract of bitter apricot kernels against Candida albicans, with an inhibition zone of 14 mm in diameter and a 0.625 mg/mL MIC value.

  1. Removing blur kernel noise via a hybrid ℓp norm

    NASA Astrophysics Data System (ADS)

    Yu, Xin; Zhang, Shunli; Zhao, Xiaolin; Zhang, Li

    2015-01-01

    When estimating a sharp image from a blurred one, blur kernel noise often leads to inaccurate recovery. We develop an effective method to estimate a blur kernel that is able to remove kernel noise and prevent the production of an overly sparse kernel. Our method is based on an iterative framework which alternately recovers the sharp image and estimates the blur kernel. In the image recovery step, we utilize total variation (TV) regularization to recover latent images. In solving the TV regularization, we propose a new criterion which adaptively terminates the iterations before convergence; while improving efficiency, it does not degrade the quality of the final results. In the kernel estimation step, we develop a metric to measure the usefulness of image edges, by which we can reduce the ambiguity of kernel estimation caused by small-scale edges. We also propose a hybrid ℓp norm, composed of an ℓ2 norm and an ℓp norm with 0.7≤p<1, to construct a sparsity constraint. Using the hybrid ℓp norm, we reduce a wider range of kernel noise and recover a more accurate blur kernel. The experiments show that the proposed method achieves promising results on both synthetic and real images.

  2. Influence of argan kernel roasting-time on virgin argan oil composition and oxidative stability.

    PubMed

    Harhar, Hicham; Gharby, Saïd; Kartah, Bader; El Monfalouti, Hanae; Guillaume, Dom; Charrouf, Zoubida

    2011-06-01

    Virgin argan oil, which is harvested from argan fruit kernels, constitutes an alimentary source of substances of nutraceutical value. Chemical composition and oxidative stability of argan oil prepared from argan kernels roasted for different times were evaluated and compared with those of beauty argan oil that is prepared from unroasted kernels. Prolonged roasting time induced colour development and increased phosphorous content whereas fatty acid composition and tocopherol levels did not change. Oxidative stability data indicate that kernel roasting for 15 to 30 min at 110 °C is optimum to preserve virgin argan oil nutritive properties.

  3. Growth inhibition of a Fusarium verticillioides GUS strain in corn kernels of aflatoxin-resistant genotypes.

    PubMed

    Brown, R L; Cleveland, T E; Woloshuk, C P; Payne, G A; Bhatnagar, D

    2001-12-01

    Two corn genotypes, GT-MAS:gk and MI82, resistant to Aspergillus flavus infection/aflatoxin contamination, were tested for their ability to limit growth of Fusarium verticillioides. An F. verticillioides strain was transformed with a beta-glucuronidase (GUS) reporter gene (uidA) construct to facilitate quantification of fungal growth and then inoculated onto endosperm-wounded and non-wounded kernels of the above corn lines. To serve as a control, an A. flavus strain containing the same reporter gene construct was inoculated onto non-wounded kernels of GT-MAS:gk. Results showed that, as in a previous study, non-wounded GT-MAS:gk kernels supported less growth (six- to ten-fold) of A. flavus than did kernels of a susceptible control. Also, non-wounded kernels of GT-MAS:gk and MI82 supported less growth (two- to four-fold) of F. verticillioides than did susceptible kernels. Wounding, however, increased F. verticillioides infection of MI82, but not that of GT-MAS:gk. This is in contrast to a previous study of A. flavus, where wounding increased infection of GT-MAS:gk rather than MI82 kernels. Further study is needed to explain the genotypic variation in kernel response to A. flavus and F. verticillioides infections. Also, the potential for aflatoxin-resistant corn lines to likewise inhibit growth of F. verticillioides needs to be confirmed in the field. PMID:11778882

  4. Participation of cob tissue in the transport of medium components into maize kernels cultured in vitro

    SciTech Connect

    Felker, F.C. )

    1990-05-01

    Maize (Zea mays L.) kernels cultured in vitro while still attached to cob pieces have been used as a model system to study the physiology of kernel development. In this study, the role of the cob tissue in the uptake of medium components into kernels was examined. Cob tissue was essential for in vitro kernel growth, and better growth occurred with larger cob/kernel ratios. A symplastically transported fluorescent dye readily permeated the endosperm when supplied in the medium, while an apoplastic dye did not. Slicing the cob tissue to disrupt vascular connections, but not apoplastic continuity, greatly reduced (¹⁴C)sucrose uptake into kernels. (¹⁴C)Sucrose uptake by cob and kernel tissue was reduced 31% and 68%, respectively, by 5 mM PCMBS. L-(¹⁴C)glucose was absorbed much more slowly than D-(¹⁴C)glucose. These and other results indicate that phloem loading of sugars occurs in the cob tissue. Passage of medium components through the symplast of the cob tissue may be a prerequisite for uptake into the kernel. Simple diffusion from the medium to the kernels is unlikely. Therefore, the ability of substances to be transported into cob tissue cells should be considered in formulating culture medium.

  5. Graphlet kernels for prediction of functional residues in protein structures.

    PubMed

    Vacic, Vladimir; Iakoucheva, Lilia M; Lonardi, Stefano; Radivojac, Predrag

    2010-01-01

    We introduce a novel graph-based kernel method for annotating functional residues in protein structures. A structure is first modeled as a protein contact graph, where nodes correspond to residues and edges connect spatially neighboring residues. Each vertex in the graph is then represented as a vector of counts of labeled non-isomorphic subgraphs (graphlets), centered on the vertex of interest. A similarity measure between two vertices is expressed as the inner product of their respective count vectors and is used in a supervised learning framework to classify protein residues. We evaluated our method on two function prediction problems: identification of catalytic residues in proteins, which is a well-studied problem suitable for benchmarking, and a much less explored problem of predicting phosphorylation sites in protein structures. The performance of the graphlet kernel approach was then compared against two alternative methods, a sequence-based predictor and our implementation of the FEATURE framework. On both tasks, the graphlet kernel performed favorably; however, the margin of difference was considerably higher on the problem of phosphorylation site prediction. While there are data indicating that phosphorylation sites are preferentially positioned in intrinsically disordered regions, we provide evidence that, for sites located in structured regions, neither surface accessibility alone nor the averaged measures calculated from the residue microenvironments utilized by FEATURE are sufficient to achieve high accuracy. The key benefit of the graphlet representation is its ability to capture neighborhood similarities in protein structures via enumerating the patterns of local connectivity in the corresponding labeled graphs.

  6. Xyloglucans from flaxseed kernel cell wall: Structural and conformational characterisation.

    PubMed

    Ding, Huihuang H; Cui, Steve W; Goff, H Douglas; Chen, Jie; Guo, Qingbin; Wang, Qi

    2016-10-20

    The structure of the ethanol-precipitated fraction of 1 M KOH-extracted flaxseed kernel polysaccharides (KPI-EPF) was studied to better understand the molecular structures of flaxseed kernel cell wall polysaccharides. Based on methylation/GC-MS, NMR spectroscopy, and MALDI-TOF-MS analysis, the dominant sugar residues of the KPI-EPF fraction comprised (1,4,6)-linked-β-d-glucopyranose (24.1 mol%), terminal α-d-xylopyranose (16.2 mol%), (1,2)-α-d-linked-xylopyranose (10.7 mol%), (1,4)-β-d-linked-glucopyranose (10.7 mol%), and terminal β-d-galactopyranose (8.5 mol%). KPI-EPF was proposed as xyloglucans: The substitution rate of the backbone is 69.3%; R1 could be T-α-d-Xylp-(1→, or none; R2 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, or T-α-l-Araf-(1→2)-α-d-Xylp-(1→; R3 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, T-α-l-Fucp-(1→2)-β-d-Galp-(1→2)-α-d-Xylp-(1→, or none. The Mw of KPI-EPF was calculated to be 1506 kDa by static light scattering (SLS). The structure-sensitive parameter (ρ) of KPI-EPF was calculated as 1.44, which confirmed the highly branched structure of the extracted xyloglucans. These new findings on flaxseed kernel xyloglucans will be helpful for understanding their fermentation properties and potential applications. PMID:27474598

  7. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    PubMed

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  8. Nonlinear Knowledge in Kernel-Based Multiple Criteria Programming Classifier

    NASA Astrophysics Data System (ADS)

    Zhang, Dongling; Tian, Yingjie; Shi, Yong

    The kernel-based multiple criteria linear programming (KMCLP) model can be used as a classification method that learns from training examples, whereas in other traditional approaches data sets are classified only by prior knowledge. Some works combine these two classification principles to overcome the shortcomings of each approach. In this paper, we propose a model that incorporates nonlinear knowledge into KMCLP in order to solve the problem in which the input consists not only of training examples but also of nonlinear prior knowledge. In the real-world case of breast cancer diagnosis, the model shows better performance than the model based solely on training data.

  9. Anytime query-tuned kernel machine classifiers via Cholesky factorization

    NASA Technical Reports Server (NTRS)

    DeCoste, D.

    2002-01-01

    We recently demonstrated 2- to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste, 2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.

  10. Kernel PLS-SVC for Linear and Nonlinear Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan

    2003-01-01

    A new methodology for discrimination is proposed. It is based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS to Fisher's approach to linear discrimination, or equivalently to canonical correlation analysis, is described, which gives reason to prefer orthonormalized PLS over principal component analysis. The good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram recordings.

  11. Kernel methods for large-scale genomic data analysis

    PubMed Central

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743

  12. Sufficient conditions for a memory-kernel master equation

    NASA Astrophysics Data System (ADS)

    Chruściński, Dariusz; Kossakowski, Andrzej

    2016-08-01

    We derive sufficient conditions for the memory kernel governing a nonlocal master equation which guarantee a legitimate (completely positive and trace-preserving) dynamical map. It turns out that these conditions provide natural parametrizations of dynamical maps generalizing the Markovian semigroup. This parametrization is defined by a so-called legitimate pair (a monotonic quantum operation and a completely positive map), and it is shown that such a class of maps covers almost all known examples, from the Markovian semigroup and the semi-Markov evolution up to collision models and their generalizations.
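
    In standard notation (an assumption here, since the abstract does not display the equation), the nonlocal memory-kernel master equation and the dynamical map it generates read

      \frac{d\rho(t)}{dt} \;=\; \int_0^t K(t-\tau)\,\rho(\tau)\,d\tau,
      \qquad
      \rho(t) \;=\; \Lambda_t[\rho(0)],

    and the derived conditions on the kernel K are exactly those guaranteeing that each map Λ_t is completely positive and trace-preserving.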

  13. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)⁻ᵐ, m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)⁻ᵐ, terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
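
    For the m = 2 case, the Hadamard (finite-part) interpretation referred to above is conventionally defined as follows (standard textbook form, not quoted from the report):

      \mathrm{f.p.}\!\int_a^b \frac{\phi(t)}{(t-x)^2}\,dt
      \;=\; \lim_{\varepsilon\to 0^+}\left[
          \int_a^{x-\varepsilon}\frac{\phi(t)}{(t-x)^2}\,dt
        + \int_{x+\varepsilon}^{b}\frac{\phi(t)}{(t-x)^2}\,dt
        - \frac{2\,\phi(x)}{\varepsilon}
      \right],

    where the subtracted term removes the 2φ(x)/ε divergence that the two truncated integrals share, leaving a finite value.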

  14. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)⁻ᵐ, m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)⁻ᵐ, terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.

  15. Kernel based color estimation for night vision imagery

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojing; Sun, Shaoyuan; Fang, Jian'an; Zhou, Peng

    2012-04-01

    Displaying night vision (NV) imagery in color can largely improve an observer's performance in scene recognition and situational awareness compared to the conventional monochrome representation. However, estimating colors for single-band NV imagery poses two challenges: deriving an appropriate color mapping model and extracting the sufficient image features required by the model. To address these, a kernel-based regression model and a set of multi-scale image features are used here. The proposed method can automatically render single-band NV imagery with natural colors, even when it has an abnormal luminance distribution and lacks identifiable details.

  16. Partial Kernelization for Rank Aggregation: Theory and Experiments

    NASA Astrophysics Data System (ADS)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf

    Rank Aggregation is important in many areas ranging from web search over databases to bioinformatics. The underlying decision problem Kemeny Score is NP-complete even in case of four input rankings to be aggregated into a "median ranking". We study efficient polynomial-time data reduction rules that allow us to find optimal median rankings. On the theoretical side, we improve a result for a "partial problem kernel" from quadratic to linear size. On the practical side, we provide encouraging experimental results with data based on web search and sport competitions, e.g., computing optimal median rankings for real-world instances with more than 100 candidates within milliseconds.
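
    The underlying objective is easy to state in code: a Kemeny score sums the pairwise disagreements between a candidate median ranking and every input ranking. The brute-force search below is exponential in the number of candidates, which is exactly why the data reduction (kernelization) studied in the paper matters; the example rankings are invented:

      from itertools import combinations, permutations

      rankings = [(0, 1, 2, 3), (1, 0, 2, 3), (0, 2, 1, 3), (3, 0, 1, 2)]

      def kemeny_score(median, rankings):
          # Sum over input rankings of pairwise disagreements with the median
          pos = {c: i for i, c in enumerate(median)}
          score = 0
          for r in rankings:
              rpos = {c: i for i, c in enumerate(r)}
              for a, b in combinations(median, 2):
                  if (pos[a] - pos[b]) * (rpos[a] - rpos[b]) < 0:
                      score += 1
          return score

      # Brute force over all candidate medians: viable only for a handful of
      # candidates, hence the value of reducing instances to a small kernel
      best = min(permutations(range(4)), key=lambda m: kemeny_score(m, rankings))
      print(best, kemeny_score(best, rankings))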

  17. Emotion Recognition from Single-Trial EEG Based on Kernel Fisher's Emotion Pattern and Imbalanced Quasiconformal Kernel Support Vector Machine

    PubMed Central

    Liu, Yi-Hung; Wu, Chien-Te; Cheng, Wei-Teng; Hsiao, Yu-Tsung; Chen, Po-Ming; Teng, Jyh-Tong

    2014-01-01

    Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interface (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher's discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher's emotion pattern (KFEP), and is sent into layer 3 for further classification where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of the EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies of valence (82.68%) and arousal (84.79%) among all testing methods. PMID:25061837

  18. Selection of Haploid Maize Kernels from Hybrid Kernels for Plant Breeding Using Near-Infrared Spectroscopy and SIMCA Analysis

    SciTech Connect

    Jones, Roger W.; Reinot, Tonu; Frei, Ursula K.; Tseng, Yichia; Lübberstedt, Thomas; McClelland, John F.

    2012-04-01

    Samples of haploid and hybrid seed from three different maize donor genotypes after maternal haploid induction were used to test the capability of automated near-infrared transmission spectroscopy to individually differentiate haploid from hybrid seeds. Using a two-step chemometric analysis in which the seeds were first classified according to genotype and then the haploid or hybrid status was determined proved to be the most successful approach. This approach allowed 11 of 13 haploid and 25 of 25 hybrid kernels to be correctly identified from a mixture that included seeds of all the genotypes.

  19. Association mapping for kernel phytosterol content in almond

    PubMed Central

    Font i Forcada, Carolina; Velasco, Leonardo; Socias i Company, Rafel; Fernández i Martí, Ángel

    2015-01-01

    Almond kernels are a rich source of phytosterols, which are important compounds for human nutrition. The genetic control of phytosterol content has not yet been documented in almond. Association mapping (AM), also known as linkage disequilibrium (LD) mapping, was applied to an almond germplasm collection in order to provide new insight into the genetic control of total and individual sterol contents in kernels. Population structure analysis grouped the accessions into two principal groups, the Mediterranean and the non-Mediterranean. There was a strong subpopulation structure, with LD decaying with increasing genetic distance, resulting in lower levels of LD between more distant markers. A significant impact of population structure on LD in the almond cultivar groups was observed. The mean r2-value for all intra-chromosomal loci pairs was 0.040, whereas the r2 for the inter-chromosomal loci pairs was 0.036. Five models were tested for the analysis of association between the markers and phenotypic traits. The mixed linear model (MLM) approach using co-ancestry values from population structure and kinship estimates (K model) as covariates identified a maximum of 13 significant associations. Most of the associations found appeared to map within the interval where many candidate genes involved in the sterol biosynthesis pathway are predicted in the peach genome. These findings provide a valuable foundation for quality gene identification and molecular marker-assisted breeding in almond. PMID:26217374
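    For intuition, r2 statistics like those quoted above can be approximated from genotype data as squared Pearson correlations between loci, as in this sketch (assumption: 0/1/2 minor-allele-count coding; gametic-phase r2 would require haplotype data).

    ```python
    # Simplified linkage-disequilibrium r^2 between loci from genotype counts.
    import numpy as np

    def ld_r2(genotypes):
        """genotypes: (n_individuals, n_loci) -> (n_loci, n_loci) r^2 matrix."""
        corr = np.corrcoef(genotypes, rowvar=False)  # correlate columns (loci)
        return corr ** 2

    geno = np.random.default_rng(1).integers(0, 3, size=(100, 6))
    r2 = ld_r2(geno)
    print(r2[0, 1])  # r^2 between locus 0 and locus 1
    ```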

  20. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-08-16

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.
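    The CENTRIST descriptor is built on the census transform, which encodes each pixel by comparing it with its eight neighbours. A minimal sketch of that base descriptor follows (the paper's hierarchical extraction, pooling, and kernel sparse coding are not reproduced here).

    ```python
    # Census transform and CENTRIST-style histogram descriptor.
    import numpy as np

    def census_transform(img):
        """img: 2D grayscale array -> 8-bit census codes (borders dropped)."""
        c = img[1:-1, 1:-1]
        code = np.zeros_like(c, dtype=np.uint8)
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
        for bit, (dy, dx) in enumerate(shifts):
            nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(np.uint8) << bit  # one bit per neighbour
        return code

    def centrist(img):
        """256-bin histogram of census codes: a holistic structure descriptor."""
        return np.bincount(census_transform(img).ravel(), minlength=256)
    ```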

  1. KERNEL-SMOOTHED CONDITIONAL QUANTILES OF CORRELATED BIVARIATE DISCRETE DATA

    PubMed Central

    De Gooijer, Jan G.; Yuan, Ao

    2012-01-01

    Socio-economic variables are often measured on a discrete scale or rounded to protect confidentiality. Nevertheless, when exploring the effect of a relevant covariate on the outcome distribution of a discrete response variable, virtually all common quantile regression methods require the distribution of the covariate to be continuous. This paper departs from this basic requirement by presenting an algorithm for nonparametric estimation of conditional quantiles when both the response variable and the covariate are discrete. Moreover, we allow the variables of interest to be pairwise correlated. For computational efficiency, we aggregate the data into smaller subsets by a binning operation, and make inference on the resulting prebinned data. Specifically, we propose two kernel-based binned conditional quantile estimators, one for untransformed discrete response data and one for rank-transformed response data. We establish asymptotic properties of both estimators. A practical procedure for jointly selecting the band- and binwidth parameters is also presented. Simulation results show excellent estimation accuracy in terms of bias, mean squared error, and confidence interval coverage. Typically, prebinning the data leads to considerable computational savings when large datasets are under study, as compared to direct (un)conditional kernel quantile estimation of multivariate data. With this in mind, we illustrate the proposed methodology with an application to a large dataset concerning US hospital patients with congestive heart failure. PMID:23667297
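    A toy version of the prebinning idea (a sketch only: Gaussian kernel weights on the binned covariate and inversion of the weighted empirical CDF; the authors' estimators and joint bandwidth/binwidth selection are considerably more refined):

    ```python
    # Kernel-weighted conditional quantile on prebinned discrete data.
    import numpy as np

    def kernel_conditional_quantile(x, y, x0, tau=0.5, h=1.0):
        """Discrete x, y arrays; tau-quantile of y given covariate near x0."""
        # Prebin: unique (x, y) pairs with multiplicities replace the raw data.
        pairs, counts = np.unique(np.column_stack([x, y]), axis=0,
                                  return_counts=True)
        xs, ys = pairs[:, 0], pairs[:, 1]
        w = counts * np.exp(-0.5 * ((xs - x0) / h) ** 2)  # kernel weight per bin
        order = np.argsort(ys)
        cdf = np.cumsum(w[order]) / w.sum()               # weighted empirical CDF
        return ys[order][np.searchsorted(cdf, tau)]       # invert the CDF at tau
    ```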

  2. Parsimonious kernel extreme learning machine in primal via Cholesky factorization.

    PubMed

    Zhao, Yong-Ping

    2016-08-01

    Recently, the extreme learning machine (ELM) has become a popular topic in the machine learning community. By replacing the so-called ELM feature mappings with nonlinear mappings induced by kernel functions, two kernel ELMs, i.e., P-KELM and D-KELM, are obtained from the primal and dual perspectives, respectively. Unfortunately, both P-KELM and D-KELM possess dense solutions that grow in direct proportion to the number of training data. To remedy this, a constructive algorithm for P-KELM (CCP-KELM) is first proposed by virtue of Cholesky factorization, in which the training data incurring the largest reductions in the objective function are recruited as significant vectors. To reduce the training cost further, PCCP-KELM is then obtained by applying a probabilistic speedup scheme to CCP-KELM. Corresponding to CCP-KELM, a destructive P-KELM (CDP-KELM) is presented using a partial Cholesky factorization strategy, where the training data incurring the smallest reductions in the objective function after their removal are pruned from the current set of significant vectors. Finally, to verify the efficacy and feasibility of the proposed algorithms, experiments are conducted on both small and large benchmark data sets.
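    The dense baseline that the constructive algorithm sparsifies can be read as a kernel ridge solve via Cholesky factorization. A sketch under that reading (the RBF kernel and regularization constant C are assumptions; the paper's contribution is the greedy selection of significant vectors, not shown):

    ```python
    # Dense KELM-style baseline: solve (K + I/C) alpha = y with Cholesky.
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def rbf_kernel(A, B, gamma=0.1):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kelm_fit(X, y, C=10.0, gamma=0.1):
        K = rbf_kernel(X, X, gamma)
        return cho_solve(cho_factor(K + np.eye(len(X)) / C), y)  # alpha

    def kelm_predict(X_train, alpha, X_new, gamma=0.1):
        return rbf_kernel(X_new, X_train, gamma) @ alpha
    ```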

  3. Seismic hazard of the Iberian Peninsula: evaluation with kernel functions

    NASA Astrophysics Data System (ADS)

    Crespo, M. J.; Martínez, F.; Martí, J.

    2014-05-01

    The seismic hazard of the Iberian Peninsula is analysed using a nonparametric methodology based on statistical kernel functions; the activity rate is derived from the catalogue data, both its spatial dependence (without a seismogenic zonation) and its magnitude dependence (without using Gutenberg-Richter's relationship). The catalogue is that of the Instituto Geográfico Nacional, supplemented with other catalogues around the periphery; the quantification of events has been homogenised and spatially or temporally interrelated events have been suppressed to assume a Poisson process. The activity rate is determined by the kernel function, the bandwidth and the effective periods. The resulting rate is compared with that produced using Gutenberg-Richter statistics and a zoned approach. Three attenuation relationships have been employed, one for deep sources and two for shallower events, depending on whether their magnitude was above or below 5. The results are presented as seismic hazard maps for different spectral frequencies and for return periods of 475 and 2475 yr, which allows uniform hazard spectra to be constructed.
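    The zoneless activity rate can be sketched as a kernel sum over catalogue epicentres, with each event weighted by the reciprocal of its effective period so that the result is a rate in events per year (a simplified Gaussian-kernel stand-in; the study's kernel and bandwidth choices differ).

    ```python
    # Zoneless, kernel-smoothed seismic activity rate over a grid.
    import numpy as np

    def activity_rate(grid_xy, events_xy, effective_period_yr, h=25.0):
        """Smoothed rate (events/yr per unit area) at each grid point."""
        d2 = ((grid_xy[:, None, :] - events_xy[None, :, :]) ** 2).sum(-1)
        kern = np.exp(-0.5 * d2 / h**2) / (2 * np.pi * h**2)  # 2D Gaussian
        return (kern / effective_period_yr).sum(axis=1)       # 1/T_i weights
    ```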

  4. Seismic hazards of the Iberian Peninsula - evaluation with kernel functions

    NASA Astrophysics Data System (ADS)

    Crespo, M. J.; Martínez, F.; Martí, J.

    2013-08-01

    The seismic hazard of the Iberian Peninsula is analysed using a nonparametric methodology based on statistical kernel functions; the activity rate is derived from the catalogue data, both its spatial dependence (without a seismogenetic zonation) and its magnitude dependence (without using Gutenberg-Richter's law). The catalogue is that of the Instituto Geográfico Nacional, supplemented with other catalogues around the periphery; the quantification of events has been homogenised and spatially or temporally interrelated events have been suppressed to assume a Poisson process. The activity rate is determined by the kernel function, the bandwidth and the effective periods. The resulting rate is compared with that produced using Gutenberg-Richter statistics and a zoned approach. Three attenuation laws have been employed, one for deep sources and two for shallower events, depending on whether their magnitude was above or below 5. The results are presented as seismic hazard maps for different spectral frequencies and for return periods of 475 and 2475 yr, which allows uniform hazard spectra to be constructed.

  5. Gaussian Kernel Based Classification Approach for Wheat Identification

    NASA Astrophysics Data System (ADS)

    Aggarwal, R.; Kumar, A.; Raju, P. L. N.; Krishna Murthy, Y. V. N.

    2014-11-01

    Agriculture holds a pivotal role in India, which is basically an agrarian economy. Crop type identification is a key issue for monitoring agriculture and is the basis for crop acreage and yield estimation. However, it is very challenging to identify a specific crop using single-date imagery. Hence, a multi-temporal analysis approach is essential for specific crop identification. This research work deals with the implementation of a fuzzy classifier, Possibilistic c-Means (PCM), with and without a kernel-based approach, using temporal data of Landsat 8-OLI (Operational Land Imager) for identification of wheat in Radaur City, Haryana. The multi-temporal dataset covers the complete phenological cycle of wheat, from seedling to ripening. The experimental results show that the inclusion of a Gaussian kernel with the Euclidean Norm (ED Norm) in Possibilistic c-Means (KPCM) makes the soft classifier more robust in identifying the wheat crop. Also, identification of all the wheat fields depends upon appropriate selection of the temporal dates. The best combination of temporal data corresponds to the tillering, stem extension, heading and ripening stages of wheat growth. Entropy at testing sites of wheat has been used to validate the classified results. The entropy value at testing sites was observed to be low, implying low uncertainty of the existence of any other class and high certainty of the existence of the wheat crop at the wheat test sites.
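    A compact sketch of kernel possibilistic c-means with a Gaussian kernel (assumptions: standard KPCM updates with the kernel-induced distance 2(1 - K) and fixed eta parameters; not the exact formulation used in the study):

    ```python
    # Kernel possibilistic c-means (KPCM) sketch with a Gaussian kernel.
    import numpy as np

    def kpcm(X, c=2, m=2.0, sigma=1.0, eta=None, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        V = X[rng.choice(len(X), c, replace=False)]            # initial prototypes
        eta = np.ones(c) if eta is None else eta
        for _ in range(iters):
            K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
                       / (2 * sigma**2))
            d2 = 2.0 * (1.0 - K)                               # kernel-induced distance
            T = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))  # typicalities
            W = (T ** m) * K
            V = (W.T @ X) / W.sum(axis=0)[:, None]             # prototype update
        return T, V
    ```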

  6. Overcoming Unix kernel deficiencies in a portable, distributed storage system

    SciTech Connect

    Gary, M.

    1990-01-01

    The LINCS Storage System at Lawrence Livermore National Laboratory was designed to provide an efficient, portable, distributed file and directory system capable of running on a variety of hardware platforms, consistent with the IEEE Mass Storage System Reference Model. Our intent was to meet these requirements with a storage system running atop standard, unmodified versions of the Unix operating system. Most of the system components run as ordinary user processes. However, for those components that were implemented in the kernel to improve performance, Unix presented a number of hurdles. These included the lack of a lightweight tasking facility in the kernel; process-blocked I/O; inefficient data transfer; and the lack of optimized drivers for storage devices. How we overcame these difficulties is the subject of this paper. Ideally, future evolution of Unix by vendors will provide the missing facilities; until then, however, data centers adopting Unix operating systems for large-scale distributed computing will have to provide similar solutions. 11 refs., 5 figs.

  7. Very long chain fatty acid synthesis in sunflower kernels.

    PubMed

    Salas, Joaquín J; Martínez-Force, Enrique; Garcés, Rafael

    2005-04-01

    Most common seed oils contain small amounts of very long chain fatty acids (VLCFAs), the main components of oils from species such as Brassica napus or Lunaria annua. These fatty acids are synthesized from acyl-CoA precursors in the endoplasmic reticulum through the activity of a dissociated enzyme complex known as fatty acid elongase. We studied the synthesis of the arachidic, behenic, and lignoceric VLCFAs in sunflower kernels, in which they account for 1-3% of the saturated fatty acids. These VLCFAs are synthesized from 18:0-CoA by membrane-bound fatty acid elongases, and their biosynthesis is mainly dependent on NADPH equivalents. Two condensing enzymes appear to be responsible for the synthesis of VLCFAs in sunflower kernels, beta-ketoacyl-CoA synthase-I (KCS-I) and beta-ketoacyl-CoA synthase-II (KCS-II). Both of these enzymes were resolved by ion exchange chromatography and display different substrate specificities. While KCS-I displays a preference for 20:0-CoA, 18:0-CoA was more efficiently elongated by KCS-II. The two enzymes differ in their sensitivities to pH and Triton X-100, and their kinetic properties indicate that both are strongly inhibited by the presence of their substrates. In light of these results, the VLCFA composition of sunflower oil is considered in relation to that of other commercially exploited oils.

  8. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    PubMed Central

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  9. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis, where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
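    The projection/back-mapping step can be sketched with scikit-learn's KernelPCA, whose learned inverse transform stands in for the pre-image computation (patch shapes and parameters are illustrative; the paper interleaves this step with data-consistency gradient updates, not shown).

    ```python
    # Kernel-PCA block denoising sketch: project patches onto the main
    # nonlinear components, then back-map into the image domain.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    def denoise_blocks(blocks, n_components=8, gamma=0.5):
        """blocks: (n_blocks, block_size) array of vectorized image patches."""
        kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma,
                         fit_inverse_transform=True)  # enables pre-image mapping
        coords = kpca.fit_transform(blocks)           # nonlinear transform domain
        return kpca.inverse_transform(coords)         # back-map to image domain
    ```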

  11. Kernels and point processes associated with Whittaker functions

    NASA Astrophysics Data System (ADS)

    Blower, Gordon; Chen, Yang

    2016-09-01

    This article considers Whittaker's confluent hypergeometric function W_{κ,μ}, where κ is real and μ is real or purely imaginary. Then φ(x) = x^{-μ-1/2} W_{κ,μ}(x) arises as the scattering function of a continuous-time linear system with state space L^2(1/2, ∞) and input and output spaces C. The Hankel operator Γ_φ on L^2(0, ∞) is expressed as a matrix with respect to the Laguerre basis and gives the Hankel matrix of moments of a Jacobi weight w_0(x) = x^b (1 - x)^a. The operation of translating φ is equivalent to deforming w_0 to give w_t(x) = e^{-t/x} x^b (1 - x)^a. The determinant of the Hankel matrix of moments of w_ε satisfies the σ form of Painlevé's transcendental differential equation P_V. It is shown that Γ_φ gives rise to the Whittaker kernel from random matrix theory, as studied by Borodin and Olshanski [Commun. Math. Phys. 211, 335-358 (2000)]. Whittaker kernels are closely related to systems of orthogonal polynomials for a Pollaczek-Jacobi type weight lying outside the usual Szegő class.
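    As a small worked complement: the moments of the Jacobi weight w_0 are Beta-function values, μ_k = B(k + b + 1, a + 1), so the moment Hankel matrix is easy to tabulate numerically (this is only the undeformed case; the Painlevé statement above concerns the deformed weight).

    ```python
    # Hankel matrix of moments of the Jacobi weight w0(x) = x^b (1-x)^a on [0,1].
    import numpy as np
    from scipy.special import beta

    def hankel_moments(n, a=0.5, b=0.5):
        mu = np.array([beta(k + b + 1.0, a + 1.0) for k in range(2 * n - 1)])
        return mu[np.add.outer(np.arange(n), np.arange(n))]  # H[j, k] = mu[j + k]

    H = hankel_moments(4)
    print(np.linalg.det(H))  # moment Hankel determinant (undeformed weight)
    ```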

  12. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model

    PubMed Central

    Minnier, Jessica; Yuan, Ming; Liu, Jun S.; Cai, Tianxi

    2014-01-01

    Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models. PMID:26236061
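    The two-stage structure can be sketched with standard tools (assumptions: kernel ridge regression per gene-set for the stage-I risk scores and an L1-penalized combination for stage II; the authors instead use a naive Bayes KM formulation with kernel PCA).

    ```python
    # Two-stage gene-set risk model sketch: per-set kernel models, then a
    # regularized aggregation across sets. In-sample scores for brevity.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.linear_model import Lasso

    def two_stage_risk(X, y, gene_sets, alpha1=1.0, alpha2=0.01):
        """gene_sets: list of column-index arrays, one per gene-set."""
        scores = np.column_stack([
            KernelRidge(kernel="rbf", alpha=alpha1).fit(X[:, g], y).predict(X[:, g])
            for g in gene_sets
        ])                                         # stage I: per-set risk scores
        return Lasso(alpha=alpha2).fit(scores, y)  # stage II: regularized aggregation
    ```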

  13. Convex-relaxed kernel mapping for image segmentation.

    PubMed

    Ben Salah, Mohamed; Ben Ayed, Ismail; Jing Yuan; Hong Zhang

    2014-03-01

    This paper investigates a convex-relaxed kernel mapping formulation of image segmentation. We optimize, under some partition constraints, a functional containing two characteristic terms: 1) a data term, which maps the observation space to a higher (possibly infinite) dimensional feature space via a kernel function, thereby evaluating nonlinear distances between the observations and segment parameters and 2) a total-variation term, which favors smooth segment surfaces (or boundaries). The algorithm iterates two steps: 1) a convex-relaxation optimization with respect to the segments by solving an equivalent constrained problem via the augmented Lagrange multiplier method and 2) a convergent fixed-point optimization with respect to the segment parameters. The proposed algorithm can handle a variety of image types without the need for complex and application-specific statistical modeling, while retaining the computational benefits of convex relaxation. Our solution is amenable to parallelized implementations on graphics processing units (GPUs) and extends easily to high dimensions. We evaluated the proposed algorithm with several sets of comprehensive experiments and comparisons, including: 1) computational evaluations over 3D medical-imaging examples and high-resolution large-size color photographs, which demonstrate that a parallelized implementation of the proposed method run on a GPU can bring a significant speed-up and 2) accuracy evaluations against five state-of-the-art methods over the Berkeley color-image database and a multimodel synthetic data set, which demonstrate the competitive performance of the algorithm. PMID:24723519
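    For the second step, minimizing a kernel-induced distance with an RBF kernel yields a convergent weighted-mean fixed point for each segment parameter. A one-dimensional sketch of that update (sigma and the scalar observation model are assumptions, not the paper's settings):

    ```python
    # Fixed-point update for a segment parameter under an RBF kernel:
    # v <- sum_i K(x_i, v) x_i / sum_i K(x_i, v).
    import numpy as np

    def update_segment_param(pixels, v, sigma=10.0, iters=20):
        """pixels: 1D array of observations currently assigned to one segment."""
        for _ in range(iters):
            k = np.exp(-((pixels - v) ** 2) / (2 * sigma**2))
            v = (k * pixels).sum() / k.sum()  # convergent weighted-mean step
        return v
    ```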

  14. TORCH Computational Reference Kernels - A Testbed for Computer Science Research

    SciTech Connect

    Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich

    2010-12-02

    For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient-descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically expressed verification tests that can be used to verify that a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.

  15. A Testbed of Parallel Kernels for Computer Science Research

    SciTech Connect

    Bailey, David; Demmel, James; Ibrahim, Khaled; Kaiser, Alex; Koniges, Alice; Madduri, Kamesh; Shalf, John; Strohmaier, Erich; Williams, Samuel

    2010-04-30

    An initial result of the more modern study was the seven dwarfs, which were subsequently extended to 13 motifs. These motifs have already been useful in defining classes of applications for architecture-software studies. However, these broad-brush problem statements often miss the nuance seen in individual kernels. For example, the computational requirements of particle methods vary greatly between the naive (but more accurate) direct calculations and the particle-mesh and particle-tree codes. Thus we commenced our study with an enumeration of problems, but then proceeded by providing not only reference implementations for each problem but, more importantly, a mathematical definition that allows one to escape iterative approaches to software/hardware optimization. To ensure long-term value, we have augmented each of our reference implementations with both a scalable problem generator and a verification scheme. In a paper we have prepared that documents our efforts, we describe in detail this process of problem definition, scalable input creation, verification, and implementation of reference codes for the scientific computing domain. Table 1 enumerates and describes the level of support we have developed for each kernel. We group these important kernels using the Berkeley dwarfs/motifs taxonomy, marked with a red box in the appropriate column. As kernels become progressively complex, they build upon other, simpler computational methods. We note this dependency via orange boxes. After enumeration of the important numerical problems, we created a domain-appropriate high-level definition of each problem. To ensure future endeavors are not tainted by existing implementations, we specified the problem definition to be independent of both computer architecture and existing programming languages, models, and data types. Then, to provide context as to how such kernels productively map to existing architectures, languages and programming models, we produced reference implementations for most of

  16. Power Prediction in Smart Grids with Evolutionary Local Kernel Regression

    NASA Astrophysics Data System (ADS)

    Kramer, Oliver; Satzger, Benjamin; Lässig, Jörg

    Electric grids are moving from a centralized single supply chain towards a decentralized bidirectional grid of suppliers and consumers in an uncertain and dynamic scenario. Soon, the growing smart meter infrastructure will allow the collection of terabytes of detailed data about the grid condition, e.g., the state of renewable electric energy producers or the power consumption of millions of private customers, in very short time steps. Reliable prediction therefore requires strong and fast regression methods that are able to cope with these challenges. In this paper we introduce a novel regression technique, evolutionary local kernel regression, a kernel regression variant based on local Nadaraya-Watson estimators with independent bandwidths distributed in data space. The model is regularized with the CMA-ES, a stochastic non-convex optimization method. We experimentally analyze the load forecast behavior on real power consumption data. The proposed method is easily parallelizable and therefore well suited for large-scale scenarios in smart grids.
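    The underlying estimator is Nadaraya-Watson with one bandwidth per training point; in the sketch below a plain random search stands in for the CMA-ES used by the authors (all ranges and trial counts are assumptions).

    ```python
    # Local Nadaraya-Watson regression with independent per-point bandwidths.
    import numpy as np

    def nw_predict(x_train, y_train, x_query, bandwidths):
        """One bandwidth per training point, as in local kernel regression."""
        w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidths) ** 2)
        return (w * y_train).sum(axis=1) / w.sum(axis=1)

    def tune_bandwidths(x_tr, y_tr, x_val, y_val, trials=200, seed=0):
        """Random search over bandwidth vectors; CMA-ES would replace this."""
        rng = np.random.default_rng(seed)
        best, best_err = None, np.inf
        for _ in range(trials):
            h = rng.uniform(0.05, 2.0, size=len(x_tr))  # candidate bandwidths
            err = np.mean((nw_predict(x_tr, y_tr, x_val, h) - y_val) ** 2)
            if err < best_err:
                best, best_err = h, err
        return best
    ```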

  18. Fast Query-Optimized Kernel-Machine Classification

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; DeCoste, Dennis

    2004-01-01

    A recently developed algorithm performs kernel-machine classification via incremental approximate nearest support vectors. The algorithm implements support-vector machines (SVMs) at speeds 10 to 100 times those attainable by use of conventional SVM algorithms. The algorithm offers potential benefits for classification of images, recognition of speech, recognition of handwriting, and diverse other applications in which there are requirements to discern patterns in large sets of data. SVMs constitute a subset of kernel machines (KMs), which have become popular as models for machine learning and, more specifically, for automated classification of input data on the basis of labeled training data. While similar in many ways to k-nearest-neighbors (k-NN) models and artificial neural networks (ANNs), SVMs tend to be more accurate. Using representations that scale only linearly in the number of training examples, while exploring nonlinear (kernelized) feature spaces that are exponentially larger than the original input dimensionality, KMs elegantly and practically overcome the classic curse of dimensionality. However, the price that one must pay for the power of KMs is that query-time complexity scales linearly with the number of training examples, making KMs often orders of magnitude more computationally expensive than ANNs, decision trees, and other popular machine learning alternatives. The present algorithm treats an SVM classifier as a special form of a k-NN. The algorithm is based partly on an empirical observation that one can often achieve the same classification as that of an exact KM by using only a small fraction of the nearest support vectors (SVs) of a query. The exact KM output is a weighted sum over the kernel values between the query and the SVs. In this algorithm, the KM output is approximated with a k-NN classifier, the output of which is a weighted sum only over the kernel values involving k selected SVs. Before query time, there are gathered
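    The approximation described above reduces to truncating the kernel expansion to the k support vectors nearest the query; for a monotone kernel such as the RBF, these are also the SVs with the largest kernel values. A minimal sketch (names and parameters are illustrative, not the paper's implementation):

    ```python
    # Approximate SVM decision value using only the k nearest support vectors.
    import numpy as np

    def approx_svm_output(query, svs, coeffs, bias, k=10, gamma=0.5):
        """svs: (n_sv, d); coeffs: signed alpha_i * y_i weights; RBF kernel."""
        d2 = ((svs - query) ** 2).sum(axis=1)
        nearest = np.argpartition(d2, min(k, len(d2)) - 1)[:k]  # k nearest SVs
        return coeffs[nearest] @ np.exp(-gamma * d2[nearest]) + bias
    ```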

  19. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    SciTech Connect

    Huang, Jessie Y.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.; Eklund, David; Childress, Nathan L.

    2013-12-15

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found
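    At its core, C/S computes dose as the convolution of TERMA with an energy deposition kernel. A one-dimensional caricature follows (toy arrays only; clinical C/S ray-traces density-scaled 3D kernels, which is precisely where the refinements studied here enter):

    ```python
    # 1D caricature of the convolution/superposition dose calculation.
    import numpy as np

    depth = np.arange(100)                      # depth bins (arbitrary units)
    mu = 0.03                                   # toy attenuation coefficient
    terma = np.exp(-mu * depth)                 # exponentially attenuated TERMA
    kernel = np.exp(-np.abs(np.arange(-10, 11)) / 3.0)
    kernel /= kernel.sum()                      # normalized energy deposition kernel
    dose = np.convolve(terma, kernel, mode="same")  # dose = TERMA (*) EDK
    ```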

  20. Feasibility of detecting Aflatoxin B1 in single maize kernels using hyperspectral imaging

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The feasibility of detecting Aflatoxin B1 (AFB1) in single maize kernels inoculated with Aspergillus flavus conidia in the field, as well as its spatial distribution in the kernels, was assessed using the near-infrared hyperspectral imaging (HSI) technique. Firstly, an image mask was applied to a pixel-b...