Time Slows Down during Accidents
Arstila, Valtteri
2012-01-01
The experienced speed of the passage of time is not constant, as time can seem to fly or slow down depending on the circumstances we are in. Anecdotally, accidents and other frightening events are extreme examples of the latter; people who have survived accidents often report altered phenomenology, including how everything appeared to happen in slow motion. While the experienced phenomenology has been investigated, there are no explanations of how one can have these experiences. Instead, the only recently discussed explanation suggests that the anecdotal phenomenology is due to memory effects and hence not really experienced during the accidents. The purpose of this article is (i) to reintroduce the currently forgotten, comprehensively altered phenomenology that some people experience during accidents, (ii) to explain why the recent experiments fail to address the issue at hand, and (iii) to suggest a new framework to explain what happens when people report having experiences of time slowing down in these cases. According to the suggested framework, our cognitive processes become rapidly enhanced. As a result, the relation between the temporal properties of events in the external world and in internal states becomes distorted, with the consequence of the external world appearing to slow down. That is, the presented solution is a realist one in the sense that it maintains that sometimes people really do have experiences of time slowing down. PMID:22754544
Is cosmic acceleration slowing down?
Shafieloo, Arman; Sahni, Varun; Starobinsky, Alexei A.
2009-11-15
We investigate the course of cosmic expansion in its recent past using the Constitution SN Ia sample, along with baryon acoustic oscillations (BAO) and cosmic microwave background (CMB) data. Allowing the equation of state of dark energy (DE) to vary, we find that a coasting model of the universe (q0 = 0) fits the data about as well as Lambda cold dark matter. This effect, which is most clearly seen using the recently introduced Om diagnostic, corresponds to an increase of Om and q at redshifts z ≲ 0.3. This suggests that cosmic acceleration may have already peaked and that we are currently witnessing its slowing down. The case for evolving DE strengthens if a subsample of the Constitution set consisting of SNLS+ESSENCE+CfA SN Ia data is analyzed in combination with BAO+CMB data. The effect we observe could correspond to DE decaying into dark matter (or something else).
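The Om diagnostic referred to above has a simple closed form, Om(z) = (E²(z) − 1)/((1 + z)³ − 1) with E = H/H0; for flat ΛCDM it is constant and equal to Ωm, so any redshift dependence signals evolving dark energy. A minimal sketch of this property (the value Ωm = 0.3 is an illustrative assumption, not taken from this record):

```python
import numpy as np

def E2_lcdm(z, omega_m=0.3):
    """Squared dimensionless Hubble rate H^2/H0^2 for flat LambdaCDM."""
    return omega_m * (1.0 + z)**3 + (1.0 - omega_m)

def om_diagnostic(z, E2):
    """Om(z) = (E^2(z) - 1) / ((1+z)^3 - 1).

    Constant in z (and equal to Omega_m) if dark energy is a
    cosmological constant; a rise of Om at low z is the signature
    discussed in the abstract."""
    return (E2 - 1.0) / ((1.0 + z)**3 - 1.0)

z = np.array([0.1, 0.5, 1.0, 2.0])
om = om_diagnostic(z, E2_lcdm(z))
print(om)  # every entry equals 0.3 for this LambdaCDM input
```

Because the diagnostic needs only H(z), not a second derivative of the distance, it is less noisy than reconstructing q(z) directly.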
Critical slowing down in a dynamic duopoly
NASA Astrophysics Data System (ADS)
Escobido, M. G. O.; Hatano, N.
2015-01-01
Anticipating critical transitions is very important in economic systems, as it can mean survival or demise of firms under stressful competition. As such, identifying indicators that can provide early warning of these transitions is crucial. In other complex systems, critical slowing down has been shown to anticipate critical transitions. In this paper, we investigate the applicability of the concept to the heterogeneous quantity competition between two firms. We develop a dynamic model in which the duopoly can adjust its production in a logistic process. We show that the resulting dynamics is formally equivalent to a competitive Lotka-Volterra system. We investigate the behavior of the dominant eigenvalues and identify conditions under which critical slowing down can provide early warning of the critical transitions in the dynamic duopoly.
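The early-warning idea in this abstract can be illustrated with a toy symmetric competitive Lotka-Volterra system (an illustrative assumption; the paper's heterogeneous model differs): as the competition coefficient a approaches its critical value 1, the dominant Jacobian eigenvalue at the coexistence equilibrium approaches zero, so recovery from perturbations slows down.

```python
import numpy as np

def dominant_eigenvalue(a):
    """Dominant Jacobian eigenvalue of the symmetric competitive
    Lotka-Volterra system
        dx/dt = x(1 - x - a*y),  dy/dt = y(1 - y - a*x)
    at the coexistence equilibrium x* = y* = 1/(1+a).
    Analytically the eigenvalues are (-1 +/- a)/(1+a)."""
    J = np.array([[-1.0, -a], [-a, -1.0]]) / (1.0 + a)
    return np.max(np.linalg.eigvals(J).real)

for a in [0.5, 0.9, 0.99]:
    # The recovery rate -lambda_max shrinks toward 0 as a -> 1:
    # this is critical slowing down ahead of the transition.
    print(a, dominant_eigenvalue(a))
```

The closed form (a − 1)/(1 + a) shows the relaxation time 1/|λ| diverging at a = 1, which is the signature a practitioner would monitor.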
Lead Slowing Down Spectrometer Status Report
Warren, Glen A.; Anderson, Kevin K.; Bonebrake, Eric; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, Victor A.; Haight, R. C.; Imel, G. R.; Kulisek, Jonathan A.; O'Donnell, J. M.; Weltz, Adam
2012-06-07
This report documents the progress completed in the first half of FY2012 in the MPACT-funded Lead Slowing Down Spectrometer project. Significant progress has been made on the algorithm development. We have improved our understanding of the experimental responses of the LSDS for fuel-related material. The calibration of the ultra-depleted uranium foils was completed, but the results are inconsistent from measurement to measurement. Future work includes developing a conceptual model of an LSDS system to assay plutonium in used fuel, improving agreement between simulations and measurements, designing a thorium fission chamber, and evaluating additional detector techniques.
Lead Slowing Down Spectrometer Research Plans
Warren, Glen A.; Kulisek, Jonathan A.; Gavron, Victor; Danon, Yaron; Weltz, Adam; Harris, Jason; Stewart, T.
2013-03-22
The MPACT-funded Lead Slowing Down Spectrometry (LSDS) project has been evaluating the feasibility of using LSDS techniques to assay fissile isotopes in used nuclear fuel assemblies. The approach has the potential to provide considerable improvement in the assay of fissile isotopic masses in fuel assemblies compared to other non-destructive techniques, in a direct and independent manner. The LSDS collaboration suggests that the next step in empirically testing the feasibility is to conduct measurements on fresh fuel assemblies to investigate self-attenuation, and on fresh mixed-oxide (MOX) fuel rodlets to better understand the extraction of masses for 235U and 239Pu. While progressing toward these goals, the collaboration also strongly suggests the continued development of enabling technology, such as detector and algorithm development, which could provide significant performance benefits.
A Comprehensive Investigation on the Slowing Down of Cosmic Acceleration
NASA Astrophysics Data System (ADS)
Wang, Shuang; Hu, Yazhou; Li, Miao; Li, Nan
2016-04-01
Shafieloo et al. first proposed the possibility that the current cosmic acceleration (CA) is slowing down. However, this is rather counterintuitive because a slowing down CA cannot be accommodated in most mainstream cosmological models. In this work, by exploring the evolutionary trajectories of the dark energy equation of state w(z) and deceleration parameter q(z), we present a comprehensive investigation on the slowing down of CA from both the theoretical and the observational sides. For the theoretical side, we study the impact of different w(z) using six parametrization models, and then we discuss the effects of spatial curvature. For the observational side, we investigate the effects of different type Ia supernovae (SNe Ia), baryon acoustic oscillation (BAO), and cosmic microwave background (CMB) data. We find that (1) the evolution of CA is insensitive to the specific form of w(z); in contrast, a non-flat universe favors a slowing down CA more than a flat universe. (2) SNLS3 SNe Ia data sets favor a slowing down CA at a 1σ confidence level, while JLA SNe Ia samples prefer an eternal CA; in contrast, the effects of different BAO data are negligible. (3) Compared with CMB distance prior data, full CMB data favor a slowing down CA more. (4) Due to the low significance, the slowing down of CA is still a theoretical possibility that cannot be confirmed by the current observations.
Methionine restriction slows down senescence in human diploid fibroblasts
Kozieł, Rafał; Ruckenstuhl, Christoph; Albertini, Eva; Neuhaus, Michael; Netzberger, Christine; Bust, Maria; Madeo, Frank; Wiesner, Rudolf J; Jansen-Dürr, Pidder
2014-01-01
Methionine restriction (MetR) extends lifespan in animal models including rodents. Using human diploid fibroblasts (HDF), we report here that MetR significantly extends their replicative lifespan, thereby postponing cellular senescence. MetR significantly decreased activity of mitochondrial complex IV and diminished the accumulation of reactive oxygen species. Lifespan extension was accompanied by a significant decrease in the levels of subunits of mitochondrial complex IV, but also complex I, which was due to a decreased translation rate of several mtDNA-encoded subunits. Together, these findings indicate that MetR slows down aging in human cells by modulating mitochondrial protein synthesis and respiratory chain assembly. PMID:25273919
Report on First Activations with the Lead Slowing Down Spectrometer
Warren, Glen A.; Mace, Emily K.; Pratt, Sharon L.; Stave, Sean; Woodring, Mitchell L.
2011-03-03
On Feb. 17 and 18, 2011, six items were irradiated with neutrons using the Lead Slowing Down Spectrometer. After irradiation, dose measurements and gamma-spectrometry measurements were completed on all of the samples. No contamination was found on the samples, and all but one gave no dose. Gamma-spectroscopy measurements agreed qualitatively with expectations based on the materials, with the exception of silver. We observed activation in the room in general, mostly due to 56Mn and 24Na. Most of the activation was short-lived, with half-lives on the scale of hours, except for 198Au, which has a half-life of 2.7 d.
The promise of slow down ageing may come from curcumin.
Sikora, E; Bielak-Zmijewska, A; Mosieniak, G; Piwocka, K
2010-01-01
No genes exist that have been selected to promote aging. The evolutionary theory of aging tells us that there is a trade-off between body maintenance and investment in reproduction. It is commonly accepted that the ageing process is driven by the lifelong accumulation of molecular damage, mainly due to reactive oxygen species (ROS) produced by mitochondria, as well as random errors in DNA replication. Although ageing itself is not a disease, numerous diseases are age-related, such as cancer, Alzheimer's disease, atherosclerosis, metabolic disorders and others, likely caused by low-grade inflammation driven by oxygen stress and manifested by increased levels of pro-inflammatory cytokines such as IL-1, IL-6 and TNF-alpha, encoded by genes activated by the transcription factor NF-kappaB. It is believed that ageing is plastic and can be slowed down by caloric restriction as well as by some nutraceuticals. As the low-grade inflammatory process is believed to contribute substantially to ageing, slowing ageing and postponing the onset of age-related diseases may be achieved by blocking NF-kappaB-dependent inflammation. In this review we consider the possibility that the natural spice curcumin, a powerful antioxidant, anti-inflammatory agent and efficient inhibitor of NF-kappaB and of the mTOR signaling pathway, which overlaps that of NF-kappaB, could slow down ageing. PMID:20388102
Report on Second Activations with the Lead Slowing Down Spectrometer
Stave, Sean C.; Mace, Emily K.; Pratt, Sharon L.; Warren, Glen A.
2012-04-27
Summary: On August 18 and 19, 2011, five items were irradiated with neutrons using the Lead Slowing Down Spectrometer (LSDS). After irradiation, dose measurements and gamma-spectrometry measurements were completed on all of the samples. No contamination was found on the samples, and all but one gave no dose. Gamma-spectroscopy measurements agreed qualitatively with expectations based on the materials. As during the first activation run, we observed activation in the room in general, mostly due to 56Mn and 24Na. Most of the activation of the samples was short-lived, with half-lives on the scale of hours to days, except for 60Co, which has a half-life of 5.3 y.
Cosmic slowing down of acceleration for several dark energy parametrizations
Magaña, Juan; Cárdenas, Víctor H.; Motta, Verónica E-mail: victor.cardenas@uv.cl
2014-10-01
We further investigate the slowing down of acceleration of the universe scenario for five parametrizations of the equation of state of dark energy, using four sets of Type Ia supernovae data. In a maximal probability analysis we also use the baryon acoustic oscillation and cosmic microwave background observations. We found that the low-redshift transition of the deceleration parameter appears, independently of the parametrization, using supernovae data alone, except for the Union 2.1 sample. This feature disappears once we combine the Type Ia supernovae data with high-redshift data. We conclude that the rapid variation of the deceleration parameter is independent of the parametrization. We also found more evidence for a tension among the supernovae samples, as well as between the low- and high-redshift data.
Did growth of high Andes slow down Nazca plate subduction?
NASA Astrophysics Data System (ADS)
Quinteros, J.; Sobolev, S. V.
2010-12-01
The convergence velocity of the Nazca and South American plates and its variations during the last 100 Myr are quite well known from global plate reconstructions. The key observation is that the rate of Nazca plate subduction has decreased by about a factor of two during the last 20 Myr, and particularly since 10 Ma. During the same time, the Central Andes have grown to their present 3-4 km height. Based on a thin-shell model coupled with mantle convection, it was suggested that the slowing down of the Nazca plate resulted from the additional load exerted by the Andes. However, the thin-shell model, which integrates stresses and velocities vertically and therefore has no vertical resolution, is not an optimal tool for modeling a subduction zone. More appropriate is a full thermomechanical formulation with self-consistent subduction. We performed a set of experiments to estimate the influence that an orogen like the Andes could have on an ongoing subduction. We used an enhanced 2D version of the SLIM-3D code, suitable for simulating the evolution of a subducting slab in a self-consistent manner (gravity driven) on vertical cross-sections through the upper mantle, transition zone, and shallower lower mantle. The model utilizes non-linear temperature- and stress-dependent visco-elasto-plastic rheology and phase transitions at 410 and 660 km depth. We started from a reference case with a configuration similar to that of the Nazca and South American plates. After some Myr of slow, kinematically imposed subduction to develop a coherent thermo-mechanical state, subduction was fully dynamic. In the other cases, the crust was slowly thickened artificially during 10 Myr to generate the Andean topography. Although our first results show no substantial changes in the velocity pattern of the subduction, we consider this result preliminary. At the meeting we plan to report completed and verified modeling results and discuss other possible causes of the late Cenozoic slowing down of
How to slow down light and where relativity theory fails
NASA Astrophysics Data System (ADS)
Zhang, Meggie
2013-03-01
This research found logical errors in mathematics and in physics. After making an assumption about the discovered wave-particle duality, I reinterpreted quantum mechanics, was able to find new information in existing publications, and concluded that the photon is not a fundamental particle but has a structure. This work has been presented at several APS meetings and at EuNPC2012. During my research I also arrived at the same conclusion using Newton's theory of space-time, then found that the assumptions relativity theory makes fail a logical test and violate basic mathematical logic, that Minkowski space violates Newton's laws of motion, and that the Lorentz 4-dimensional transformation is mathematically incomplete. After modifying existing physics theories, I designed an experiment to demonstrate how light can be slowed down or stopped for structural study. Such methods were also turned into a continuous room-temperature fusion method. However, the discoveries involve a large amount of complex logical analysis. Physicists are generally not philosophers; therefore, making the discovery fully understood by most physicists is very challenging. This work is supported by Dr. Kursh at Northeastern University.
Slowing Down Downhill Folding: A Three-Probe Study
Kim, Seung Joong; Matsumura, Yoshitaka; Dumont, Charles; Kihara, Hiroshi; Gruebele, Martin
2009-09-11
The mutant Tyr22Trp/Glu33Tyr/Gly46Ala/Gly48Ala of λ repressor fragment λ6-85 was previously assigned as an incipient downhill folder. We slow down its folding in a cryogenic water-ethylene-glycol solvent (-18 to -28 °C). The refolding kinetics are probed by small-angle x-ray scattering, circular dichroism, and fluorescence to measure the radius of gyration, the average secondary structure content, and the native packing around the single tryptophan residue. The main resolved kinetic phase of the mutant is probe independent and faster than the main phase observed for the pseudo-wild-type. Excess helical structure formed early on by the mutant may reduce the formation of turns and prevent the formation of compact misfolded states, speeding up the overall folding process. Extrapolation of our main cryogenic folding phase and previous T-jump measurements to 37 °C yields nearly the same refolding rate as extrapolated by Oas and co-workers from NMR line-shape data. Taken together, all the data consistently indicate a folding speed limit of ~4.5 μs for this fast folder.
Hydrogen Bonding Slows Down Surface Diffusion of Molecular Glasses.
Chen, Yinshan; Zhang, Wei; Yu, Lian
2016-08-18
Surface-grating decay has been measured for three organic glasses with extensive hydrogen bonding: sorbitol, maltitol, and maltose. For 1000 nm wavelength gratings, the decay occurs by viscous flow in the entire range of temperature studied, covering the viscosity range 10^5-10^11 Pa s, whereas under the same conditions, the decay mechanism transitions from viscous flow to surface diffusion for organic glasses of similar molecular sizes but with no or limited hydrogen bonding. These results indicate that extensive hydrogen bonding slows down surface diffusion in organic glasses. This effect arises because molecules can preserve hydrogen bonding even near the surface so that the loss of nearest neighbors does not translate into a proportional decrease of the kinetic barrier for diffusion. This explanation is consistent with a strong correlation between liquid fragility and the surface enhancement of diffusion, both reporting resistance of a liquid to dynamic excitation. Slow surface diffusion is expected to hinder any processes that rely on surface transport, for example, surface crystal growth and formation of stable glasses by vapor deposition. PMID:27404465
Lead Slowing Down Spectrometer FY2013 Annual Report
Warren, Glen A.; Kulisek, Jonathan A.; Gavron, Victor A.; Danon, Yaron; Weltz, Adam; Harris, Jason; Stewart, T.
2013-10-29
Executive Summary: The Lead Slowing Down Spectrometry (LSDS) project, funded by the Materials Protection And Control Technology campaign, has been evaluating the feasibility of using LSDS techniques to assay fissile isotopes in used nuclear fuel assemblies. The approach has the potential to provide considerable improvement in the assay of fissile isotopic masses in fuel assemblies compared to other non-destructive techniques, in a direct and independent manner. This report is a high-level summary of the progress completed in FY2013. This progress included: • Fabrication of a 4He scintillator detector to detect fast neutrons in the LSDS operating environment. Testing of the detector will be conducted in FY2014. • Design of a large-area 232Th fission chamber. • Analysis using the Los Alamos National Laboratory perturbation model, which estimated the required number of source neutrons for an LSDS measurement to be 10^16. • Application of the algorithms developed at Pacific Northwest National Laboratory to LSDS measurement data for various fissile samples collected in 2012. The results showed that 235U could be measured to 2.7% and 239Pu to 6.3%. Significant effort is still needed to demonstrate the applicability of these algorithms to used-fuel assemblies, but the results reported here are encouraging in demonstrating progress toward that goal. • Development and cost analysis of a research plan for the next critical demonstration measurements. The plan suggests measurements on fresh fuel subassemblies as a means to experimentally test self-attenuation, and the use of fresh mixed-oxide fuel as a means to test simultaneous measurement of 235U and 239Pu.
NASA Astrophysics Data System (ADS)
Fomin, Fedor V.
Preprocessing (data reduction or kernelization) as a strategy of coping with hard problems is universally used in almost every implementation. The history of preprocessing, like applying reduction rules simplifying truth functions, can be traced back to the 1950's [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial time preprocessing algorithms was neglected. The basic reason for this anomaly was that if we start with an instance I of an NP-hard problem and can show that in polynomial time we can replace this with an equivalent instance I' with |I'| < |I| then that would imply P=NP in classical complexity.
Ono, M.; Wada, K.; Kitada, T.
2012-07-01
A simplified treatment of the resonance elastic scattering model, considering the thermal motion of heavy nuclides and the energy dependence of the resonance cross section, was implemented in NJOY [1]. In order to solve the deterministic slowing-down equation considering the effect of up-scattering without iterative calculations, the scattering kernel for heavy nuclides is pre-calculated by the formula derived by Ouisloumen and Sanchez [2], and the neutron spectrum in the up-scattering term is expressed by the NR approximation. To verify the simplified treatment, it is applied to U-238 for the energy range from 4 eV to 200 eV. The calculated multi-group capture cross section of U-238 is greater than that of the conventional method, and the increase of the capture cross section becomes more pronounced at higher temperatures. Therefore, the Doppler coefficient calculated for a UO2 fuel pin is more negative than that of the conventional method. The impact on the Doppler coefficient is equivalent to the results of the exact treatment of resonance elastic scattering reported in previous studies [2-7]. The agreement supports the validity of the simplified treatment, and therefore it is applied to other heavy nuclides to evaluate the Doppler coefficient in MOX fuel. The result shows that the impact on the Doppler coefficient of considering thermal agitation in resonance scattering comes mainly from U-238, and that of other heavy nuclides such as Pu-239 and Pu-240 is not comparable in MOX fuel. (authors)
Critical slowing down and hyperuniformity on approach to jamming
NASA Astrophysics Data System (ADS)
Atkinson, Steven; Zhang, Ge; Hopkins, Adam B.; Torquato, Salvatore
2016-07-01
Hyperuniformity characterizes a state of matter that is poised at a critical point at which density or volume-fraction fluctuations are anomalously suppressed at infinite wavelengths. Recently, much attention has been given to the link between strict jamming (mechanical rigidity) and (effective or exact) hyperuniformity in frictionless hard-particle packings. However, in doing so, one must necessarily study very large packings in order to access the long-ranged behavior and to ensure that the packings are truly jammed. We modify the rigorous linear programming method of Donev et al. [J. Comput. Phys. 197, 139 (2004), 10.1016/j.jcp.2003.11.022] in order to test for jamming in putatively collectively and strictly jammed packings of hard disks in two dimensions. We show that this rigorous jamming test is superior to standard ways to ascertain jamming, including the so-called "pressure-leak" test. We find that various standard packing protocols struggle to reliably create packings that are jammed for even modest system sizes of N ≈ 10^3 bidisperse disks in two dimensions; importantly, these packings have a high reduced pressure that persists over extended amounts of time, meaning that they appear to be jammed by conventional tests, though rigorous jamming tests reveal that they are not. We present evidence that suggests that deviations from hyperuniformity in putative maximally random jammed (MRJ) packings can in part be explained by a shortcoming of the numerical protocols to generate exactly jammed configurations as a result of a type of "critical slowing down" as the packing's collective rearrangements in configuration space become locally confined by high-dimensional "bottlenecks" from which escape is a rare event. Additionally, various protocols are able to produce packings exhibiting hyperuniformity to different extents, but this is because certain protocols are better able to approach exactly jammed configurations. Nonetheless, while one should not generally
How Accurately Can We Calculate Neutrons Slowing Down In Water ?
Cullen, D E; Blomquist, R; Greene, M; Lent, E; MacFarlane, R; McKinley, S; Plechaty, E; Sublet, J C
2006-03-30
We have compared the results produced by a variety of currently available Monte Carlo neutron transport codes for the relatively simple problem of a fast source of neutrons slowing down and thermalizing in water. Initial comparisons showed rather large differences in the calculated flux; up to 80% differences. By working together we iterated to improve the results by: (1) ensuring that all codes were using the same data, (2) improving the models used by the codes, and (3) correcting errors in the codes; no code is perfect. Even after a number of iterations we still found differences, demonstrating that our Monte Carlo and supporting codes are far from perfect; in particular, we found that the often-overlooked nuclear data processing codes can be the weakest link in our systems of codes. The results presented here represent today's state-of-the-art, in the sense that all of the Monte Carlo codes are modern, widely available and used codes. They all use the most up-to-date nuclear data, and the results are very recent, weeks or at most a few months old; these are the results that current users of these codes should expect to obtain from them. As such, the accuracy and limitations of the codes presented here should serve as guidelines to code users in interpreting their results for similar problems. We avoid crystal-ball gazing, in the sense that we limit the scope of this report to what is available to code users today, and we avoid predicting future improvements that may or may not actually come to pass. One exception is that we present results for an improved thermal scattering model currently being tested using advanced versions of NJOY and MCNP that are not yet available to users, but are planned for release in the not too distant future. The other exception is to show comparisons between experimentally measured water cross sections and preliminary ENDF/B-VII thermal scattering law, S(α,β) data; although these data are strictly
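The physics of this benchmark, neutrons slowing down by elastic scattering, can be sketched with a toy Monte Carlo (a textbook free-gas model, not any of the production codes compared in the report): after an s-wave elastic collision off a nucleus of mass number A, the outgoing energy is uniform on [αE, E] with α = ((A-1)/(A+1))²; for hydrogen α = 0, and slowing from 2 MeV to 1 eV takes about ln(2×10^6) ≈ 14.5 lethargy units, i.e. roughly 15 collisions on average.

```python
import random

def collisions_to_thermal(E0=2.0e6, E_th=1.0, A=1, seed=0):
    """Count elastic collisions needed to slow a neutron from E0 to
    E_th (both in eV).  s-wave elastic scattering off a stationary
    nucleus of mass number A leaves E' uniform on [alpha*E, E],
    with alpha = ((A-1)/(A+1))**2 (alpha = 0 for hydrogen)."""
    rng = random.Random(seed)
    alpha = ((A - 1.0) / (A + 1.0))**2
    E, n = E0, 0
    while E > E_th:
        E *= alpha + (1.0 - alpha) * rng.random()
        n += 1
    return n

# Average over many neutron histories; for hydrogen the textbook
# estimate is ~15 collisions (mean lethargy gain per collision = 1).
mean_n = sum(collisions_to_thermal(seed=s) for s in range(2000)) / 2000
print(round(mean_n, 1))
```

Heavier moderators need many more collisions (carbon, A = 12, takes roughly 100), which is one reason water is the standard benchmark medium here.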
The Pedagogy of Slowing Down: Teaching Talmud in a Summer Kollel
ERIC Educational Resources Information Center
Kanarek, Jane
2010-01-01
This article explores a set of practices in the teaching of Talmud called "the pedagogy of slowing down." Through the author's analysis of her own teaching in an intensive Talmud class, "the pedagogy of slowing down" emerges as a pedagogical and cultural model in which the students learn to read more closely and to investigate the multiplicity of…
Anomalous versus Slowed-Down Brownian Diffusion in the Ligand-Binding Equilibrium
Soula, Hédi; Caré, Bertrand; Beslon, Guillaume; Berry, Hugues
2013-01-01
Measurements of protein motion in living cells and membranes consistently report transient anomalous diffusion (subdiffusion) that converges back to a Brownian motion with reduced diffusion coefficient at long times after the anomalous diffusion regime. Therefore, slowed-down Brownian motion could be considered the macroscopic limit of transient anomalous diffusion. On the other hand, membranes are also heterogeneous media in which Brownian motion may be locally slowed down due to variations in lipid composition. Here, we investigate whether both situations lead to a similar behavior for the reversible ligand-binding reaction in two dimensions. We compare the (long-time) equilibrium properties obtained with transient anomalous diffusion due to obstacle hindrance or power-law-distributed residence times (continuous-time random walks) to those obtained with space-dependent slowed-down Brownian motion. Using theoretical arguments and Monte Carlo simulations, we show that these three scenarios have distinctive effects on the apparent affinity of the reaction. Whereas continuous-time random walks decrease the apparent affinity of the reaction, locally slowed-down Brownian motion and local hindrance by obstacles both improve it. However, only in the case of slowed-down Brownian motion is the affinity maximal when the slowdown is restricted to a subregion of the available space. Hence, even at long times (equilibrium), these processes are different and exhibit irreconcilable behaviors when the area fraction of reduced mobility changes. PMID:24209851
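The "locally slowed-down Brownian motion" scenario above can be illustrated with a toy lattice walk (an illustrative sketch, not the authors' simulation): reducing the per-step move probability inside a patch lengthens residence times there and raises the walker's occupancy of the patch, which is the mechanism by which local slowdown can raise the apparent affinity of a reaction confined to that subregion.

```python
import random

def occupancy_in_slow_region(p_move_slow=0.2, steps=200000, L=50, seed=1):
    """2D random walk on an L x L periodic lattice.  Inside a central
    10x10 'slow' patch the walker moves only with probability
    p_move_slow per step (space-dependent slowed-down Brownian
    motion).  Returns the fraction of time spent in the patch."""
    rng = random.Random(seed)
    x = y = 0
    inside = 0
    patch = range(20, 30)
    for _ in range(steps):
        if x in patch and y in patch:
            inside += 1
            if rng.random() > p_move_slow:
                continue  # no move this step: reduced local mobility
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % L, (y + dy) % L
    return inside / steps

# Uniform mobility gives ~ (10*10)/(50*50) = 4% occupancy of the
# patch; slowing the patch raises it well above that baseline.
print(occupancy_in_slow_region(1.0), occupancy_in_slow_region(0.2))
```

The stationary weight of each slow site scales like the inverse of its move probability, so restricting the slowdown to a subregion concentrates the walker there, consistent with the abstract's finding that affinity is maximal when the slowdown is confined to a subregion.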
Critical Slowing Down of the Charge Carrier Dynamics at the Mott Metal-Insulator Transition
NASA Astrophysics Data System (ADS)
Hartmann, Benedikt; Zielke, David; Polzin, Jana; Sasaki, Takahiko; Müller, Jens
2015-05-01
We report on the dramatic slowing down of the charge carrier dynamics in a quasi-two-dimensional organic conductor, which can be reversibly tuned through the Mott metal-insulator transition (MIT). At the finite-temperature critical end point, we observe a divergent increase of the resistance fluctuations accompanied by a drastic shift of spectral weight to low frequencies, demonstrating the critical slowing down of the order parameter (doublon density) fluctuations. The slow dynamics is accompanied by non-Gaussian fluctuations, indicative of correlated charge carrier dynamics. A possible explanation is a glassy freezing of the electronic system as a precursor of the Mott MIT.
Critical slowing down and critical exponents in LD/PIN optically-bistable semiconductor lasers
Zhong Lichen; Guo Yili
1988-04-01
Critical slowing down for LD/PIN bistable optical semiconductor lasers and the critical exponent γ for this system have been experimentally investigated. The experimental value γ ≈ 0.53 is basically in agreement with the theoretically predicted value of 0.5.
ACTIV: Sandwich Detector Activity from In-Pile Slowing-Down Spectra Experiment
2013-08-01
ACTIV calculates the activities of a sandwich detector, to be used for in-pile measurements in slowing-down spectra below a few keV. The effect of scattering with energy degradation in the filter and in the detectors has been included to a first approximation.
"Slow Down, You Move Too Fast:" Literature Circles as Reflective Practice
ERIC Educational Resources Information Center
Sanacore, Joseph
2013-01-01
Becoming an effective literacy learner requires a bit of slowing down and appreciating the reflective nature of reading and writing. Literature circles support this instructional direction because they provide opportunities for immersing students in discussions that encourage their personal responses. When students feel their personal responses…
49 CFR 392.11 - Railroad grade crossings; slowing down required.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 5 2014-10-01 2014-10-01 false Railroad grade crossings; slowing down required... REGULATIONS DRIVING OF COMMERCIAL MOTOR VEHICLES Driving of Commercial Motor Vehicles § 392.11 Railroad grade..., upon approaching a railroad grade crossing, be driven at a rate of speed which will permit...
Critical slowing down in polarization switching of vertical-cavity surface-emitting lasers
NASA Astrophysics Data System (ADS)
Wu, Yu-Heng; Li, Yueh-Chen; Kuo, Wang-Chuang; Yen, Tsu-Chiang
2014-05-01
This research investigated critical slowing down (CSD) in the polarization switching (PS) of vertical-cavity surface-emitting lasers (VCSELs). The experiments were performed with step-function current injection of two types: step-up and step-down. In both cases, the relationship between relaxation time and final current resembles critical slowing down. The critical currents of the two step-function current experiments are compared. The PS in this experiment is a static case. We also find that the divergence of the relaxation time follows a power law. These results contribute to the understanding of the mechanism of CSD in the polarization switching of VCSELs.
LSP simulations of fast ions slowing down in cool magnetized plasma
NASA Astrophysics Data System (ADS)
Evans, Eugene S.; Cohen, Samuel A.
2015-11-01
In MFE devices, rapid transport of fusion products, e.g., tritons and alpha particles, from the plasma core into the scrape-off layer (SOL) could perform the dual roles of energy and ash removal. Through these two processes in the SOL, the fast particle slowing-down time will have a major effect on the energy balance of a fusion reactor and its neutron emissions, topics of great importance. In small field-reversed configuration (FRC) devices, the first-orbit trajectories of most fusion products will traverse the SOL, potentially allowing those particles to deposit their energy in the SOL and eventually be exhausted along the open field lines. However, the dynamics of the fast-ion energy loss processes under conditions expected in the FRC SOL, where the Debye length is greater than the electron gyroradius, are not fully understood. What modifications to the classical slowing down rate are necessary? Will instabilities accelerate the energy loss? We use LSP, a 3D PIC code, to examine the effects of SOL plasma parameters (density, temperature and background magnetic field strength) on the slowing down time of fast ions in a cool plasma with parameters similar to those expected in the SOL of small FRC reactors. This work supported by DOE contract DE-AC02-09CH11466.
Critical slowing down as early warning for the onset of collapse in mutualistic communities.
Dakos, Vasilis; Bascompte, Jordi
2014-12-01
Tipping points are crossed when small changes in external conditions cause abrupt unexpected responses in the current state of a system. In the case of ecological communities under stress, the risk of approaching a tipping point is unknown, but its stakes are high. Here, we test recently developed critical slowing-down indicators as early-warning signals for detecting the proximity to a potential tipping point in structurally complex ecological communities. We use the structure of 79 empirical mutualistic networks to simulate a scenario of gradual environmental change that leads to an abrupt first extinction event followed by a sequence of species losses until the point of complete community collapse. We find that critical slowing-down indicators derived from time series of biomasses measured at the species and community level signal the proximity to the onset of community collapse. In particular, we identify specialist species as likely the best indicator species for monitoring the proximity of a community to collapse. In addition, trends in slowing-down indicators are strongly correlated to the timing of species extinctions. This correlation offers a promising way for mapping species resilience and ranking species risk to extinction in a given community. Our findings pave the way for combining theory on tipping points with patterns of network structure that might prove useful for the management of a broad class of ecological networks under global environmental change. PMID:25422412
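The critical slowing-down indicators referred to here are typically the lag-1 autocorrelation and the variance computed in a sliding window: as a tipping point is approached, recovery from perturbations slows and both quantities rise. A minimal sketch on a toy AR(1) series (the window length and the drift schedule are illustrative, not taken from the paper):

```python
import random

def rolling_indicators(series, window):
    """Lag-1 autocorrelation and variance in a sliding window --
    the two classic critical-slowing-down indicators."""
    out = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        cov = sum((w[j] - mean) * (w[j + 1] - mean)
                  for j in range(window - 1)) / (window - 1)
        out.append((cov / var if var > 0 else 0.0, var))
    return out

# Toy system drifting toward a bifurcation: an AR(1) process whose
# recovery rate decays, so the lag-1 coefficient phi rises toward 1.
random.seed(0)
series, x, n = [], 0.0, 2000
for t in range(n):
    phi = 0.2 + 0.75 * t / n   # slower recovery as the "tipping point" nears
    x = phi * x + random.gauss(0.0, 1.0)
    series.append(x)

ind = rolling_indicators(series, 400)
early_ac, late_ac = ind[0][0], ind[-1][0]
print(early_ac, late_ac)   # autocorrelation trends upward along the series
```

Ranking nodes (here, species) by the strength of such trends in their individual time series is the kind of analysis the abstract describes.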
Small but slow world: How network topology and burstiness slow down spreading
NASA Astrophysics Data System (ADS)
Karsai, M.; Kivelä, M.; Pan, R. K.; Kaski, K.; Kertész, J.; Barabási, A.-L.; Saramäki, J.
2011-02-01
While communication networks show the small-world property of short paths, the spreading dynamics in them turns out to be slow. Here, the time evolution of information propagation is followed through communication networks by using empirical data on contact sequences and the susceptible-infected model. Introducing null models where event sequences are appropriately shuffled, we are able to distinguish between the contributions of different impeding effects. The slowing down of spreading is found to be caused mainly by weight-topology correlations and the bursty activity patterns of individuals.
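The susceptible-infected model on a time-stamped contact sequence, together with a time-shuffled null model of the kind described, can be sketched as follows (the contact sequence here is a toy example, not the empirical data used in the paper):

```python
import random

def si_spread(events, seed_node=0):
    """Deterministic susceptible-infected process over a time-ordered
    contact sequence: infection passes whenever an infected node meets a
    susceptible one. Returns {node: infection_time}."""
    infected_at = {seed_node: 0}
    for t, u, v in sorted(events):
        if u in infected_at and v not in infected_at:
            infected_at[v] = t
        elif v in infected_at and u not in infected_at:
            infected_at[u] = t
    return infected_at

def shuffle_times(events, rng):
    """Null model: keep which pairs are in contact but shuffle the event
    times, destroying burstiness and timing correlations."""
    times = [t for t, _, _ in events]
    rng.shuffle(times)
    return [(t, u, v) for t, (_, u, v) in zip(times, events)]

# Toy bursty contact sequence on a 20-node chain (illustrative only):
# each link is active in one short burst, either early or late.
rng = random.Random(42)
n = 20
events = []
for u in range(n - 1):
    base = rng.choice([1, 50])
    events.extend((base + k, u, u + 1) for k in range(5))

reached = si_spread(events)
reached_null = si_spread(shuffle_times(events, rng))
# A link whose burst ends before the infection arrives cannot transmit,
# which is how event timing impedes (or even blocks) spreading.
print(len(reached), len(reached_null))
```

Comparing spreading curves between the original and several shuffled sequences isolates the contribution of timing, as in the abstract's null-model approach.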
Measurements with the high flux lead slowing-down spectrometer at LANL
NASA Astrophysics Data System (ADS)
Danon, Y.; Romano, C.; Thompson, J.; Watson, T.; Haight, R. C.; Wender, S. A.; Vieira, D. J.; Bond, E.; Wilhelmy, J. B.; O'Donnell, J. M.; Michaudon, A.; Bredeweg, T. A.; Schurman, T.; Rochman, D.; Granier, T.; Ethvignot, T.; Taieb, J.; Becker, J. A.
2007-08-01
A Lead Slowing-Down Spectrometer (LSDS) was recently installed at LANL [D. Rochman, R.C. Haight, J.M. O'Donnell, A. Michaudon, S.A. Wender, D.J. Vieira, E.M. Bond, T.A. Bredeweg, A. Kronenberg, J.B. Wilhelmy, T. Ethvignot, T. Granier, M. Petit, Y. Danon, Characteristics of a lead slowing-down spectrometer coupled to the LANSCE accelerator, Nucl. Instr. and Meth. A 550 (2005) 397]. The LSDS comprises a cube of pure lead 1.2 m on a side, with a pulsed spallation neutron source in its center. The LSDS is driven by 800 MeV protons with a time-averaged current of up to 1 μA, pulse widths of 0.05-0.25 μs and a repetition rate of 20-40 Hz. Spallation neutrons are created by directing the proton beam into an air-cooled tungsten target in the center of the lead cube. The neutrons slow down by scattering interactions with the lead, which enables measurements of neutron-induced reaction rates as a function of the slowing-down time, which correlates with neutron energy. The advantage of an LSDS as a neutron spectrometer is that the neutron flux is 3-4 orders of magnitude higher than in a standard time-of-flight experiment at the equivalent flight path, 5.6 m. The effective energy range is 0.1 eV to 100 keV with a typical energy resolution of 30% from 1 eV to 10 keV. The average neutron flux between 1 and 10 keV is about 1.7 × 10⁹ n/cm²/s/μA. This high flux makes the LSDS an important tool for neutron-induced cross-section measurements of ultra-small samples (nanograms) or of samples with very low cross sections. The LSDS at LANL was initially built in order to measure the fission cross section of the short-lived metastable isotope of 235U; however, it can also be used to measure (n, α) and (n, p) reactions. Fission cross-section measurements were made with samples of 235U, 236U, 238U and 239Pu. The smallest sample measured was 10 ng of 239Pu. Measurement of the (n, α) cross section with 760 ng of 6Li was also demonstrated. Possible future cross section measurements
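The slowing-down-time-to-energy correlation mentioned above is conventionally parameterized as E = K/(t + t0)², where K and t0 are calibration constants of the particular instrument. A sketch with illustrative order-of-magnitude constants (the values of K and t0 below are assumptions for demonstration, not the LANL calibration):

```python
import math

# Slowing-down time-energy relation for a lead spectrometer,
#   E = K / (t + t0)^2,  E in keV, t in microseconds.
K_KEV_US2 = 165.0   # keV * us^2 -- illustrative calibration constant (assumed)
T0_US = 0.3         # us -- illustrative offset (assumed)

def energy_from_time(t_us):
    """Mean neutron energy (keV) at slowing-down time t (us)."""
    return K_KEV_US2 / (t_us + T0_US) ** 2

def time_from_energy(e_kev):
    """Inverse: slowing-down time (us) at which the mean energy is e_kev."""
    return math.sqrt(K_KEV_US2 / e_kev) - T0_US

# With these constants, the quoted effective range of 0.1 eV to 100 keV maps
# to slowing-down times from roughly a microsecond up to about a millisecond.
print(time_from_energy(100.0), time_from_energy(1e-4))
```

The inverse-square form reflects the near-constant lethargy gain per collision in lead; the broad (~30%) energy resolution comes from the spread in slowing-down histories.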
First Measurements with a Lead Slowing-Down Spectrometer at LANSCE
Rochman, D.; Haight, R.C.; Wender, S.A.; O'Donnell, J.M.; Michaudon, A.; Huff, K.; Vieira, D.J.; Bond, E.; Rundberg, R.S.; Kronenberg, A.; Wilhelmy, J.; Bredeweg, T.A.; Schwantes, J.; Ethvignot, T.; Granier, T.; Petit, M.; Danon, Y.
2005-05-24
The characteristics of a Lead Slowing-Down Spectrometer (LSDS) installed at the Los Alamos Neutron Science Center (LANSCE) are presented in this paper. This instrument is designed to study neutron-induced fission on ultra-small quantities of actinides, on the order of tens of nanograms or less. The measurements of the energy-time relation, energy resolution and neutron flux are compared to simulations performed with MCNPX. Results on neutron-induced fission of 235U and 239Pu with tens of micrograms and tens of nanograms, respectively, are presented. Finally, a digital filter designed to improve the detection of fission events at short times after the proton pulses is described.
Dynamic slowing-down in dense microemulsions near the percolation threshold
NASA Astrophysics Data System (ADS)
Chen, S. H.; Mallamace, F.; Rouch, J.; Tartaglia, P.
1992-05-01
We review a series of investigations of the static and dynamic properties of a three-component water-in-oil microemulsion system in which the molar ratio of water to surfactant is kept constant. This system behaves effectively like a two-component macromolecular fluid in which there are spherical, surfactant-coated water droplets of macroscopic dimensions dispersed in a continuum of oil. The properties investigated include electrical conductivity, dielectric relaxation, shear viscosity and viscoelastic relaxation, static neutron and light scattering, and dynamic light scattering. We focus mainly on the phenomena of the dynamic slowing-down of the dielectric relaxation and of the droplet density fluctuations as the system approaches the percolation threshold from below, both in temperature and in volume fraction. A theory of static and dynamic light scattering, formulated along the lines of scattering from a system of polydisperse fractal clusters, quantitatively accounts for the dynamic slowing-down phenomenon and the non-exponential decay of the time correlation function.
Modeling resonance interference by 0-D slowing-down solution with embedded self-shielding method
Liu, Y.; Martin, W.; Kim, K. S.; Williams, M.
2013-07-01
Resonance-integral-table-based methods employing a conventional multigroup structure for the resonance self-shielding calculation share a common difficulty in treating resonance interference. The problem arises from the lack of sufficient energy dependence of the resonance cross sections when the calculation is performed in the multigroup structure. To address this, a resonance interference factor model has been proposed to account for the interference effect by comparing the interfered and non-interfered effective cross sections obtained from 0-D homogeneous slowing-down solutions with continuous-energy cross sections. A rigorous homogeneous slowing-down solver is developed with two important features for reducing the calculation time and memory requirement for practical applications. The embedded self-shielding method (ESSM) is chosen as the multigroup resonance self-shielding solver as an integral component of the interference method. The interference method is implemented in the DeCART transport code. Verification results show that the code system provides more accurate effective cross sections and multiplication factors than the conventional interference method for UO2 and MOX fuel cases. The additional computing time and memory for the interference correction are acceptable for the test problems, including a depletion case with 87 isotopes in the fuel region.
Lenton, T. M.; Livina, V. N.; Dakos, V.; Van Nes, E. H.; Scheffer, M.
2012-01-01
We address whether robust early warning signals can, in principle, be provided before a climate tipping point is reached, focusing on methods that seek to detect critical slowing down as a precursor of bifurcation. As a test bed, six previously analysed datasets are reconsidered, three palaeoclimate records approaching abrupt transitions at the end of the last ice age and three models of varying complexity forced through a collapse of the Atlantic thermohaline circulation. Approaches based on examining the lag-1 autocorrelation function or on detrended fluctuation analysis are applied together and compared. The effects of aggregating the data, detrending method, sliding window length and filtering bandwidth are examined. Robust indicators of critical slowing down are found prior to the abrupt warming event at the end of the Younger Dryas, but the indicators are less clear prior to the Bølling-Allerød warming, or glacial termination in Antarctica. Early warnings of thermohaline circulation collapse can be masked by inter-annual variability driven by atmospheric dynamics. However, rapidly decaying modes can be successfully filtered out by using a long bandwidth or by aggregating data. The two methods have complementary strengths and weaknesses and we recommend applying them together to improve the robustness of early warnings. PMID:22291229
Synchronous slowing down in coupled logistic maps via random network topology.
Wang, Sheng-Jun; Du, Ru-Hai; Jin, Tao; Wu, Xing-Sen; Qu, Shi-Xian
2016-01-01
The speed and paths of synchronization play a key role in the function of a system, but have not received enough attention up to now. In this work, we study the synchronization process of coupled logistic maps that reveals the common features of low-dimensional dissipative systems. A slowing down of the synchronization process is observed, which is a novel phenomenon. The result shows that there are two typical kinds of transient process before the system reaches complete synchronization, which is demonstrated by both the coupled multiple-period maps and the coupled multiple-band chaotic maps. When the coupling is weak, the evolution of the system is governed mainly by the local dynamics, i.e., the node states are attracted by the stable orbits or chaotic attractors of the single map and evolve toward the synchronized orbit in a less coherent way. When the coupling is strong, the node states evolve in a highly coherent way toward the stable orbit on the synchronized manifold, where the collective dynamics dominates the evolution. At intermediate coupling strengths, the interplay between the two paths is responsible for the slowing down. The existence of different synchronization paths is also proven by the finite-time Lyapunov exponent and its distribution. PMID:27021897
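The setup described, logistic maps diffusively coupled over a random topology, can be sketched as follows; the map parameter, network density, and coupling strength are illustrative choices, not those of the paper:

```python
import random

def coupled_logistic_step(states, neighbors, eps, r=3.58):
    """One update of diffusively coupled logistic maps x -> r x (1 - x)
    on a given graph; eps is the coupling strength."""
    f = [r * x * (1.0 - x) for x in states]
    new = []
    for i, fi in enumerate(f):
        if neighbors[i]:
            mean_nb = sum(f[j] for j in neighbors[i]) / len(neighbors[i])
            new.append((1.0 - eps) * fi + eps * mean_nb)
        else:
            new.append(fi)
    return new

def sync_error(states):
    """Maximum deviation from the network mean -- zero at full synchrony."""
    m = sum(states) / len(states)
    return max(abs(x - m) for x in states)

# Random (Erdos-Renyi-style) topology -- parameters illustrative.
rng = random.Random(7)
n = 30
neighbors = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.3:
            neighbors[i].append(j)
            neighbors[j].append(i)

states = [rng.random() for _ in range(n)]
for _ in range(500):
    states = coupled_logistic_step(states, neighbors, eps=0.9)
print(sync_error(states))   # residual desynchronization after 500 steps
```

Sweeping `eps` and recording the first step at which `sync_error` falls below a tolerance would reproduce the kind of synchronization-time measurement whose slowing down at intermediate coupling the abstract reports.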
Critical slowing down associated with regime shifts in the US housing market
NASA Astrophysics Data System (ADS)
Tan, James Peng Lung; Cheong, Siew Siew Ann
2014-02-01
Complex systems are described by a large number of variables with strong and nonlinear interactions. Such systems frequently undergo regime shifts. Combining insights from bifurcation theory in nonlinear dynamics and the theory of critical transitions in statistical physics, we know that critical slowing down and critical fluctuations occur close to such regime shifts. In this paper, we show how universal precursors expected from such critical transitions can be used to forecast regime shifts in the US housing market. In the housing permit, volume of homes sold and percentage of homes sold for gain data, we detected strong early warning signals associated with a sequence of coupled regime shifts, starting from a Subprime Mortgage Loans transition in 2003-2004 and ending with the Subprime Crisis in 2007-2008. Weaker signals of critical slowing down were also detected in the US housing market data during the 1997-1998 Asian Financial Crisis and the 2000-2001 Technology Bubble Crisis. Backed by various macroeconomic data, we propose a scenario whereby hot money flowing back into the US during the Asian Financial Crisis fueled the Technology Bubble. When the Technology Bubble collapsed in 2000-2001, the hot money then flowed into the US housing market, triggering the Subprime Mortgage Loans transition in 2003-2004 and an ensuing sequence of transitions. We show how this sequence of coupled transitions unfolded in space and in time over the whole of the US.
Development for fissile assay in recycled fuel using lead slowing down spectrometer
Lee, Yong Deok; Je Park, C.; Kim, Ho-Dong; Song, Kee Chan
2013-07-01
A future nuclear energy system is under development to turn spent fuel produced by PWRs into fuel for an SFR (Sodium Fast Reactor) through a pyrochemical process. Knowledge of the isotopic fissile content of the new fuel is very important for fuel safety. A lead slowing-down spectrometer (LSDS) is under development to analyze the fissile material content (239Pu, 241Pu and 235U) of the fuel. The LSDS requires a neutron source; the neutrons are slowed down during their passage through a lead medium, finally enter the fuel, and induce fission reactions that are analyzed to determine the isotopic content of the fuel. The issue is that spent fuel emits intense gamma rays and neutrons by spontaneous fission. A threshold fission detector screens the prompt fast fission neutrons, so the LSDS is not influenced by the high-level radiation background. The energy resolution of the LSDS is good in the range 0.1 eV to 1 keV, which is also the range in which the fission reaction is most discriminating among the considered fissile isotopes. An electron accelerator has been chosen to produce neutrons with an adequate target through (e⁻,γ)(γ,n) reactions.
Slow down of a globally neutral relativistic e‑e+ beam shearing the vacuum
NASA Astrophysics Data System (ADS)
Alves, E. P.; Grismayer, T.; Silveirinha, M. G.; Fonseca, R. A.; Silva, L. O.
2016-01-01
The microphysics of relativistic collisionless shear flows is investigated in a configuration consisting of a globally neutral, relativistic e⁻e⁺ beam streaming through a hollow plasma/dielectric channel. We show through multidimensional particle-in-cell simulations that this scenario excites the mushroom instability (MI), a transverse shear instability on the electron scale, when there is no overlap (no contact) between the e⁻e⁺ beam and the walls of the hollow plasma channel. The onset of the MI leads to the conversion of the beam's kinetic energy into magnetic (and electric) field energy, effectively slowing down a globally neutral body in the absence of contact. The collisionless shear physics explored in this configuration may operate in astrophysical environments, particularly in highly relativistic and supersonic settings where macroscopic shear processes are stable.
Critical slowing down as early warning for the onset and termination of depression
van de Leemput, Ingrid A.; Wichers, Marieke; Cramer, Angélique O. J.; Borsboom, Denny; Tuerlinckx, Francis; Kuppens, Peter; van Nes, Egbert H.; Viechtbauer, Wolfgang; Giltay, Erik J.; Aggen, Steven H.; Derom, Catherine; Jacobs, Nele; Kendler, Kenneth S.; van der Maas, Han L. J.; Neale, Michael C.; Peeters, Frenk; Thiery, Evert; Zachar, Peter; Scheffer, Marten
2014-01-01
About 17% of humanity goes through an episode of major depression at some point in their lifetime. Despite the enormous societal costs of this incapacitating disorder, it is largely unknown how the likelihood of falling into a depressive episode can be assessed. Here, we show for a large group of healthy individuals and patients that the probability of an upcoming shift between a depressed and a normal state is related to elevated temporal autocorrelation, variance, and correlation between emotions in fluctuations of autorecorded emotions. These are indicators of the general phenomenon of critical slowing down, which is expected to occur when a system approaches a tipping point. Our results support the hypothesis that mood may have alternative stable states separated by tipping points, and suggest an approach for assessing the likelihood of transitions into and out of depression. PMID:24324144
Structure and dynamics of water in crowded environments slows down peptide conformational changes
Lu, Cheng; Prada-Gracia, Diego; Rao, Francesco
2014-07-28
The concentration of macromolecules inside the cell is high with respect to conventional in vitro experiments or simulations. In an effort to characterize the effects of crowding on the thermodynamics and kinetics of disordered peptides, molecular dynamics simulations were run at different concentrations by varying the number of identical weakly interacting peptides inside the simulation box. We found that the presence of crowding does not influence very much the overall thermodynamics. On the other hand, peptide conformational dynamics was found to be strongly affected, resulting in a dramatic slowing down at larger concentrations. The observation of long lived water bridges between peptides at higher concentrations points to a nontrivial role of the solvent in the altered peptide kinetics. Our results reinforce the idea for an active role of water in molecular crowding, an effect that is expected to be relevant for problems influenced by large solvent exposure areas like in intrinsically disordered proteins.
Analysis of spent fuel assay with a lead slowing down spectrometer
Gavron, Victor I; Smith, L Eric; Ressler, Jennifer J
2008-01-01
Assay of fissile materials in spent fuel that are produced or depleted during the operation of a reactor is of paramount importance for nuclear materials accounting, verification of the reactor operation history, and criticality considerations for storage. In order to prevent future proliferation following the spread of nuclear energy, we must develop accurate methods to assay large quantities of nuclear fuel. We analyze the potential of using a Lead Slowing-Down Spectrometer for assaying spent fuel. We conclude that it is possible to design a system that will provide around 1% statistical precision in the determination of the 239Pu, 241Pu and 235U concentrations in a PWR spent-fuel assembly, for intermediate-to-high burnup levels, using commercial neutron sources and a system of 238U threshold fission detectors. Pending further analysis of systematic errors, it is possible that missing pins can be detected, as can asymmetry in the fuel bundle.
Microdosimetry of the full slowing down of protons using Monte Carlo track structure simulations.
Liamsuwan, T; Uehara, S; Nikjoo, H
2015-09-01
The article investigates two approaches to microdosimetric calculations based on Monte Carlo track structure (MCTS) simulations of a 160-MeV proton beam. In the first approach, microdosimetric parameters of the proton beam were obtained using the weighted sum of proton energy distributions and microdosimetric parameters of proton track segments (TSMs). In the second approach, phase spaces of energy depositions obtained using MCTS simulations in the full slowing down (FSD) mode were used for the microdosimetric calculations. Targets of interest were water cylinders of 2.3-100 nm in diameter and height. Frequency-averaged lineal energies (ȳF) obtained using both approaches agreed within the statistical uncertainties. Discrepancies beyond this level were observed for dose-averaged lineal energies (ȳD) towards the Bragg peak region due to the small number of proton energies used in the TSM approach and different energy deposition patterns in the TSM and FSD of protons. PMID:25904698
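The averages compared here follow the standard microdosimetric definitions: the lineal energy of a single event is y = ε/l̄ (energy imparted over the mean chord length), with frequency mean ȳF = ⟨y⟩ and dose mean ȳD = ⟨y²⟩/⟨y⟩. A minimal sketch with toy event energies (the numbers are illustrative, not from the paper):

```python
def lineal_energy_averages(event_energies_keV, mean_chord_um):
    """Frequency- and dose-averaged lineal energies from a list of
    single-event energy depositions.
    y   = eps / l_bar        (keV/um, per event)
    y_F = <y>                (frequency average)
    y_D = <y^2> / <y>        (dose average)"""
    ys = [e / mean_chord_um for e in event_energies_keV]
    y_f = sum(ys) / len(ys)
    y_d = sum(y * y for y in ys) / sum(ys)
    return y_f, y_d

# Toy single-event energy depositions in a small water cylinder.
events = [1.0, 2.0, 4.0]   # keV per event (illustrative)
yF, yD = lineal_energy_averages(events, mean_chord_um=0.67)
print(yF, yD)
```

Because ȳD weights each event by its own energy, it is more sensitive than ȳF to the tail of large depositions, which is consistent with the abstract's observation that the discrepancies between the two approaches show up in ȳD near the Bragg peak.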
Equilibrium and stability in a heliotron with anisotropic hot particle slowing-down distribution
Cooper, W. A.; Asahi, Y.; Narushima, Y.; Suzuki, Y.; Watanabe, K. Y.; Graves, J. P.; Isaev, M. Yu.
2012-10-15
The equilibrium and linear fluid magnetohydrodynamic (MHD) stability in an inward-shifted large helical device heliotron configuration are investigated with the 3D ANIMEC and TERPSICHORE codes, respectively. A modified slowing-down distribution function is invoked to study anisotropic pressure conditions. An appropriate choice of coefficients and exponents allows the simulation of neutral beam injection in which the angle of injection is varied from parallel to perpendicular. The fluid stability analysis concentrates on the application of the Johnson-Kulsrud-Weimer energy principle. The growth rates are maximum at ⟨β⟩ ≈ 2%, decrease significantly at ⟨β⟩ ≈ 4.5%, do not vary significantly with the injection angle, and are similar to those predicted with a bi-Maxwellian hot-particle distribution function model. Stability is predicted at ⟨β⟩ ≈ 2.5% with a sufficiently peaked energetic-particle pressure profile. Electrostatic potential structures arising from the MHD instability, which are necessary for guiding-centre orbit following, are calculated.
Lead Slowing Down Spectrometry Analysis of Data from Measurements on Nuclear Fuel
Warren, Glen A.; Anderson, Kevin K.; Kulisek, Jonathan A.; Danon, Yaron; Weltz, Adam; Gavron, Victor A.; Harris, Jason; Stewart, Trevor N.
2015-01-12
Improved non-destructive assay of isotopic masses in used nuclear fuel would be valuable for nuclear safeguards operations associated with the transport, storage and reprocessing of used nuclear fuel. Our collaboration is examining the feasibility of using lead slowing down spectrometry techniques to assay the isotopic fissile masses in used nuclear fuel assemblies. We present the application of our analysis algorithms to measurements conducted with a lead spectrometer. The measurements involved a single fresh fuel pin and discrete 239Pu and 235U samples. We are able to determine the isotopic fissile masses with root-mean-square errors of 6.35% for 239Pu and 2.7% for 235U over seven different configurations.
Expertise makes the world slow down: judgements of duration are influenced by domain knowledge.
Rhodes, Matthew G; McCabe, David P
2009-12-01
Experts often appear to perceive time differently from novices. The current study thus examined perceptions of time as a function of domain expertise. Specifically, individuals with high or low levels of knowledge of American football made judgements of duration for briefly presented words that were unrelated to football (e.g., rooster), football specific (e.g., touchdown), or ambiguous (e.g., huddle). Results showed that high-knowledge individuals judged football-specific words as having been presented for a longer duration than unrelated or ambiguous words. In contrast, low-knowledge participants exhibited no systematic differences in judgements of duration based on the type of word presented. These findings are discussed within a fluency attribution framework, which suggests that experts' fluent perception of domain-relevant stimuli leads to the subjective impression that time slows down in one's domain of expertise. PMID:19691007
The study of dynamics heterogeneity and slow down of silica by molecular dynamics simulation
NASA Astrophysics Data System (ADS)
San, L. T.; Hung, P. K.; Hue, H. V.
2016-06-01
We have numerically studied diffusion in silica liquids via the SiOx → SiOx±1 and OSiy → OSiy±1 reactions and coordination cells (CC). Five models at temperatures from 1000 to 3500 K have been constructed by molecular dynamics simulation. We reveal that the reactions do not happen randomly in space. In addition, the reactions correlate strongly with the mobility of CC atoms. Further, we examine the clustering of atoms having unbroken bonds and restored bonds. The time evolution of these clusters with temperature is also considered. The simulation shows that both the slow-down and the dynamic heterogeneity (DH) are related not only to the percolation of restored-rigid clusters near the glass transition but also to their long lifetime.
Ravetto, P.; Sumini, M.; Ganapol, B.D.
1988-01-01
In an attempt to better understand the influence of prompt and delayed neutrons on nuclear reactor dynamics, a continuous slowing down model based on Fermi age theory was developed several years ago. This model was easily incorporated into the one-group diffusion equation and provided a realistic physical picture of how delayed and prompt neutrons slow down and simultaneously diffuse throughout a medium. The model allows for different slowing down times for each delayed neutron group as well as for prompt neutrons, and for spectral differences between the two types of neutrons. Because of its generality, this model serves not only as a useful predictive tool to anticipate reactor transients, but also as an excellent educational tool to demonstrate the effect of delayed neutrons in reactor kinetics. However, because of numerical complications, the slowing down model could not be developed to its full potential. In particular, the major limitation was the inversion of the Laplace transform, which relied on a knowledge of the poles associated with the resulting transformed flux. For this reason, only one group of delayed neutrons and times longer than the slowing down times could be considered. As is shown, the new inversion procedure removes the short-time limitation and allows for any number of delayed neutron groups. The inversion technique is versatile and is useful in teaching numerical methods in nuclear science.
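The pole-based Laplace inversion that limited the original model can, in modern practice, be replaced by purely numerical inversion. As an illustration (not the authors' method), the Gaver-Stehfest algorithm recovers f(t) from samples of the transform F(s) on the positive real axis, with no knowledge of the poles:

```python
import math

def stehfest_coeffs(n):
    """Gaver-Stehfest weights V_k for an even number of terms n."""
    half = n // 2
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j) /
                  (math.factorial(half - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        v.append((-1) ** (k + half) * s)
    return v

def invert_laplace(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t)."""
    v = stehfest_coeffs(n)
    a = math.log(2.0) / t
    return a * sum(v[k - 1] * F(k * a) for k in range(1, n + 1))
```

For smooth transforms such as F(s) = 1/(s+1), twelve terms in double precision recover f(t) = e^(-t) to several digits; the method degrades for oscillatory or discontinuous f, which is a known trade-off of this family of algorithms.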
Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY11 Status Report
Warren, Glen A.; Casella, Andrew M.; Haight, R. C.; Anderson, Kevin K.; Danon, Yaron; Hatchett, D.; Becker, Bjorn; Devlin, M.; Imel, G. R.; Beller, D.; Gavron, A.; Kulisek, Jonathan A.; Bowyer, Sonya M.; Gesh, Christopher J.; O'Donnell, J. M.
2011-08-01
Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today’s confirmatory assay methods. This document is a progress report for FY2011 collaboration activities. Progress made by the collaboration in FY2011 continues to indicate the promise of LSDS techniques applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model demonstrated the potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space. Similar results were obtained using a perturbation approach developed by LANL. Benchmark measurements have been successfully conducted at LANL and at RPI using their respective LSDS instruments. The ISU and UNLV collaborative effort is focused on the fabrication and testing of prototype fission chambers lined with ultra-depleted 238U and 232Th; uranium deposition on a stainless steel disc using spiked U3O8 from room-temperature ionic liquid was successful, with improved thickness obtained. In FY2012, the collaboration plans a broad array of activities. PNNL will focus on optimizing its empirical model and minimizing its reliance on calibration data, as well as continuing efforts on developing an analytical model. Additional measurements are
Slowing Down Times and dE/dX for Surface μ^+ in Low Pressure Gases
NASA Astrophysics Data System (ADS)
Senba, Masayoshi; Fleming, Donald; Arseneau, Donald; Pan, James; Mayne, Howard
2000-05-01
The times taken for surface muons to slow down from initial energies of ~ 2 MeV to the energy of the last charge exchange cycle, ~ 10 eV, have been measured using a novel technique in low pressure gases, from the phase of the μSR signal and its dependence on pressure. To our knowledge there are no other such data for positive muons in this energy range. These times can be converted to stopping powers, dE/dX, providing a unique test of the velocity dependence in the historic Bethe-Bloch equation, from a comparison of μ^+ and proton stopping powers. Calculations of the time spent in the charge-exchange regime have been carried out by an appropriate scaling of the atomic cross sections for proton charge exchange. The final thermalization time of the Mu atom, from about 10 eV to k_BT, has also been calculated in H2 gas, from cross sections determined from Quasi Classical Trajectories.
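The μ^+/proton comparison rests on the fact that the Bethe-Bloch dE/dx depends on the projectile only through its charge and velocity, so a muon and a proton at equal β have (to first order) equal stopping powers. A sketch of the velocity matching, using standard rest masses (values below are textbook constants, not from the abstract):

```python
import math

M_MU = 105.658  # muon rest mass, MeV/c^2
M_P = 938.272   # proton rest mass, MeV/c^2

def beta_from_kinetic(T, m):
    """Relativistic speed v/c from kinetic energy T and rest mass m (both MeV)."""
    gamma = 1.0 + T / m
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

def equivalent_proton_energy(T_mu):
    """Proton kinetic energy with the same velocity (same gamma) as a
    muon of kinetic energy T_mu; at equal beta the Bethe dE/dx matches."""
    gamma = 1.0 + T_mu / M_MU
    return (gamma - 1.0) * M_P
```

A 2 MeV surface muon thus probes the same velocity regime as a roughly 18 MeV proton, which is how proton stopping-power tables can be scaled to test the muon data.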
Stenager, Egon
2012-01-01
It has been suggested that exercise (or physical activity) might have the potential to have an impact on multiple sclerosis (MS) pathology and thereby slow down the disease process in MS patients. The objective of this literature review was to identify the literature linking physical exercise (or activity) and MS disease progression. A systematic literature search was conducted in the following databases: PubMed, SweMed+, Embase, Cochrane Library, PEDro, SPORTDiscus and ISI Web of Science. Different methodological approaches to the problem have been applied including (1) longitudinal exercise studies evaluating the effects on clinical outcome measures, (2) cross-sectional studies evaluating the relationship between fitness status and MRI findings, (3) cross-sectional and longitudinal studies evaluating the relationship between exercise/physical activity and disability/relapse rate and, finally, (4) longitudinal exercise studies applying the experimental autoimmune encephalomyelitis (EAE) animal model of MS. Data from intervention studies evaluating disease progression by clinical measures (1) do not support a disease-modifying effect of exercise; however, MRI data (2), patient-reported data (3) and data from the EAE model (4) indicate a possible disease-modifying effect of exercise, but the strength of the evidence limits definite conclusions. It was concluded that some evidence supports the possibility of a disease-modifying potential of exercise (or physical activity) in MS patients, but future studies using better methodologies are needed to confirm this. PMID:22435073
Critical Slowing Down in the Relaxor Ferroelectric K1-xLixTaO3(KLT)
NASA Astrophysics Data System (ADS)
Cai, Ling; Toulouse, Jean
2012-02-01
In this report, we illustrate an essential characteristic of mixed crystals such as KLT: the strong dependence of their macroscopic properties on the spatial distribution of the mixed ions in the crystal. As a prototypical relaxor ferroelectric, KLT exhibits a large dielectric constant, low frequency dispersion and a broad relaxation peak. Lithium randomly substitutes for potassium and, because of its smaller size, moves off-center in one of six possible <100> directions, thus forming a local dipole. Correlations between these dipoles lead to the appearance of Polar Nanodomains (PNDs), the size and polarization of which depend on local density fluctuations or the type of distribution of the Li ions (random homogeneous or locally clustered). The dielectric constant of two KLT crystals with almost identical average Li concentrations displays two radically different behaviors, which can be traced to two very different distributions of the lithium ions in the two crystals. The contrast is particularly striking in the critical behavior of the two crystals: a first-order structural transition is observed in one crystal, but critical slowing down is observed in the other. The type of spatial distribution present in each crystal can be inferred from the dielectric results.
Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan
2016-01-01
Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3–4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting. PMID:27189321
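The claim that SprE fluctuations are "approximately Poisson distributed ... with no sign of bursting" is commonly checked with the Fano factor (variance/mean), which equals 1 for a Poisson distribution and exceeds 1 for bursty production. A hedged sketch with simulated counts (illustrative parameters, not the paper's data):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplicative method for drawing a Poisson-distributed count."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fano(samples):
    """Fano factor: variance / mean (1 for Poisson, >1 for bursty expression)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return var / mean
```

With, say, a mean of 3.5 molecules per cell (a hypothetical value in the few-molecule regime reported above), a large seeded sample yields a Fano factor close to 1, while burst-like production would push it well above 1.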
Fission Physics and Cross Section Measurements with a Lead Slowing down Spectrometer
Romano, Catherine E; Danon, Yaron; Block, Richard; Thompson, Jason
2010-01-01
A Lead Slowing Down Spectrometer (LSDS) provides a high neutron flux environment that enables measurements of small samples (~µg) or samples with small cross sections (tens of µb). The LSDS at Rensselaer Polytechnic Institute (RPI) was previously used for fission cross section measurements and for studies of methods for assay of used nuclear fuel. The effective energy range for the LSDS is 0.1 eV to 10 keV with energy resolution of about 35%. Two new LSDS applications were recently developed at RPI; the first enables simultaneous measurements of the fission cross section and fission fragment mass and energy distributions as a function of the incident neutron energy. The second enables measurements of the (n,α) and (n,p) cross sections for materials with a positive Q value for these reactions. Fission measurements of 252Cf, 235U, and 239Pu were completed and provide information on fission fragment and energy distributions in resonance clusters. Measurements of the (n,α) cross section for 147,149Sm were completed and compared to previously measured data. The new data indicate that the existing evaluations need to be adjusted.
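The energy scale of an LSDS comes from the tight time-energy correlation of neutrons slowing down in lead, conventionally parametrized as E ≈ K/(t + t₀)². The constants below (K ≈ 165 keV·µs², t₀ ≈ 0.3 µs) are typical literature values assumed for illustration, not RPI calibration data:

```python
import math

def lsds_energy_keV(t_us, K=165.0, t0=0.3):
    """Mean neutron energy (keV) at slowing-down time t (microseconds) in lead,
    using the conventional E = K/(t + t0)^2 parametrization.
    K and t0 are assumed, typical values; a real instrument is calibrated."""
    return K / (t_us + t0) ** 2

def lsds_time_us(E_keV, K=165.0, t0=0.3):
    """Inverse relation: slowing-down time at which the mean energy is E_keV."""
    return math.sqrt(K / E_keV) - t0
```

This inverse-square relation is what lets detector count rate versus time be re-binned into a cross section versus energy, with the broad (~35%) energy resolution quoted above reflecting the spread of individual neutron histories about the mean.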
Non-destructive Assay Measurements Using the RPI Lead Slowing Down Spectrometer
Becker, Bjorn; Weltz, Adam; Kulisek, Jonathan A.; Thompson, J. T.; Thompson, N.; Danon, Yaron
2013-10-01
The use of a Lead Slowing-Down Spectrometer (LSDS) is considered as a possible option for non-destructive assay of fissile material in used nuclear fuel. The primary objective is to quantify the 239Pu and 235U fissile content via a direct measurement, distinguishing them through their characteristic fission spectra in the LSDS. In this paper, we present several assay measurements performed at the Rensselaer Polytechnic Institute (RPI) to demonstrate the feasibility of such a method and to provide benchmark experiments for Monte Carlo calculations of the assay system. A fresh UOX fuel rod from the RPI Criticality Research Facility, a 239PuBe source and several highly enriched 235U discs were assayed in the LSDS. The characteristic fission spectra were measured with 238U and 232Th threshold fission chambers, which are only sensitive to fission neutrons with energies above the threshold. Despite the constant neutron and gamma background from the PuBe source and the intense interrogation neutron flux, the LSDS system was able to measure the characteristic 235U and 239Pu responses. All measurements were compared to Monte Carlo simulations. It was shown that the available simulation tools and models are well suited to simulate the assay, and that it is possible to calculate the absolute count rate in all investigated cases.
NASA Astrophysics Data System (ADS)
Mereuta, Loredana; Roy, Mahua; Asandei, Alina; Lee, Jong Kook; Park, Yoonkyung; Andricioaei, Ioan; Luchian, Tudor
2014-01-01
The microscopic details of how peptides translocate one at a time through nanopores are crucial determinants of transport through membrane pores and important in developing nanotechnologies. To date, the translocation process has been too fast relative to the resolution of the single-molecule techniques that sought to detect its milestones. Using pH-tuned single-molecule electrophysiology and molecular dynamics simulations, we demonstrate how peptide passage through the α-hemolysin protein can be sufficiently slowed down to observe intermediate single-peptide sub-states associated with distinct structural milestones along the pore, and how to control the residence time, direction and sequence of spatio-temporal state-to-state dynamics of a single peptide. Molecular dynamics simulations of peptide translocation reveal the time-dependent ordering of intermediate structures of the translocating peptide inside the pore at atomic resolution. Calculations of the expected current ratios of the different pore-blocking microstates and their time sequencing are in accord with the recorded current traces.
Traffic and Environmental Cues and Slow-Down Behaviors in Virtual Driving.
Hsu, Chun-Chia; Chuang, Kai-Hsiang
2016-02-01
This study used a driving simulator to investigate whether the presence of pedestrians and traffic engineering designs reported to reduce overall traffic speed at intersections can facilitate drivers adopting lower impact-speed behaviors at pedestrian crossings. Twenty-eight men (M age = 39.9 yr., SD = 11.5) with drivers' licenses participated. Nine measures were obtained from the speed profile of each participant. A 14-km virtual road was presented to the participants. It included experimental scenarios of a base intersection, pedestrian presence, a pedestrian warning sign at the intersection and in advance of the intersection, and perceptual lane narrowing by hatching lines. Compared to the base intersection, the presence of pedestrians caused drivers to slow down earlier and reach a lower minimum speed before the pedestrian crossing. This speed behavior was not completely evident when adding a pedestrian warning sign at an intersection or having perceptual lane narrowing to the stop line. Additionally, installing pedestrian warning signs in advance of the intersections rather than at the intersections was associated with higher impact speeds at pedestrian crossings. PMID:27420310
Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U. Valentin; Santamaria, Fidel; Jedlicka, Peter
2016-01-01
Cl− plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl− is not well understood. The role of spines in Cl− diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl− changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl− dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl− diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl− extrusion altered Cl− diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl− diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl− diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed. PMID:26987404
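The spine effect described above can be caricatured as a random walk with transient traps: time spent dwelling in spine heads does not advance longitudinal displacement, which lowers the effective diffusion coefficient. A toy 1D sketch (trapping probability and dwell time are hypothetical, not fitted to the paper's simulations):

```python
import random

def msd_slope(p_trap, trap_steps, n_walkers=2000, n_steps=400, seed=1):
    """Mean squared displacement per time step for a 1D lattice walk where,
    after each move, the walker is trapped in a 'spine' for trap_steps steps
    with probability p_trap. The slope ~ 2*D_eff in lattice units; for a
    free walk it is 1, and trapping reduces it by roughly the fraction of
    time spent moving, 1/(1 + p_trap*trap_steps)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x, wait = 0, 0
        for _ in range(n_steps):
            if wait > 0:
                wait -= 1          # dwelling inside a spine head
                continue
            x += rng.choice((-1, 1))
            if rng.random() < p_trap:
                wait = trap_steps  # transiently captured by a spine
        total += x * x
    return total / (n_walkers * n_steps)
```

With, say, p_trap = 0.2 and a 5-step dwell, the walker moves only about half the time, and the MSD slope drops accordingly; this is the qualitative sense in which spines "slow down" longitudinal diffusion.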
Experimental observation of critical slowing down as an early warning of population collapse
NASA Astrophysics Data System (ADS)
Vorselen, Daan; Dai, Lei; Korolev, Kirill; Gore, Jeff
2012-02-01
Near tipping points marking population collapse or other critical transitions in complex systems, small changes in conditions can result in drastic shifts in the system state. In theoretical models it is known that early warning signals can be used to predict the approach of these tipping points (bifurcations), but little is known about how these signals can be detected in practice. Here we use the budding yeast Saccharomyces cerevisiae to study these early warning signals in controlled experimental populations. We grow yeast in the sugar sucrose, where cooperative feeding dynamics cause a fold bifurcation; falling below a critical population size results in sudden collapse. We demonstrate the experimental observation of an increase in both the size and timescale of the fluctuations of population density near this fold bifurcation. Furthermore, we test the utility of theoretically predicted warning signals by observing them in two different slowly deteriorating environments. These findings suggest that these generic indicators of critical slowing down can be useful in predicting catastrophic changes in population biology.
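Near a fold bifurcation the linearized dynamics reduce to an AR(1) process x_{t+1} = φ·x_t + noise, with φ → 1 as the recovery rate vanishes; that is why both the size (variance) and timescale (lag-1 autocorrelation) of fluctuations grow. A minimal sketch of those two standard indicators (parameters illustrative, not the yeast data):

```python
import random

def ar1_series(phi, n=20000, sigma=1.0, seed=0):
    """Linearized fluctuations near equilibrium: x_{t+1} = phi*x_t + noise.
    phi close to 1 corresponds to a slow recovery rate (near the tipping point)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: the 'timescale' early-warning indicator."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in xs)
    return num / den

def variance(xs):
    """Fluctuation size: the second early-warning indicator."""
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)
```

Comparing a healthy system (φ = 0.3) to one near collapse (φ = 0.95) shows both indicators rising, mirroring the experimental observation of larger, slower fluctuations near the fold.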
NASA Astrophysics Data System (ADS)
Mohapatra, Namrata; Tønnesen, Jan; Vlachos, Andreas; Kuner, Thomas; Deller, Thomas; Nägerl, U. Valentin; Santamaria, Fidel; Jedlicka, Peter
2016-03-01
Cl− plays a crucial role in neuronal function and synaptic inhibition. However, the impact of neuronal morphology on the diffusion and redistribution of intracellular Cl− is not well understood. The role of spines in Cl− diffusion along dendritic trees has not been addressed so far. Because measuring fast and spatially restricted Cl− changes within dendrites is not yet technically possible, we used computational approaches to predict the effects of spines on Cl− dynamics in morphologically complex dendrites. In all morphologies tested, including dendrites imaged by super-resolution STED microscopy in live brain tissue, spines slowed down longitudinal Cl− diffusion along dendrites. This effect was robust and could be observed in both deterministic as well as stochastic simulations. Cl− extrusion altered Cl− diffusion to a much lesser extent than the presence of spines. The spine-dependent slowing of Cl− diffusion affected the amount and spatial spread of changes in the GABA reversal potential thereby altering homosynaptic as well as heterosynaptic short-term ionic plasticity at GABAergic synapses in dendrites. Altogether, our results suggest a fundamental role of dendritic spines in shaping Cl− diffusion, which could be of relevance in the context of pathological conditions where spine densities and neural excitability are perturbed.
Spectral history correction of microscopic cross sections for the PBR using the slowing down balance
Hudson, N.; Rahnema, F.
2006-07-01
A method has been formulated to account for depletion effects on microscopic cross sections within a Pebble Bed Reactor (PBR) spectral zone without resorting to calls to the spectrum (cross section generation) code or relying upon table interpolation between data at different values of burnup. In this method, infinite medium microscopic cross sections, fine group fission spectra, and modulation factors are pre-computed at selected isotopic states. This fine group information is used with the local spectral zone nuclide densities to generate new cross sections for each spectral zone. The local spectrum used to generate these microscopic cross sections is estimated through the solution to the cell-homogenized, infinite medium slowing down balance equation during the flux calculation. This technique is known as Spectral History Correction (SHC), and it is formulated to specifically account for burnup within a spectral zone. It was found that the SHC technique accurately calculates local broad group microscopic cross sections with local burnup information. Good agreement is obtained with cross sections generated directly by the cross section generator. Encouraging results include improvement in the converged fuel cycle eigenvalue, the power peaking factor, and the flux. It was also found that the method compared favorably to the benchmark problem in terms of the computational speed. (authors)
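The slowing-down balance used for the spectrum estimate has, in the infinite-medium weak-absorption limit, the classic asymptotic flux φ(E) ∝ 1/(ξΣ_s E), i.e., flat in lethargy u = ln E; broad-group microscopic cross sections then follow by flux weighting, σ_g = ∫σ(E)φ(E)dE / ∫φ(E)dE. A hedged sketch of that collapse under a 1/E weight (an illustration of the generic technique, not the SHC implementation):

```python
import math

def collapse_group(sigma, e_lo, e_hi, n=1000):
    """Flux-weighted one-group cross section over [e_lo, e_hi], assuming the
    asymptotic 1/E slowing-down spectrum. With phi(E) ~ 1/E, the weight
    dE * phi(E) = du is flat in lethargy, so a midpoint rule in u suffices."""
    u_lo, u_hi = math.log(e_lo), math.log(e_hi)
    num = 0.0
    for i in range(n):
        u = u_lo + (i + 0.5) * (u_hi - u_lo) / n
        num += sigma(math.exp(u))   # equal lethargy weights
    return num / n
```

A constant cross section collapses to itself, and a 1/v cross section collapses to 2c(E_lo^-1/2 − E_hi^-1/2)/ln(E_hi/E_lo), which is the usual analytic check on such a routine.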
Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan
2016-01-01
Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3-4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting. PMID:27189321
NASA Astrophysics Data System (ADS)
Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan
2016-05-01
Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3-4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting.
Zhang, Y. P.; Liu, Yi; Yuan, G. L.; Yang, J. W.; Song, X. Y.; Song, X. M.; Cao, J. Y.; Lei, G. J.; Wei, H. L.; Li, Y. G.; Shi, Z. B.; Li, X.; Yan, L. W.; Yang, Q. W.; Duan, X. R.; Isobe, M.; Collaboration: HL-2A Team
2012-11-15
Physics related to fast ions in magnetically confined fusion plasmas is a very important issue, since these particles will play an important role in future burning plasmas. Indeed, they will act as the primary heating source and will sustain the self-ignited condition. To measure the fast ion slowing-down times in magnetohydrodynamic-quiescent plasmas in different scenarios, very short pulses of a deuterium neutral beam, so-called 'blips,' with duration of about 5 ms were tangentially co-injected into a deuterium plasma at the HuanLiuqi-2A (commonly referred to as HL-2A) tokamak [L. W. Yan, Nucl. Fusion 51, 094016 (2011)]. The decay rate of 2.45 MeV D-D fusion neutrons produced by beam-plasma reactions following neutral beam termination was measured by means of a 235U fission chamber. Experimental results were compared with those predicted by a classical slowing-down model. These results show that the fast ions are well confined with a peaked profile and are slowed down classically without significant loss in the HL-2A tokamak. Moreover, it has been observed that during electron cyclotron resonance heating the fast ions have a longer slowing-down time and the neutron emission rate decay time becomes longer.
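The "classical slowing-down model" referenced above is the Spitzer picture: beam ions above the critical energy drag mainly on electrons, with a time constant τ_se ∝ T_e^{3/2}/n_e, which is why electron cyclotron resonance heating (raising T_e) lengthens the slowing-down time. A sketch using NRL-formulary-style expressions; the coefficients and plasma parameters below are standard textbook values assumed for illustration, not HL-2A data:

```python
import math

def spitzer_time_s(T_e_eV, n_e_cm3, A_b=2.0, Z_b=1.0, lnLambda=17.0):
    """Spitzer ion-electron slowing-down time (s), NRL-formulary style:
    tau_se = 6.27e8 * A_b * T_e^(3/2) / (Z_b^2 * n_e * lnLambda),
    with T_e in eV and n_e in cm^-3."""
    return 6.27e8 * A_b * T_e_eV ** 1.5 / (Z_b ** 2 * n_e_cm3 * lnLambda)

def critical_energy_eV(T_e_eV, A_b=2.0):
    """Critical energy (drag on electrons = drag on ions) for a deuterium
    beam in a pure deuterium plasma: E_c ~ 18.6 * T_e."""
    return 14.8 * A_b * T_e_eV * 0.5 ** (2.0 / 3.0)

def slowing_down_time_s(E_b_eV, T_e_eV, n_e_cm3, A_b=2.0):
    """Time for a fast ion of energy E_b to slow to the critical energy:
    tau_s = (tau_se/3) * ln(1 + (E_b/E_c)^(3/2))."""
    tau_se = spitzer_time_s(T_e_eV, n_e_cm3, A_b=A_b)
    E_c = critical_energy_eV(T_e_eV, A_b=A_b)
    return (tau_se / 3.0) * math.log(1.0 + (E_b_eV / E_c) ** 1.5)
```

Because τ_se scales as T_e^{3/2}, doubling the electron temperature at fixed density roughly triples the slowing-down time, consistent with the qualitative ECRH observation quoted in the abstract.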
Fission Fragment Spectroscopy Using a Frisch-Gridded Chamber in RPI's Lead Slowing-Down Spectrometer
NASA Astrophysics Data System (ADS)
Romano, Catherine
2006-10-01
A double-sided Frisch-gridded fission chamber for use in RPI's Lead Slowing-Down Spectrometer (LSDS) is being developed at Rensselaer Polytechnic Institute. Placing this fission chamber in the high neutron flux of the LSDS allows measurements of neutron-induced fission cross sections, as well as the mass and kinetic energy of the fission fragments of various isotopes. The fission chamber consists of two anodes shielded by Frisch grids on either side of a single cathode. The sample is deposited on a thin polyimide film located in the center of the cathode. Samples are made by dissolving small amounts of actinides in solution, placing the solution on the films and allowing it to evaporate. The anode signal and the sum of the anode and grid signals are collected by the data acquisition system. These values are used to calculate the angle of emission of the fission fragments, which is then used to determine their energies and masses. RPI's LSDS is a 75 ton, 1.8 m cube of lead. The RPI 60 MeV linac creates neutrons through (γ,n) reactions when the electrons strike a tantalum target inside the lead spectrometer. The resulting neutron flux is about 4 orders of magnitude larger than in an equivalent-resolution time-of-flight experiment. The high neutron flux allows for the measurement of isotopes that are not available in large quantities (sub-micrograms) or with small fission cross sections (microbarns). In collaboration with Ezekiel Blain, Zack Goldstein, Yaron Danon and Robert Block at Rensselaer Polytechnic Institute. Funded by the Stewardship Science Academic Alliance, National Nuclear Security Administration.
Cosmic-ray slowing down in molecular clouds: Effects of heavy nuclei
NASA Astrophysics Data System (ADS)
Chabot, Marin
2016-01-01
Context. A cosmic ray (CR) spectrum propagated through the ISM contains very few low-energy (<100 MeV) particles. Recently, a local CR spectrum with strong low-energy components has been proposed to be responsible for the overproduction of the H3+ molecule in some molecular clouds. Aims: We aim to explore the effects of the chemical composition of low-energy cosmic rays (CRs) as they slow down in dense molecular clouds without magnetic fields. We considered both ionization and solid material processing rates. Methods: We used the galactic CR chemical composition from protons to iron. We propagated two types of CR spectra through a cloud made of H2, with different contents of low-energy CRs, each assumed to be initially identical for all CR species. The stopping and range of ions in matter (SRIM) package provided the necessary stopping powers. The ionization rates were computed with cross sections from recent semi-empirical laws, while effective cross sections were parametrized for solid processing rates using a power law of the stopping power (power 1 to 2). Results: The relative contribution to the cloud ionization of protons and heavy CRs was found to be identical everywhere in the irradiated cloud, no matter which CR spectrum we used. As compared to classical calculations using protons and the high-energy behaviour of ionization processes (Z2 scaling), absolute ionization rates were reduced by a few tens of percent, but only in the case of a spectrum with a high content of low-energy CRs. Using the same CR spectrum, we found the solid material processing rates to be reduced between the outer and inner parts of a thick cloud by a factor of 10 (as in the case of the ionization rates) or by a factor of 100, depending on the type of process.
García-Pérez, Miguel A.
2014-01-01
Time perception is studied with subjective or semi-objective psychophysical methods. With subjective methods, observers provide quantitative estimates of duration, and the data depict the psychophysical function relating subjective duration to objective duration. With semi-objective methods, observers provide categorical or comparative judgments of duration, and the data depict the psychometric function relating the probability of a certain judgment to objective duration. Both approaches are used to study whether subjective and objective time run at the same pace or whether time flies or slows down under certain conditions. We analyze theoretical aspects affecting the interpretation of data gathered with the most widely used semi-objective methods, including single-presentation and paired-comparison methods. For this purpose, a formal model of psychophysical performance is used in which subjective duration is represented via a psychophysical function and the scalar property. This provides the timing component of the model, which is invariant across methods. A decisional component that varies across methods reflects how observers use subjective durations to make judgments and give the responses requested under each method. Application of the model shows that psychometric functions in single-presentation methods are uninterpretable, because the various influences on observed performance are inextricably confounded in the data. In contrast, data gathered with paired-comparison methods permit separating out those influences. Prevalent approaches to fitting psychometric functions to data are also discussed and shown to be inconsistent with widely accepted principles of time perception, implicitly assuming instead that subjective time equals objective time and that observed differences across conditions reflect not differences in perceived duration but criterion shifts. These analyses prompt evidence-based recommendations for best methodological practice in studies on time perception.
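The timing component described in this abstract (a psychophysical function plus the scalar property) can be illustrated with a minimal Monte Carlo sketch. Everything concrete here is an illustrative assumption rather than the authors' fitted model: the power-law form k·t^β, the Weber fraction of 0.15, and a paired-comparison decision rule that simply compares two noisy subjective durations.

```python
import random

def subjective_duration(t, k=1.0, beta=1.0, weber=0.15, rng=random):
    """One noisy subjective duration for objective duration t.

    The mean follows a power-law psychophysical function k * t**beta;
    the scalar property makes the standard deviation proportional to
    the mean (constant Weber fraction). Parameter values are illustrative.
    """
    mu = k * t ** beta
    return max(0.0, rng.gauss(mu, weber * mu))

def p_second_longer(t1, t2, n=20000, rng=random):
    """Simulated psychometric point for a paired comparison: the
    probability that the second interval is judged longer, assuming the
    observer compares the two subjective durations with no criterion shift."""
    wins = sum(
        subjective_duration(t2, rng=rng) > subjective_duration(t1, rng=rng)
        for _ in range(n)
    )
    return wins / n
```

Sweeping t2 around a fixed t1 traces a psychometric function; a bias term added to the comparison is where the method-dependent decisional component would enter.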
Climatic Slow-down of the Pamir-Karakoram-Himalaya Glaciers Over the Last 25 Years
NASA Astrophysics Data System (ADS)
Dehecq, A.; Gourmelen, N.; Trouvé, E.
2015-12-01
Climate warming over the 20th century has caused drastic changes in mountain glaciers globally, and in the Himalayan glaciers in particular. The stakes are high; glaciers and ice caps are the largest contributor to the increase in the mass of the world's oceans, and the Himalayas play a key role in the hydrology of the region, impacting the economy, food security and flood risk. Partial monitoring of the Himalayan glaciers has revealed a contrasting picture; while many of the Himalayan glaciers are retreating, locally stable or advancing glaciers have also been observed in this region. Several studies based on field measurements or remote sensing have shown a dominant slow-down of mountain glaciers globally in response to these changes, but they are restricted to a few glaciers or small regions, and none has analysed the dynamic response of glaciers to climate change at regional scales. Here we present a region-wide analysis of annual glacier flow velocity covering the Pamir-Karakoram-Himalaya region, obtained from the analysis of the entire archive of Landsat data. Over 90% of the ice-covered regions, as defined by the Randolph Glacier Inventory, are measured, with a precision on the retrieved velocity of the order of 4 m/yr. The change in velocities over the last 25 years is analysed with reference to regional glacier mass balance and topographic characteristics. We show that the first-order temporal evolution of glacier flow mirrors the pattern of glacier mass balance. We observe a general decrease of ice velocity in regions of known ice mass loss, and a more complex pattern of mixed acceleration and deceleration of ice velocity in regions known to be affected by stable mass balance and surge-like behavior.
Tao, Ran; Wang, Shitao; Xia, Xiaopeng; Wang, Youhua; Cao, Yi; Huang, Yuejiao; Xu, Xinbao; Liu, Zhongbing; Liu, Peichao; Tang, Xiaohang; Liu, Chun; Shen, Gan; Zhang, Dongmei
2015-08-01
Osteoarthritis (OA) is the most common arthritis and one of the major causes of joint pain in elderly people. The aim of this study was to investigate the effects of pyrroloquinoline quinone (PQQ) on degeneration-related changes in osteoarthritis (OA). SW1353 cells were stimulated with IL-1β to establish a chondrocyte injury model in vitro. PQQ was administered to SW1353 cultures 1 h before IL-1β treatment. Amounts of MMP-1, MMP-13, P65, IκBα, ERK, p-ERK, P38, and p-P38 were measured via western blot. The production of NO was determined by the Griess reaction assay and reflected by the iNOS level. Meniscal-ligamentous injury (MLI) was performed on 8-week-old rats to establish the OA rat model. PQQ was injected intraperitoneally 3 days before MLI and consecutively until harvest, and the level of articular cartilage degeneration was assessed. The expressions of MMP-1 and MMP-13 were significantly downregulated after PQQ treatment compared with the IL-1β-alone group. NO production and iNOS expression were decreased by PQQ treatment compared with the control group. Amounts of nuclear P65 were upregulated in SW1353 cells after stimulation with IL-1β, while PQQ significantly inhibited this translocation. In the rat OA model, treatment with PQQ markedly decelerated the degeneration of articular cartilage. These findings suggest that PQQ can inhibit the expression of the OA-related catabolic proteins MMP-1 and MMP-13 and the production of NO, and thus slow down articular cartilage degeneration and OA progression. Owing to these beneficial effects, PQQ is expected to find pharmacological application in OA clinical prevention and treatment in the near future. PMID:25687637
Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report
Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.
2011-09-30
Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used-fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used-fuel parameter space was demonstrated. Also in FY2011, PNNL continued to develop an analytical model. These efforts included adding six more non-fissile absorbers to the analytical shielding function and accounting for the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (the sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as the purely empirical approach.
Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY12 Status Report
Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Siciliano, Edward R.; Warren, Glen A.
2012-09-28
Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory methods. This document is a progress report for FY2012 PNNL analysis and algorithm development. Progress made by PNNL in FY2012 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel assemblies. PNNL further refined the semi-empirical model developed in FY2011, based on singular value decomposition (SVD), to numerically account for the effects of self-shielding. The average uncertainty in the Pu mass across the NGSI-64 fuel assemblies was shown to be less than 3% using only six calibration assemblies with a 2% uncertainty in the isotopic masses. When calibrated against the six NGSI-64 fuel assemblies, the algorithm was able to determine the total Pu mass to within <2% uncertainty for the 27 diversion cases also developed under NGSI. Two purely empirical algorithms were developed that do not require the use of Pu isotopic fission chambers. The semi-empirical and purely empirical algorithms were successfully tested using MCNPX simulations as well as applied to experimental data measured by RPI using their LSDS. The algorithms were able to determine the 235U masses of the RPI measurements with an average uncertainty of 2.3%. Analyses were conducted that provided valuable insight with regard to design requirements.
A slowing down of proton motion from HPTS to water adsorbed on the MCM-41 surface.
Alarcos, Noemí; Cohen, Boiko; Douhal, Abderrazzak
2016-01-28
We report on the steady-state and femtosecond-nanosecond (fs-ns) behaviour of 8-hydroxypyrene-1,3,6-trisulfonate (pyranine, HPTS) and its interaction with mesoporous silica-based materials (MCM-41) in both the solid state and dichloromethane (DCM) suspensions, in the absence and presence of water. In the absence of water, HPTS forms aggregates, which are characterized by a broad emission spectrum and multiexponential behaviour (τsolid-state/DCM = 120 ps, 600 ps, 2.2 ns). Upon interaction with MCM-41, the aggregate population is found to be lower, leading to the formation of adsorbed monomers. In the presence of water (1%), HPTS with and without MCM-41 materials in DCM suspensions undergoes an excited-state intermolecular proton-transfer (ESPT) reaction from the protonated form (ROH*), producing a deprotonated species (RO(-)*). The long-time emission decays of the ROH* in the different systems in the presence of water are multiexponential and are analysed using the diffusion-assisted geminate recombination model. The obtained proton-transfer and recombination rate constants for HPTS and HPTS/MCM-41 complexes in DCM suspensions in the presence of water are kPT = 13 ns(-1), krec = 7.5 Å ns(-1), and kPT = 5.4 ns(-1), krec = 2.2 Å ns(-1), respectively. The slowing down of both processes in the latter case is explained in terms of specific interactions of the dye and of the water molecules with the silica surface. The ultrafast dynamics (fs regime) of the HPTS/MCM-41 complexes in DCM suspensions, without and with water, shows two components, which are assigned to intramolecular vibrational-energy redistribution (IVR) (∼120 fs vs. ∼0.8 ps) and to vibrational relaxation/cooling (VC) and charge-transfer (CT) processes (∼2 ps without water and ∼5 ps with water) of the adsorbed ROH*. Our results provide new knowledge on the interactions and the proton-transfer reaction dynamics of HPTS adsorbed on mesoporous materials. PMID:26705542
NASA Astrophysics Data System (ADS)
Fu, Da-Wei; Cai, Hong-Ling; Li, Shen-Hui; Ye, Qiong; Zhou, Lei; Zhang, Wen; Zhang, Yi; Deng, Feng; Xiong, Ren-Gen
2013-06-01
A supramolecular adduct, 4-methoxyanilinium perrhenate 18-crown-6, was synthesized, which undergoes a disorder-order structural phase transition at about 153 K (Tc) due to the slowing down of a pendulum-like motion of the 4-methoxyanilinium group upon cooling. Ferroelectric hysteresis loop measurements give a spontaneous polarization of 1.2 μC/cm2. Temperature-dependent solid-state nuclear magnetic resonance measurements reveal three kinds of molecular motion in the compound: the pendulum-like swing of the 4-methoxyanilinium cation, rotation of the 18-crown-6 ring, and rotation of the methoxyl group. As the temperature decreases, the first two motions freeze at about 153 K and the methoxyl group becomes rigid at around 126 K. The slowing down or freezing of the pendulum-like motion of the cation on cooling corresponds to the centrosymmetric-to-noncentrosymmetric rearrangement of the compound, resulting in ferroelectricity.
First test experiment to produce the slowed-down RI beam with the momentum-compression mode at RIBF
NASA Astrophysics Data System (ADS)
Sumikama, T.; Ahn, D. S.; Fukuda, N.; Inabe, N.; Kubo, T.; Shimizu, Y.; Suzuki, H.; Takeda, H.; Aoi, N.; Beaumel, D.; Hasegawa, K.; Ideguchi, E.; Imai, N.; Kobayashi, T.; Matsushita, M.; Michimasa, S.; Otsu, H.; Shimoura, S.; Teranishi, T.
2016-06-01
The 82Ge beam has been produced by the in-flight fission reaction of the 238U primary beam at 345 MeV/u at the RIKEN RI Beam Factory, and slowed down to about 15 MeV/u using energy degraders. The momentum-compression mode was applied to the second stage of the BigRIPS separator to reduce the momentum spread. The energy was successfully reduced to 13 ± 2.5 MeV/u, as expected. The focus was not optimized at the end of the second stage, so the beam size was larger than expected. The transmission of the second stage was half of the simulated value, mainly due to this defocusing. The two-stage separation worked very well for the slowed-down beam with the momentum-compression mode.
Guttal, Vishwesha; Raghavendra, Srinivas; Goel, Nikunj; Hoarau, Quentin
2016-01-01
Complex-systems-inspired analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies of climatic and ecological dynamical systems have shown that the approach to a tipping point is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. It has therefore been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US markets (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time-series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions, which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms. PMID:26761792
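The two indicators discussed in this abstract can be computed with a short, dependency-free sketch. Critical slowing down would show up as rising lag-1 autocorrelation, while the rising-variability precursor the study reports shows up in the variance; the window length is arbitrary here, and the detrending that would precede this in a real analysis is omitted.

```python
def rolling_indicators(x, window):
    """Rolling variance and lag-1 autocorrelation over a sliding window.

    Returns a list of (variance, lag1_autocorrelation) tuples, one per
    window position. Both are computed from deviations about the window
    mean, so the autocorrelation is bounded in [-1, 1].
    """
    out = []
    for i in range(window, len(x) + 1):
        w = x[i - window:i]
        m = sum(w) / window
        dev = [v - m for v in w]
        var = sum(d * d for d in dev) / window
        cov = sum(dev[j] * dev[j + 1] for j in range(window - 1)) / window
        out.append((var, cov / var if var > 0 else 0.0))
    return out
```

Applied to detrended market returns, the study's finding would correspond to an upward trend in the variance column without a matching rise in the autocorrelation column before crashes.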
NASA Astrophysics Data System (ADS)
Ebne Abbasi, Z.; Esfandyari-Kalejahi, A.
2016-07-01
The slowing-down and deflection times of test particles in a plasma are studied in non-extensive statistics. The relevant relations are derived using the Fokker-Planck equation. It is noted that the slowing-down and deflection times are modified considerably in non-extensive statistics in comparison with the Boltzmann-Gibbs case. It is found that by decreasing the non-extensivity index q (1/3 < q ≤ 1, which corresponds to a plasma with excess superextensive particles), both the slowing-down and deflection times increase. Also, for q ≥ 1, i.e., subextensive particles, the same results are obtained by decreasing q. Additionally, the effects of non-extensively distributed particles on the electrical conductivity and diffusion coefficient of the plasma are studied. It is shown that plasmas with smaller q are better conductors in both 1/3 < q ≤ 1 and q ≥ 1. In addition, it is observed that by increasing q, the Dreicer field increases for both superextensive and subextensive particles. Moreover, it is found that the diffusion coefficient across a magnetic field decreases with decreasing q. Furthermore, our results reduce to the solutions for a Maxwellian plasma in the extensive limit q → 1. This research will be helpful in understanding the relaxation times and transport properties of fusion and astrophysical plasmas.
Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY12 Status Report
Warren, Glen A.; Anderson, Kevin K.; Casella, Andrew M.; Danon, Yaron; Devlin, M.; Gavron, A.; Haight, R. C.; Harris, Jason; Imel, G. R.; Kulisek, Jonathan A.; O'Donnell, J. M.; Stewart, T.; Weltz, Adam
2012-10-01
Executive Summary The Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign is supporting a multi-institutional collaboration to study the feasibility of using Lead Slowing Down Spectroscopy (LSDS) to conduct direct, independent and accurate assay of fissile isotopes in used fuel assemblies. The collaboration consists of Pacific Northwest National Laboratory (PNNL), Los Alamos National Laboratory (LANL), Rensselaer Polytechnic Institute (RPI), and Idaho State University (ISU). There are three main challenges to implementing LSDS to assay used fuel assemblies: the development of an algorithm for interpreting the data with an acceptable accuracy for the fissile masses, the development of suitable detectors for the technique, and the experimental benchmarking of the approach. This report is a summary of the progress in these areas made by the collaboration during FY2012. Significant progress was made on the project in FY2012. Extensive characterization of a “semi-empirical” algorithm was conducted. For example, we studied the impact on the accuracy of this algorithm of minimizing the calibration set, of uncertainties in the calibration masses, and of the choice of time window. Issues such as lead size, the number of required neutrons, placement of the neutron source, and the impact of cadmium around the detectors were also studied. In addition, new algorithms were developed that do not require the use of plutonium fission chambers. These algorithms were applied to measurement data taken by RPI and shown to determine the 235U mass within 4%. For detectors, a new concept for a fast neutron detector based on 4He recoil from neutron scattering was investigated. The detector has the potential to provide a couple of orders of magnitude more sensitivity than 238U fission chambers. Progress was also made on the more conventional approach of using 232Th fission chambers as fast neutron detectors.
Slowing-down times and stopping powers for ˜2-MeV μ+ in low-pressure gases
NASA Astrophysics Data System (ADS)
Senba, Masayoshi; Arseneau, Donald J.; Pan, James J.; Fleming, Donald G.
2006-10-01
The times taken by positive muons (μ+) to slow down from initial energies in the range ˜3 to 1 MeV, to the energy of the last muonium formation, ≈10 eV, at the end of cyclic charge exchange, have been measured in the pure gases H2, N2, Ar, and in the gas mixtures Ar-He, Ar-Ne, Ar-CF4, H2-He, and H2-SF6, by the muon spin rotation (μSR) technique. At 1 atm pressure, these slowing-down times, τSD, in Ar and N2 vary from ˜14 ns at the highest initial energies of 2.8 MeV to 6.5 ns at 1.6 MeV, with much longer times, ˜34 ns, seen at this energy in H2. Similar variations are seen in the gas mixtures, depending also on the total charge and nature of the mixture, and are consistent with well-established (Bragg) additivity rules. The times τSD could also be used to determine the stopping powers, dE/dx, of the positive muon in N2, Ar, and H2 at kinetic energies near 2 MeV. The results demonstrate that the μ+ and proton have the same stopping power at the same projectile velocity, as expected from the historic Bethe-Bloch formula, but not previously shown experimentally, to our knowledge, for the muon in gases at these energies. The energy of the first neutralization collision forming muonium (hydrogen), which initiates a series of charge-exchanging collisions, is also calculated for He, Ne, and Ar. The formalism necessary to describe the stopping power and moderation times, for either muon or proton, in three energy regimes (the Bethe-Bloch, cyclic charge-exchange, and thermalization regimes) is developed and discussed in comparison with the experimental measurements reported here and elsewhere. The slowing-down times through the first two regimes are controlled by the relevant ionization and charge-exchange cross sections, whereas the final thermalization regime is most sensitive to the forwardness of the elastic scattering cross sections. In this regime the slowing-down times (to kT) at nominal pressures are expected to be ≲100 ns.
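The relation linking the two measured quantities in this abstract is t = ∫ dE / (v(E) · S(E)), where S = dE/dx. The sketch below integrates this numerically with a nonrelativistic velocity and a made-up Bethe-like stopping-power shape; the coefficients and the toy_stopping_power function are illustrative assumptions, not the measured gas data, and the charge-exchange and thermalization regimes discussed above are ignored.

```python
import math

MUON_MASS_MEV = 105.658   # muon rest energy in MeV
C_CM_PER_NS = 29.9792     # speed of light in cm/ns

def velocity_cm_per_ns(E_mev):
    """Nonrelativistic velocity for kinetic energy E (adequate for E << m c^2)."""
    return C_CM_PER_NS * math.sqrt(2.0 * E_mev / MUON_MASS_MEV)

def slowing_down_time_ns(E0, Ef, stopping_power, steps=10000):
    """Midpoint-rule evaluation of t = integral from Ef to E0 of dE / (v(E) * S(E)).

    stopping_power(E) must return dE/dx in MeV/cm; the result is in ns.
    Only the Bethe-Bloch regime is modeled.
    """
    dE = (E0 - Ef) / steps
    total = 0.0
    for i in range(steps):
        E = Ef + (i + 0.5) * dE
        total += dE / (velocity_cm_per_ns(E) * stopping_power(E))
    return total

def toy_stopping_power(E_mev):
    """Made-up Bethe-like shape, roughly (1/E) times a log term; purely illustrative."""
    return 5e-4 * math.log(1.0 + E_mev / 1e-4) / E_mev
```

In the experiment the inference runs the other way: measured τSD values constrain dE/dx.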
Edge dislocation slows down oxide ion diffusion in doped CeO2 by segregation of charged defects
NASA Astrophysics Data System (ADS)
Sun, Lixin; Marrocchelli, Dario; Yildiz, Bilge
2015-02-01
Strained oxide thin films are of interest for accelerating oxide ion conduction in electrochemical devices. Although the effect of elastic strain has been uncovered theoretically, the effect of dislocations on the diffusion kinetics in such strained oxides is yet unclear. Here we investigate a 1/2<110>{100} edge dislocation by performing atomistic simulations in 4-12% doped CeO2 as a model fast ion conductor. At equilibrium, depending on the size of the dopant, trivalent cations and oxygen vacancies are found to simultaneously enrich or deplete either in the compressive or in the tensile strain fields around the dislocation. The associative interactions among the point defects in the enrichment zone and the lack of oxygen vacancies in the depletion zone slow down oxide ion transport. This finding is contrary to the fast diffusion of atoms along the dislocations in metals and should be considered when assessing the effects of strain on oxide ion conductivity.
Measurement of low energy neutron spectrum below 10 keV with the slowing down time method
NASA Astrophysics Data System (ADS)
Maekawa, F.; Oyama, Y.
1996-02-01
No general-purpose method of neutron spectrum measurement in the eV energy region has been established so far. Neutron spectrum measurement in this energy region was attempted by applying the slowing-down time (SDT) method, for the first time, inside two types of shields for fusion reactors, type 316 stainless steel (SS316) and SS316/water layered assemblies, in combination with pulsed neutrons. In the SS316 assembly, neutron spectra below 1 keV were measured with an accuracy of better than 10%. Although application of the SDT method was expected to be very difficult for the SS316/water assembly, since it contains the lightest atom, hydrogen, the measurement demonstrated that the SDT method is still effective for such a shield assembly. The SDT method was also extended to thermal-flux measurement in the SS316/water assembly. The present study demonstrated that the SDT method is effective for neutron spectrum measurement in the eV energy region.
NASA Astrophysics Data System (ADS)
de Filippo, E.; Lanzanó, G.; Amorini, F.; Geraci, E.; Grassi, L.; La Guidara, E.; Lombardo, I.; Politi, G.; Rizzo, F.; Russotto, P.; Volant, C.; Hagmann, S.; Rothard, H.
2011-06-01
The slowing down of fast electrons emitted from insulators [Mylar, polypropylene (PP)] irradiated with swift ion beams (C, O, Kr, Ag, Xe; 20-64 MeV/u) was measured by the time-of-flight method at LNS, Catania and GANIL, Caen. The charge buildup, deduced from both convoy- and binary-encounter electron peak shifts, leads to target material-dependent potentials (6.0 kV for Mylar, 2.8 kV for PP). The number of projectiles needed for charging up (charging-up time constant) is inversely proportional to the electronic energy loss. After a certain time, a sudden decharging occurs. For low beam currents, charging-up time, energy shift corresponding to maximum charge buildup, and time of decharging are regular. For high beam currents, the time intervals become irregular (chaotic).
NASA Astrophysics Data System (ADS)
Fiks, E. I.; Pivovarov, Yu. L.
2015-07-01
Theoretical analysis and representative calculations of angular and spectral distributions of X-ray Transition Radiation (XTR) by Relativistic Heavy Ions (RHI) crossing a radiator are presented taking into account both XTR absorption and RHI slowing-down. The calculations are performed for RHI energies of GSI, FAIR, CERN SPS and LHC and demonstrate the influence of XTR photon absorption as well as RHI slowing-down in a radiator on the appearance/disappearance of interference effects in both angular and spectral distributions of XTR.
1995-03-29
Regulatory reformers in Congress are easing off the accelerator as they recognize that some of their more far-reaching proposals lack sufficient support to win passage. Last week the proposed one-year moratorium on new regulations was set back in the Senate by its main sponsor, Sen. Don Nickles (R., OK), who now seeks to replace it with a more moderate bill. Nickles's substitute bill would give Congress 45 days after a regulation is issued to decide whether to reject it. It also retroactively allows for review of 80 regulations issued since November 9, 1994. Asked how his new proposal is superior to a moratorium, which is sharply opposed by the Clinton Administration, Nickles says he thinks it is better because it's permanent. The Chemical Manufacturers Association (CMA) has not publicly made a regulatory moratorium a top priority, but has quietly supported it by joining with other industry groups lobbying on the issue. A moratorium would halt EPA expansion of the Toxics Release Inventory (TRI) and allow the delisting of several TRI chemicals.
D-Factor: A Quantitative Model of Application Slow-Down in Multi-Resource Shared Systems
Lim, Seung-Hwan; Huh, Jae-Seok; Kim, Youngjae; Shipman, Galen M; Das, Chita
2012-01-01
Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, often including energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this paper, we analyze the slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, on virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We also show that the model can be integrated with an existing on-line scheduler to minimize the makespan of workloads.
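The quadratic structure of such a model can be sketched with a hypothetical dilation rule: a job's slow-down grows with the dot products between its resource-loading vector and those of its co-runners, so contention arises only where loadings overlap. This is only the shape of the idea; the paper's fitted loading-matrix formulation is not reproduced here.

```python
def dilation_factors(loads):
    """Toy quadratic dilation model.

    loads[j] is job j's loading vector: the fraction of each shared
    resource it uses when run alone. Job j's dilation factor is 1.0
    (no slow-down) plus the summed pairwise dot products with all
    co-runners. The unit coefficients are an illustrative assumption.
    """
    return [
        1.0 + sum(
            sum(a * b for a, b in zip(loads[j], loads[k]))
            for k in range(len(loads)) if k != j
        )
        for j in range(len(loads))
    ]
```

Under this toy rule, two jobs saturating the same resource dilate each other, while jobs loading disjoint resources run as if alone.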
Measurements of (n,α) cross-section of small samples using a lead-slowing-down-spectrometer
NASA Astrophysics Data System (ADS)
Romano, Catherine; Danon, Yaron; Haight, Robert C.; Wender, Stephen A.; Vieira, David J.; Bond, Evelyn M.; Rundberg, Robert S.; Wilhelmy, Jerry B.; O'Donnell, John M.; Michaudon, Andre F.; Bredeweg, Todd A.; Rochman, Dimitri; Granier, Thierry; Ethvignot, Thierry
2006-06-01
At the Los Alamos Neutron Science Center (LANSCE), a compensated ionization chamber (CIC) was placed in a lead slowing-down spectrometer (LSDS) to measure the 6Li(n,α)3H cross section as a feasibility test for further work. The LSDS consists of a 1.2 m cube of lead with a tungsten target in the center, where spallation neutrons are produced under bombardment with pulses of 800 MeV protons. The resulting neutron flux is of the order of 10^14 n/cm^2/s, which allows cross-section measurements on samples of the order of tens of nanograms. The initial experiment measured a 91 μg sample of natural lithium fluoride. Cross-section measurements were obtained in the 0.1 eV-2 keV energy range. A 62 μg sample was then placed in the chamber with a higher neutron beam intensity, and data were obtained in the 0.1-300 eV range. Adjustments to the chamber dimensions and electronic configuration will improve gamma-flash compensation at high beam intensity, decrease the dead time, and increase the energy range over which data can be obtained. The intense neutron flux will allow the use of even smaller samples.
Progressive slowing down of spin fluctuations in underdoped LaFeAsO1-xFx
NASA Astrophysics Data System (ADS)
Hammerath, F.; Gräfe, U.; Kühne, T.; Kühne, H.; Kuhns, P. L.; Reyes, A. P.; Lang, G.; Wurmehl, S.; Büchner, B.; Carretta, P.; Grafe, H.-J.
2013-09-01
The evolution of low-energy spin dynamics in the iron-based superconductor LaFeAsO1-xFx was studied over a broad doping, temperature, and magnetic field range (x = 0-0.15, T ≤ 480 K, μ0H ≤ 30 T) by means of 75As nuclear magnetic resonance. An enhanced spin-lattice relaxation rate divided by temperature, (T1T)^-1, in underdoped superconducting samples (x = 0.045, 0.05, and 0.075) suggests the presence of antiferromagnetic spin fluctuations, which are strongly reduced in optimally doped (x = 0.10) and completely absent in overdoped (x = 0.15) samples. In contrast to previous analyses, Curie-Weiss fits are shown to be insufficient to describe the data over the whole temperature range. Instead, a Bloembergen-Purcell-Pound (BPP) model is used to describe the occurrence of a peak in (T1T)^-1 clearly above the superconducting transition, reflecting a progressive slowing down of the spin fluctuations down to the superconducting phase transition.
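For reference, the standard BPP form behind such fits (the paper's exact parametrization may differ) relates the relaxation rate to a fluctuation correlation time:

```latex
% Standard BPP relaxation form with a thermally activated correlation time:
\frac{1}{T_1} \;\propto\; \frac{\tau_c}{1 + \omega_L^2 \tau_c^2},
\qquad
\tau_c = \tau_0 \exp\!\left(\frac{E_a}{k_B T}\right)
```

The peak appears where ω_L τ_c ≈ 1, i.e. where the slowing fluctuation rate passes through the nuclear Larmor frequency ω_L, which is why a maximum in (T1T)^-1 above Tc signals progressive slowing down rather than a Curie-Weiss divergence.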
NASA Astrophysics Data System (ADS)
Eckmann, Jean-Pierre; Procaccia, Itamar
2008-07-01
The aim of this paper is to discuss some basic notions regarding generic glass-forming systems composed of particles interacting via soft potentials. Excluding explicitly hard-core interaction, we discuss the so-called glass transition in which a supercooled amorphous state is formed, accompanied by a spectacular slowing down of relaxation to equilibrium, when the temperature is changed over a relatively small interval. Using the classical example of a 50-50 binary liquid of N particles with different interaction length scales, we show the following. (i) The system remains ergodic at all temperatures. (ii) The number of topologically distinct configurations can be computed, is temperature independent, and is exponential in N. (iii) Any two configurations in phase space can be connected using elementary moves whose number is polynomially bounded in N, showing that the graph of configurations has the small-world property. (iv) The entropy of the system can be estimated at any temperature (or energy), and there is no Kauzmann crisis at any positive temperature. (v) The mechanism for the super-Arrhenius temperature dependence of the relaxation time is explained, connecting it to an entropic squeeze at the glass transition. (vi) There is no Vogel-Fulcher crisis at any finite temperature T > 0.
FOXO/DAF-16 Activation Slows Down Turnover of the Majority of Proteins in C. elegans.
Dhondt, Ineke; Petyuk, Vladislav A; Cai, Huaihan; Vandemeulebroucke, Lieselot; Vierstraete, Andy; Smith, Richard D; Depuydt, Geert; Braeckman, Bart P
2016-09-13
Most aging hypotheses assume the accumulation of damage, resulting in gradual physiological decline and, ultimately, death. Avoiding protein damage accumulation by enhanced turnover should slow down the aging process and extend the lifespan. However, lowering translational efficiency extends rather than shortens the lifespan in C. elegans. We studied turnover of individual proteins in the long-lived daf-2 mutant by combining SILeNCe (stable isotope labeling by nitrogen in Caenorhabditis elegans) and mass spectrometry. Intriguingly, the majority of proteins displayed prolonged half-lives in daf-2, whereas others remained unchanged, signifying that longevity is not supported by high protein turnover. This slowdown was most prominent for translation-related and mitochondrial proteins. In contrast, the high turnover of lysosomal hydrolases and very low turnover of cytoskeletal proteins remained largely unchanged. The slowdown of protein dynamics and decreased abundance of the translational machinery may point to the importance of anabolic attenuation in lifespan extension, as suggested by the hyperfunction theory. PMID:27626670
Smith, Leon E.; Haas, Derek A.; Gavron, Victor A.; Imel, G. R.; Ressler, Jennifer J.; Bowyer, Sonya M.; Danon, Y.; Beller, D.
2009-09-25
Under funding from the Department of Energy Office of Nuclear Energy’s Materials, Protection, Accounting, and Control for Transmutation (MPACT) program (formerly the Advanced Fuel Cycle Initiative Safeguards Campaign), Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory (LANL) are collaborating to study the viability of lead slowing-down spectroscopy (LSDS) for spent-fuel assay. Based on the results of previous simulation studies conducted by PNNL and LANL to estimate potential LSDS performance, a more comprehensive study of LSDS viability has been defined. That study includes benchmarking measurements, development and testing of key enabling instrumentation, and continued study of time-spectra analysis methods. This report satisfies the requirements for a PNNL/LANL deliverable that describes the objectives, plans and contributing organizations for a comprehensive three-year study of LSDS for spent-fuel assay. This deliverable was generated largely during the LSDS workshop held on August 25-26, 2009 at Rensselaer Polytechnic Institute (RPI). The workshop itself was a prominent milestone in the FY09 MPACT project and is also described within this report.
Ghosh, Shampa; Sinha, Jitendra Kumar; Raghunath, Manchala
2016-09-01
DNA damage caused by various sources remains one of the most researched topics in the area of aging and neurodegeneration. Increased DNA damage causes premature aging. Aging is plastic and is characterised by the decline in the ability of a cell/organism to maintain genomic stability. Lifespan can be modulated by various interventions like calorie restriction, a balanced diet of macro- and micronutrients, or supplementation with nutrients/nutrient formulations such as Amalaki rasayana, docosahexaenoic acid, resveratrol, curcumin, etc. Increased levels of DNA damage in the form of double-stranded and single-stranded breaks are associated with decreased longevity in animal models like WNIN/Ob obese rats. Erroneous DNA repair can result in accumulation of DNA damage products, which in turn result in premature aging disorders such as Hutchinson-Gilford progeria syndrome. Epigenomic studies of the aging process have opened a completely new arena for research and development of drugs and therapeutic agents. We propose here that agents or interventions that can maintain epigenomic stability and facilitate the DNA repair process can slow down the progress of premature aging, if not completely prevent it. © 2016 IUBMB Life, 68(9):717-721, 2016. PMID:27364681
ERIC Educational Resources Information Center
Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno
2007-01-01
This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…
A method for (n,alpha) and (n,p) cross section measurements using a lead slowing-down spectrometer
NASA Astrophysics Data System (ADS)
Thompson, Jason Tyler
The need for nuclear data comes from several sources, including astrophysics, stockpile stewardship, and reactor design. Photodisintegration, neutron capture, and charged-particle-out reactions on stable or short-lived radioisotopes play crucial roles during stellar evolution and in forming solar isotopic abundances, while the same reactions can affect the safety of our national weapons stockpile or criticality and safety calculations for reactors. Although models can be used to predict some of these values, the predictions are only as good as the experimental data that constrain them. For neutron-induced emission of α particles and protons ((n,α) and (n,p) reactions) at energies below 1 MeV, the experimental data are at best scarce, and models must rely on extrapolations from unlike situations (i.e., different reactions, isotopes, and energies), providing ample room for uncertainty. In this work a new method of measuring energy-dependent (n,α) and (n,p) cross sections was developed for the energy range of 0.1 eV to ~100 keV using a lead slowing-down spectrometer (LSDS). The LSDS provides a ~10^4 neutron flux increase over the more conventionally used time-of-flight (ToF) methods at equivalent beam conditions, allowing the measurement of small cross sections (μb to mb) while using small sample masses (μg to mg). Several detector concepts were designed and tested, including specially constructed Canberra passivated, implanted, planar silicon (PIPS) detectors and gas-electron-multiplier (GEM) foils. All designs are compensated to minimize γ-flash problems. The GEM detector was found to function satisfactorily for (n,α) measurements, but the PIPS detectors were found to be better suited for (n,p) reaction measurements. A digital data acquisition (DAQ) system was programmed such that background can be measured simultaneously with the reaction cross section. Measurements of the 147Sm(n,α)144Nd and 149Sm(n,α)146Nd reaction cross sections were
Konefał, Adam; Łaciak, Marcin; Dawidowska, Anna; Osewski, Wojciech
2014-12-01
The detailed analysis of nuclear reactions occurring in materials of the door is presented for the typical construction of an entrance door to a room with a slowed-down neutron field. The changes in the construction of the door were determined to effectively reduce the level of neutron and gamma radiation in the vicinity of the door in a room adjoining the neutron field room. Optimisation of the door construction was performed with the use of Monte Carlo calculations (GEANT4). The construction proposed in this paper is based on commonly used, inexpensive protective materials such as borax (13.4 cm), lead (4 cm) and stainless steel (0.1 and 0.5 cm on the side of the neutron field room and of the adjoining room, respectively). The improved construction of the door, worked out in the presented studies, can be an effective protection against neutrons with energies up to 1 MeV. PMID:24324249
NASA Astrophysics Data System (ADS)
Latysheva, L. N.; Bergman, A. A.; Sobolevsky, N. M.; Ilić, R. D.
2013-04-01
Lead slowing-down (LSD) spectrometers have a low energy resolution (about 30%), but their luminosity is 10^3 to 10^4 times higher than that of time-of-flight (TOF) spectrometers. A high luminosity of LSD spectrometers makes it possible to use them to measure neutron cross sections for samples of mass about several micrograms. These features specify a niche for the application of LSD spectrometers in measuring neutron cross sections for elements hardly available in macroscopic amounts, in particular, for actinides. A mathematical simulation of the parameters of the SVZ-100 LSD spectrometer of the Institute for Nuclear Research (INR, Moscow) is performed in the present study on the basis of the MCNPX code. It is found that the moderation constant, which is the main parameter of LSD spectrometers, is highly sensitive to the size and shape of detecting volumes in calculations and, hence, to the real size of experimental channels of the LSD spectrometer.
Torrigiani, Patrizia; Bressanin, Daniela; Ruiz, Karina Beatriz; Tadiello, Alice; Trainotti, Livio; Bonghi, Claudio; Ziosi, Vanina; Costa, Guglielmo
2012-09-01
Peach (Prunus persica var. laevis Gray) was chosen to unravel the molecular basis underlying the ability of spermidine (Sd) to influence fruit development and ripening. Field applications of 1 mM Sd on peach fruit at an early developmental stage, 41 days after full bloom (dAFB), i.e. at late stage S1, led to a slowing down of fruit ripening. At commercial harvest (125 dAFB, S4II) Sd-treated fruits showed a reduced ethylene production and flesh softening. The endogenous concentration of free and insoluble conjugated polyamines (PAs) increased (0.3-2.6-fold) 1 day after treatment (short-term response) but soon declined to control levels; starting from S3/S4, when soluble conjugated forms increased (up to five-fold relative to controls at ripening), PA levels became more abundant in treated fruits (long-term response). Real-time reverse transcription-polymerase chain reaction analyses revealed that peaks in transcript levels of fruit developmental marker genes were shifted ahead, in accord with a developmental slowing down. At ripening (S4I-S4II) the upregulation of the ethylene biosynthetic genes ACO1 and ACS1 was dramatically counteracted by Sd, and this led to a strong downregulation of genes responsible for fruit softening, such as PG and PMEI. Auxin-related gene expression was also altered both in the short term (TRPB) and in the long term (GH3, TIR1 and PIN1), indicating that auxin plays different roles during development and ripening processes. Messenger RNA amounts of other hormone-related ripening-regulated genes, such as NCED and GA2-OX, were strongly downregulated at maturity. Results suggest that Sd interferes with fruit development/ripening by interacting with multiple hormonal pathways. PMID:22409726
NASA Astrophysics Data System (ADS)
Ausloos, Marcel
2015-06-01
Diffusion of knowledge is expected to be huge when agents are open minded. The report concerns a more difficult diffusion case, when communities are made of stubborn agents. Communities having markedly different opinions are, for example, the Neocreationist and Intelligent Design Proponents (IDP), on one hand, and the Darwinian Evolution Defenders (DED), on the other hand. The case of knowledge diffusion within such communities is studied here on a network based on an adjacency matrix built from time-ordered selected quotations of agents, both inter- and intra-community. The network is intrinsically directed and not necessarily reciprocal. Thus, the adjacency matrices have complex eigenvalues, and the eigenvectors present complex components. A quantification of the slow-down or speed-up effects of information diffusion in such temporal networks, with non-Markovian contact sequences, can be made by comparing the real time-dependent (directed) network to its counterpart, the time-aggregated (undirected) network, which has real eigenvalues. In order to do so, small-world networks containing an odd number of nodes are studied and compared to similar networks with an even number of nodes. It is found that (i) the diffusion of knowledge is more difficult on the largest networks; (ii) the network size influences the slowing-down or speeding-up diffusion process. Interestingly, it is observed that (iii) the diffusion of knowledge is slower in IDP and faster in DED communities. It is suggested that the finding can be "rationalized" if some "scientific quality" and "publication habit" are attributed to the agents, as common sense would suggest. This finding offers some opening discussion toward tying scientific knowledge to belief.
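The contrast the abstract draws, a directed (non-reciprocal) adjacency matrix with complex eigenvalues versus its undirected, aggregated counterpart with real ones, can be checked on a toy network. The 3-node directed cycle below is an illustrative stand-in for the quotation-based network, not the paper's data.

```python
import numpy as np

# Toy 3-node directed contact network with no reciprocated edges,
# standing in for the quotation-based adjacency matrix.
A_directed = np.array([[0, 1, 0],
                       [0, 0, 1],
                       [1, 0, 0]], dtype=float)

# Time-aggregated counterpart: direction is dropped (undirected graph).
A_aggregated = ((A_directed + A_directed.T) > 0).astype(float)

ev_dir = np.linalg.eigvals(A_directed)     # complex in general
ev_agg = np.linalg.eigvalsh(A_aggregated)  # real: matrix is symmetric

print(np.iscomplex(ev_dir).any())   # the directed cycle yields complex pairs
print(np.isreal(ev_agg).all())
```

The directed cycle's eigenvalues are the cube roots of unity, two of which are complex, while symmetrizing guarantees a real spectrum; comparing the two spectra is exactly the kind of diagnostic the abstract describes.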
Information slows down hierarchy growth
NASA Astrophysics Data System (ADS)
Czaplicka, Agnieszka; Suchecki, Krzysztof; Miñano, Borja; Trias, Miquel; Hołyst, Janusz A.
2014-06-01
We consider models of growing multilevel systems wherein the growth process is driven by rules of tournament selection. A system can be conceived as an evolving tree with a new node being attached to a contestant node at the best hierarchy level (a level nearest to the tree root). The proposed evolution reflects limited information on system properties available to new nodes. It can also be expressed in terms of population dynamics. Two models are considered: a constant tournament (CT) model wherein the number of tournament participants is constant throughout system evolution, and a proportional tournament (PT) model where this number increases proportionally to the growing size of the system itself. The results of analytical calculations based on a rate equation fit the numerical simulations well for both models. In the CT model all hierarchy levels emerge, but the birth time of a consecutive hierarchy level increases exponentially or faster for each new level. The number of nodes at the first hierarchy level grows logarithmically in time, while the size of the last, "worst" hierarchy level oscillates quasi-log-periodically. In the PT model, the occupations of the first two hierarchy levels increase linearly, but worse hierarchy levels either do not emerge at all or appear only by chance in the early stage of system evolution and then stop growing altogether. The results allow us to conclude that information available to each new node in tournament dynamics restrains the emergence of new hierarchy levels and that it is the absolute amount of information, not relative, which governs such behavior.
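The CT growth rule described above can be simulated in a few lines. This is a sketch of the stated rule (sample k existing nodes, attach to the contestant nearest the root); parameter values and implementation details are assumptions, not the paper's exact setup.

```python
import random

def grow_tree(n_nodes, k, seed=0):
    """Constant-tournament (CT) growth sketch: each new node samples k
    existing nodes uniformly and attaches to the contestant at the best
    (lowest) hierarchy level."""
    rng = random.Random(seed)
    levels = [0]                           # root sits at hierarchy level 0
    for _ in range(n_nodes - 1):
        contestants = [rng.randrange(len(levels)) for _ in range(k)]
        winner = min(contestants, key=lambda i: levels[i])
        levels.append(levels[winner] + 1)  # child is one level below winner
    return levels

levels = grow_tree(10_000, k=3)
print(levels.count(1), max(levels))  # first-level occupation, deepest level
```

Tracking `levels.count(1)` as the tree grows is one way to observe the logarithmic growth of the first hierarchy level that the abstract reports.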
NASA Astrophysics Data System (ADS)
Huhn, Oliver; Rhein, Monika; Hoppema, Mario; van Heuven, Steven
2013-06-01
We use a 27 year long time series of repeated transient tracer observations to investigate the evolution of the ventilation time scales and the related content of anthropogenic carbon (Cant) in deep and bottom water in the Weddell Sea. This time series consists of chlorofluorocarbon (CFC) observations from 1984 to 2008 together with first combined CFC and sulphur hexafluoride (SF6) measurements from 2010/2011 along the Prime Meridian in the Antarctic Ocean and across the Weddell Sea. Applying the Transit Time Distribution (TTD) method we find that all deep water masses in the Weddell Sea have been continually growing older and getting less ventilated during the last 27 years. The decline of the ventilation rate of Weddell Sea Bottom Water (WSBW) and Weddell Sea Deep Water (WSDW) along the Prime Meridian is in the order of 15-21%; the Warm Deep Water (WDW) ventilation rate declined much faster by 33%. About 88-94% of the age increase in WSBW near its source regions (1.8-2.4 years per year) is explained by the age increase of WDW (4.5 years per year). As a consequence of the aging, the Cant increase in the deep and bottom water formed in the Weddell Sea slowed down by 14-21% over the period of observations.
NASA Astrophysics Data System (ADS)
Romano, C.; Danon, Y.; Block, R.; Thompson, J.; Blain, E.; Bond, E.
2010-01-01
A new method of measuring fission fragment mass and energy distributions as a function of incident neutron energy in the range from below 0.1 eV to 1 keV has been developed. The method involves placing a double-sided Frisch-gridded fission chamber in Rensselaer Polytechnic Institute's lead slowing-down spectrometer (LSDS). The high neutron flux of the LSDS allows for the measurement of the energy-dependent, neutron-induced fission cross sections simultaneously with the mass and kinetic energy of the fission fragments of various small samples. The samples may be isotopes that are not available in large quantities (submicrograms) or with small fission cross sections (microbarns). The fission chamber consists of two anodes shielded by Frisch grids on either side of a single cathode. The sample is located in the center of the cathode and is made by depositing small amounts of actinides on very thin films. The chamber was successfully tested and calibrated using 0.41±0.04 ng of 252Cf and the resulting mass distributions were compared to those of previous work. As a proof of concept, the chamber was placed in the LSDS to measure the neutron-induced fission cross section and fragment mass and energy distributions of 25.3±0.5 μg of 235U. Changes in the mass distributions as a function of incident neutron energy are evident and are examined using the multimodal fission mode model.
NASA Technical Reports Server (NTRS)
Berger, M. J.; Seltzer, S. M.; Maeda, K.
1972-01-01
The penetration, diffusion and slowing down of electrons in a semi-infinite air medium has been studied by the Monte Carlo method. The results are applicable to the atmosphere at altitudes up to 300 km. Most of the results pertain to monoenergetic electron beams injected into the atmosphere at a height of 300 km, either vertically downwards or with a pitch-angle distribution isotropic over the downward hemisphere. Some results were also obtained for various initial pitch angles between 0 deg and 90 deg. Information has been generated concerning the following topics: (1) the backscattering of electrons from the atmosphere, expressed in terms of backscattering coefficients, angular distributions and energy spectra of reflected electrons, for incident energies T(o) between 2 keV and 2 MeV; (2) energy deposition by electrons as a function of the altitude, down to 80 km, for T(o) between 2 keV and 2 MeV; (3) the corresponding energy deposition by electron-produced bremsstrahlung, down to 30 km; (4) the evolution of the electron flux spectrum as a function of the atmospheric depth, for T(o) between 2 keV and 20 keV. Energy deposition results are given for incident electron beams with exponential and power-exponential spectra.
Moore, M.S.; Koehler, P.E.; Michaudon, A.; Schelberg, A.; Danon, Y.; Block, R.C.; Slovacek, R.E.; Hoff, R.W.; Lougheed, R.W.
1990-01-01
In November 1989, we carried out a measurement of the fission cross section of 247Cm, 250Cf, and 254Es on the Rensselaer Intense Neutron Source (RINS) at Rensselaer Polytechnic Institute (RPI). In July 1990, we carried out a second measurement, using the same fission chamber and electronics, in beam geometry at the Los Alamos Neutron Scattering Center (LANSCE) facility. Using the relative count rates observed in the two experiments, and the flux-enhancement factors determined by the RPI group for a lead slowing-down spectrometer compared to beam geometry, we can assess the performance of a spectrometer similar to RINS, driven by the Proton Storage Ring (PSR) at the Los Alamos National Laboratory. With such a spectrometer, we find that it is feasible to make measurements with samples of 1 ng for fission, 1 μg for capture, and of isotopes with half-lives of tens of minutes. It is important to note that, while a significant amount of information can be obtained from the low-resolution RINS measurement, a definitive determination of average properties, including the level density, requires that the resonance structure be resolved. 12 refs., 5 figs., 3 tabs.
Rat, Dorothea; Schmitt, Ulrich; Tippmann, Frank; Dewachter, Ilse; Theunis, Clara; Wieczerzak, Ewa; Postina, Rolf; van Leuven, Fred; Fahrenholz, Falk; Kojro, Elzbieta
2011-01-01
Pituitary adenylate cyclase-activating polypeptide (PACAP) has neuroprotective and neurotrophic properties and is a potent α-secretase activator. As PACAP peptides and their specific receptor PAC1 are localized in central nervous system areas affected by Alzheimer's disease (AD), this study aims to examine the role of the natural peptide PACAP as a valuable approach in AD therapy. We investigated the effect of PACAP in the brain of an AD transgenic mouse model. The long-term intranasal daily PACAP application stimulated the nonamyloidogenic processing of amyloid precursor protein (APP) and increased expression of the brain-derived neurotrophic factor and of the antiapoptotic Bcl-2 protein. In addition, it caused a strong reduction of the amyloid β-peptide (Aβ) transporter receptor for advanced glycation end products (RAGE) mRNA level. PACAP, by activation of the somatostatin-neprilysin cascade, also enhanced expression of the Aβ-degrading enzyme neprilysin in the mouse brain. Furthermore, daily PAC1-receptor activation via PACAP resulted in an increased mRNA level of both the PAC1 receptor and its ligand PACAP. Our behavioral studies showed that long-term PACAP treatment of APP[V717I]-transgenic mice improved cognitive function in animals. Thus, nasal application of PACAP was effective, and our results indicate that PACAP could be of therapeutic value in treating AD.—Rat, D., Schmitt, U., Tippmann, F., Dewachter, I., Theunis, C., Wieczerzak, E, Postina, R., van Leuven, F., Fahrenholz, F., Kojro, E. Neuropeptide pituitary adenylate cyclase-activating polypeptide (PACAP) slows down Alzheimer's disease-like pathology in amyloid precursor protein-transgenic mice. PMID:21593432
Kobayashi, Katsuhei; Yamamoto, Shuji; Lee, Samyol; Cho, Hyun-Je; Yamana, Hajimu; Moriyama, Hirotake; Fujita, Yoshiaki; Mitsugashira, Toshiaki
2001-11-15
Use is made of a back-to-back type of double fission chamber and an electron linear accelerator-driven lead slowing-down spectrometer to measure the neutron-induced fission cross sections of 229Th and 231Pa below 10 keV relative to that of 235U. A measurement relative to the 10B(n,α) reaction is also made using a BF3 counter at energies below 1 keV and normalized to the absolute value obtained by using the cross section of the 235U(n,f) reaction between 200 eV and 1 keV. The experimental data of the 229Th(n,f) reaction, which was measured by Konakhovich et al., show higher cross-section values, especially at energies of 0.1 to 0.4 eV. The data by Gokhberg et al. seem to be lower than the current measurement above 6 keV. Although the evaluated data in JENDL-3.2 are in general agreement with the measurement, the evaluation is higher from 0.25 to 5 eV and lower above 10 eV. The ENDF/B-VI data evaluated above 10 eV are also lower. The current thermal neutron-induced fission cross section at 0.0253 eV is 32.4 ± 10.7 b, which is in good agreement with results of Gindler et al., Mughabghab, and JENDL-3.2. The mean value of the 231Pa(n,f) cross sections between 0.37 and 0.52 eV, which were measured by Leonard and Odegaarden, is close to the current measurement. The evaluated data in ENDF/B-VI are lower below 0.15 eV and higher above ~30 eV. The ENDF/B-VI and JEF-2.2 evaluations are much higher above 1 keV. The JENDL-3.2 data are in general agreement with the measurement, although they are lower above ~100 eV.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318
Melacci, Stefano; Gori, Marco
2013-11-01
Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization. PMID:24051728
Sparse representation with kernels.
Gao, Shenghua; Tsang, Ivor Wai-Hung; Chia, Liang-Tien
2013-02-01
Recent research has shown the initial success of sparse coding (Sc) in solving many computer vision tasks. Motivated by the fact that the kernel trick can capture the nonlinear similarity of features, which helps in finding a sparse representation of nonlinear features, we propose kernel sparse representation (KSR). Essentially, KSR is a sparse coding technique in a high-dimensional feature space mapped by an implicit mapping function. We apply KSR to feature coding in image classification, face recognition, and kernel matrix approximation. More specifically, by incorporating KSR into spatial pyramid matching (SPM), we develop KSRSPM, which achieves a good performance for image classification. Moreover, KSR-based feature coding can be shown to be a generalization of the efficient match kernel and an extension of Sc-based SPM. We further show that our proposed KSR using a histogram intersection kernel (HIK) can be considered a soft assignment extension of HIK-based feature quantization in the feature coding process. Beyond feature coding, compared with sparse coding, KSR can learn more discriminative sparse codes and achieve higher accuracy for face recognition. Moreover, KSR can also be applied to kernel matrix approximation in large-scale learning tasks, and it demonstrates its robustness to kernel matrix approximation, especially when a small fraction of the data is used. Extensive experimental results demonstrate promising results of KSR in image classification, face recognition, and kernel matrix approximation. All these applications prove the effectiveness of KSR in computer vision and machine learning tasks. PMID:23014744
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.
Robotic Intelligence Kernel: Communications
Walton, Mike C.
2009-09-16
The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.
Robotic Intelligence Kernel: Driver
2009-09-16
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart, K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix from a cleverly sampled subset of its columns using the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning algorithm can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and its performance-boosting properties.
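Not part of the abstract: the Nyström "virtual sample" construction it describes can be sketched in a few lines. All names (`rbf_kernel`, `virtual_samples`), the choice of RBF kernel and the `gamma` value are illustrative assumptions, not the paper's code; when the landmarks are taken to be the full training set the reconstruction is exact, which serves as a sanity check.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def virtual_samples(X, landmarks, gamma=0.5, eps=1e-10):
    """Nystrom-style 'virtual samples' F with F @ F.T ~ K(X, X).

    C = K(X, L), W = K(L, L); F = C @ W^{-1/2}, computed via the
    eigendecomposition of the small landmark matrix W.
    """
    C = rbf_kernel(X, landmarks, gamma)          # n x m
    W = rbf_kernel(landmarks, landmarks, gamma)  # m x m
    vals, vecs = np.linalg.eigh(W)
    keep = vals > eps
    W_inv_sqrt = vecs[:, keep] / np.sqrt(vals[keep])
    return C @ W_inv_sqrt                        # n x m virtual samples

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
F = virtual_samples(X, X)          # all points as landmarks: exact case
K = rbf_kernel(X, X)
print(np.allclose(F @ F.T, K))     # True when landmarks = X
```

Any linear dictionary learning scheme can then be run on the columns of `F`, which is the "linearization" the abstract refers to.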
LeFebvre, W.
1994-08-01
For many years, the popular program top has aided system administrators in the examination of process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also with what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.
Calculates Thermal Neutron Scattering Kernel.
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
Can cooperation slow down emergency evacuations?
NASA Astrophysics Data System (ADS)
Cirillo, Emilio N. M.; Muntean, Adrian
2012-09-01
We study the motion of pedestrians through obscure corridors where the lack of visibility hides the precise position of the exits. Using a lattice model, we explore the effects of cooperation on the overall exit flux (evacuation rate). More precisely, we study the effect of the buddying threshold (of no exclusion per site) on the dynamics of the crowd. In some cases, we note that if the evacuees tend to cooperate and act altruistically, then their collective action tends to favor the occurrence of disasters.
[Demography: can growth be slowed down?].
1990-01-01
The UN Fund for Population Activities report on the status of world population in 1990 is particularly unsettling because it indicates that fertility is not declining as rapidly as had been predicted. The world population of some 5.3 billion is growing by 90-100 million per year. 6 years ago the growth rate appeared to be declining everywhere except in Africa and some regions of South Asia. Hopes that the world population would stabilize at around 10.2 billion by the end of the 21st century now appear unrealistic. Some countries such as the Philippines, India, and Morocco which had some success in slowing growth in the 1960s and 70s have seen a significant deceleration in the decline. Growth rates in several African countries are already 2.7% per year and increasing. It is projected that Africa's population will reach 1.581 billion by 2025. Already there are severe shortages of arable land in some overwhelmingly agricultural countries like Rwanda and Burundi, and malnutrition is widespread on the continent. Between 1979-81 and 1986-87, cereal production declined in 25 African countries out of 43 for which the Food and Agriculture Organization has data. The urban population of developing countries is increasing at 3.6%/year. It grew from 285 million in 1950 to 1.384 billion today and is projected at 4.050 billion in 2050. Provision of water, electricity, and sanitary services will be very difficult. From 1970-88 the number of urban households without potable water increased from 138 million to 215 million. It is not merely the quality of life that is menaced by constant population growth, but also the very future of the earth as a habitat, because of the degradation of soils and forests and resulting global warming. 6-7 million hectares of agricultural land are believed to be lost to erosion each year. Deforestation is a principal cause of soil erosion.
Each year more than 11 million hectares of tropical forest and forested zones are stripped, in addition to some 4.4 million hectares selectively harvested for lumber. Deforestation contributes to global warming and to deterioration of the ozone layer. Consequences of global warming by the middle of the next century may include desertification of entire countries, raising of the level of the oceans, and submersion of certain countries. To avert demographic and ecologic disaster, the geographic and financial access of women in developing countries to contraception should be improved, and some neglected groups such as adolescents should be brought into family planning programs. The condition of women must be improved so that they have access to a source of status other than motherhood. PMID:12283630
Robotic Intelligence Kernel: Architecture
2009-09-16
The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.
NASA Technical Reports Server (NTRS)
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
Robotic Intelligence Kernel: Visualization
2009-09-16
The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.
NASA Astrophysics Data System (ADS)
Brandenburg, Jens; Müller, Jens; Schlueter, John A.
2012-02-01
We investigate the dynamics of correlated charge carriers in the vicinity of the Mott metal-insulator (MI) transition in the quasi-two-dimensional organic charge-transfer salt κ-(D8-BEDT-TTF)2Cu[N(CN)2]Br by means of fluctuation (noise) spectroscopy. The observed 1/f-type fluctuations are quantitatively very well described by a phenomenological model based on the concept of non-exponential kinetics. The main result is a correlation-induced enhancement of the fluctuations accompanied by a substantial shift of spectral weight to low frequencies in the vicinity of the Mott critical endpoint. This sudden slowing down of the electron dynamics, observed here in a pure Mott system, may be a universal feature of MI transitions. Our findings are compatible with an electronic phase separation in the critical region of the phase diagram and offer an explanation for the not yet understood absence of effective mass enhancement when crossing the Mott transition.
Kernel optimization in discriminant analysis.
You, Di; Hamsici, Onur C; Martinez, Aleix M
2011-03-01
Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results, using a large number of databases and classifiers, demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072
MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje
2016-04-01
We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximative methods (Dahlen et al. 2000). Fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows control of the desired precision of the kernel estimation. It also means that the code concentrates calculation effort on regions of interest without prior assumptions on the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency-pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.
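A generic illustration, not taken from the MC Kernel code: Monte-Carlo integration with a precision-control loop, the idea the abstract describes for projecting kernels onto basis functions to a desired accuracy. The function name, batch size and tolerance are assumptions of this sketch, and a 1-D Gaussian integrand stands in for a sensitivity kernel.

```python
import numpy as np

def mc_integrate(f, lo, hi, rel_tol=0.01, batch=10000, seed=0, max_iter=200):
    """Monte-Carlo integration with precision control: keep drawing batches
    until the standard error of the estimate falls below rel_tol * |estimate|."""
    rng = np.random.default_rng(seed)
    vol = hi - lo
    samples = np.empty(0)
    for _ in range(max_iter):
        x = rng.uniform(lo, hi, batch)
        samples = np.concatenate([samples, vol * f(x)])
        est = samples.mean()
        sem = samples.std(ddof=1) / np.sqrt(len(samples))
        if sem < rel_tol * abs(est):
            break
    return est, sem

# Integral of exp(-x^2) over the real line is sqrt(pi) ~ 1.77
est, sem = mc_integrate(lambda x: np.exp(-x ** 2), -5.0, 5.0)
print(round(est, 2), "+/-", round(sem, 3))
```

The same stopping rule, applied per basis function, lets the estimator spend samples only where the kernel still has uncertain structure.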
Lee, Myung Hee; Liu, Yufeng
2013-12-01
The continuum regression technique provides an appealing regression framework connecting ordinary least squares, partial least squares and principal component regression in one family. It offers some insight on the underlying regression model for a given application. Moreover, it helps to provide deep understanding of various regression techniques. Despite the useful framework, however, the current development on continuum regression is only for linear regression. In many applications, nonlinear regression is necessary. The extension of continuum regression from linear models to nonlinear models using kernel learning is considered. The proposed kernel continuum regression technique is quite general and can handle very flexible regression model estimation. An efficient algorithm is developed for fast implementation. Numerical examples have demonstrated the usefulness of the proposed technique. PMID:24058224
Kernel Phase and Kernel Amplitude in Fizeau Imaging
NASA Astrophysics Data System (ADS)
Pope, Benjamin J. S.
2016-09-01
Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels.
Jayasumana, Sadeep; Hartley, Richard; Salzmann, Mathieu; Li, Hongdong; Harandi, Mehrtash
2015-12-01
In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels. PMID:26539851
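As a hedged sketch of the idea, not the authors' implementation: a Gaussian RBF built from the log-Euclidean metric on SPD matrices, one of the metrics for which such kernels are known to be positive definite. The helper names and the `gamma` value are assumptions of this sketch.

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of a symmetric positive definite matrix via eigh."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.log(vals)) @ vecs.T

def log_euclidean_rbf(mats, gamma=0.1):
    """Gaussian RBF Gram matrix on SPD matrices with the log-Euclidean
    metric: k(X, Y) = exp(-gamma * ||log(X) - log(Y)||_F^2)."""
    logs = [spd_log(M) for M in mats]
    n = len(mats)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-gamma * np.linalg.norm(logs[i] - logs[j]) ** 2)
    return K

rng = np.random.default_rng(1)
spd = [a @ a.T + 4 * np.eye(4) for a in rng.normal(size=(15, 4, 4))]
K = log_euclidean_rbf(spd)
print(np.linalg.eigvalsh(K).min() > -1e-8)   # Gram matrix is positive (semi)definite
```

Because the matrix log embeds the SPD manifold into a Euclidean space, the resulting Gram matrix can be fed to any kernel machine (SVM, kernel PCA, ...) exactly as the abstract describes.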
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 8 2011-01-01 2011-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...
Cusp Kernels for Velocity-Changing Collisions
NASA Astrophysics Data System (ADS)
McGuyer, B. H.; Marsland, R., III; Olsen, B. A.; Happer, W.
2012-05-01
We introduce an analytical kernel, the “cusp” kernel, to model the effects of velocity-changing collisions on optically pumped atoms in low-pressure buffer gases. Like the widely used Keilson-Storer kernel [J. Keilson and J. E. Storer, Q. Appl. Math. 10, 243 (1952)], cusp kernels are characterized by a single parameter and preserve a Maxwellian velocity distribution. Cusp kernels and their superpositions are more useful than Keilson-Storer kernels, because they are more similar to real kernels inferred from measurements or theory and are easier to invert to find steady-state velocity distributions.
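The Maxwellian-preserving property mentioned here can be checked numerically. This sketch uses the standard form of the Keilson-Storer kernel with assumed parameter values (`alpha`, thermal speed `u = 1`) and a simple quadrature grid; it is an illustration, not code from the paper.

```python
import numpy as np

def keilson_storer(v, vp, alpha=0.6, u=1.0):
    """Keilson-Storer collision kernel W(v <- v'), a Gaussian in v - alpha*v'
    normalized so that it integrates to 1 over v."""
    s2 = u * u * (1.0 - alpha * alpha)
    return np.exp(-(v - alpha * vp) ** 2 / s2) / np.sqrt(np.pi * s2)

v = np.linspace(-8.0, 8.0, 2001)
dv = v[1] - v[0]
M = np.exp(-v ** 2) / np.sqrt(np.pi)              # Maxwellian for u = 1
W = keilson_storer(v[:, None], v[None, :])
M_out = (W * M[None, :]).sum(axis=1) * dv          # integrate over v'
print(np.max(np.abs(M_out - M)))                   # ~0: Maxwellian is preserved
```

The single parameter `alpha` interpolates between strong collisions (alpha = 0, complete thermalization in one step) and weak collisions (alpha near 1).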
Domain transfer multiple kernel learning.
Duan, Lixin; Tsang, Ivor W; Xu, Dong
2012-03-01
Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods. PMID:21646679
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity, and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
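For context only: the abstract says the method modifies the standard EM algorithm for Gaussian mixtures, so here is that baseline in one dimension on synthetic data (this is not the paper's feature-space algorithm). The quantile initialization and all constants are assumptions of the sketch.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """Standard EM for a 1-D Gaussian mixture: the baseline the paper adapts."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    return w, mu, var

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])
w, mu, var = em_gmm_1d(x)
print(np.sort(mu))   # close to the true component means 0 and 6
```

The paper's contribution is to run this kind of fit in the kernel-induced feature space and read the density back on the data manifold.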
Technology Transfer Automated Retrieval System (TEKTRAN)
Oat (Avena sativa L.) kernels appear to contain much higher polar lipid concentrations than other plant tissues. We have extracted, identified, and quantified polar lipids from 18 oat genotypes grown in replicated plots in three environments in order to determine genotypic or environmental variation...
Accelerating the Original Profile Kernel
Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard
2013-01-01
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender in terms of the speed/performance trade-off. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel. PMID:23825697
Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
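A minimal frequency-domain sketch of the idea, not the patented device's algorithm: the classical Wiener restoration kernel conj(H)/(|H|^2 + NSR) applied to a synthetically blurred test scene, assuming a simple box PSF and a known noise-to-signal power ratio.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener restoration: apply the kernel
    conj(H)/(|H|^2 + NSR) to the degraded image; nsr is the assumed
    noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)       # Optical Transfer Function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener restoration kernel
    return np.real(np.fft.ifft2(W * G))

img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0    # simple test scene
psf = np.ones((5, 5)) / 25.0                          # 5x5 box blur PSF
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_restore(blurred, psf, nsr=1e-4)
err_blur = np.abs(blurred - img).mean()
err_rest = np.abs(restored - img).mean()
print(err_rest < err_blur)   # restoration reduces the error
```

In an adaptive variant, the NSR term would vary with local image statistics rather than being a single global constant as in this sketch.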
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...
The energy-dependent electron loss model for pencil beam dose kernels
NASA Astrophysics Data System (ADS)
Chvetsov, Alexei V.; Sandison, George A.; Yeboah, Collins
2000-10-01
The 'monoenergetic' electron loss model was derived in a previous work to account for pathlength straggling in the Fermi-Eyges pencil beam problem. In this paper, we extend this model to account for energy-loss straggling and secondary knock-on electron transport in order to adequately predict a depth dose curve. To model energy-loss straggling, we use a weighted superposition of a discrete number of monoenergetic pencil beams with different initial energies where electrons travel along the depth-energy characteristics in the continuous slowing down approximation (CSDA). The energy straggling spectrum at depth determines the weighting assigned to each monoenergetic pencil beam. Supplemented by a simple transport model for the secondary knock-on electrons, the 'energy-dependent' electron loss model predicts both lateral and depth dose distributions from the electron pencil beams in good agreement with Monte Carlo calculations and measurements. The calculation of dose distribution from a pencil beam takes 0.2 s on a Pentium III 500 MHz computer. Being computationally fast, the 'energy-dependent' electron loss model can be used for the calculation of 3D energy deposition kernels in dose optimization schemes without using precalculated or measured data.
Kernel Near Principal Component Analysis
MARTIN, SHAWN B.
2002-07-01
We propose a novel algorithm based on Principal Component Analysis (PCA). First, we present an interesting approximation of PCA using Gram-Schmidt orthonormalization. Next, we combine our approximation with the kernel functions from Support Vector Machines (SVMs) to provide a nonlinear generalization of PCA. After benchmarking our algorithm in the linear case, we explore its use in both the linear and nonlinear cases. We include applications to face data analysis, handwritten digit recognition, and fluid flow.
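For comparison with the Gram-Schmidt approximation described above, here is standard kernel PCA (not the authors' algorithm): center the Gram matrix, eigendecompose, and project the training data onto the top components. The kernel choice and data sizes are illustrative.

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Standard kernel PCA: double-center the Gram matrix, eigendecompose,
    and return projections of the training data onto the top components."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    return Kc @ vecs / np.sqrt(vals)      # projected coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
K = (X @ X.T + 1.0) ** 2                  # polynomial kernel of degree 2
Z = kernel_pca(K, n_components=3)
cov = Z.T @ Z
print(np.allclose(cov, np.diag(np.diag(cov))))   # components are uncorrelated
```

The Gram-Schmidt variant in the abstract approximates these same components while avoiding a full eigendecomposition.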
Derivation of aerodynamic kernel functions
NASA Technical Reports Server (NTRS)
Dowell, E. H.; Ventres, C. S.
1973-01-01
The method of Fourier transforms is used to determine the kernel function which relates the pressure on a lifting surface to the prescribed downwash within the framework of Dowell's (1971) shear flow model. This model is intended to improve upon the potential flow aerodynamic model by allowing for the aerodynamic boundary layer effects neglected in the potential flow model. For simplicity, incompressible, steady flow is considered. The proposed method is illustrated by deriving known results from potential flow theory.
Kernel CMAC with improved capability.
Horváth, Gábor; Szabó, Tamás
2007-02-01
The cerebellar model articulation controller (CMAC) has some attractive features, namely fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several questions remain open even today. The most important ones concern its modeling and generalization capabilities. The limits of its modeling capability were addressed in the literature, and recently, certain questions of its generalization property were also investigated. This paper deals with both the modeling and the generalization properties of CMAC. First, a new interpolation model is introduced. Then, a detailed analysis of the generalization error is given, and an analytical expression of this error for some special cases is presented. It is shown that this generalization error can be rather significant, and a simple regularized training algorithm to reduce this error is proposed. The results related to the modeling capability show that there are differences between the one-dimensional (1-D) and the multidimensional versions of CMAC. This paper discusses the reasons for this difference and suggests a new kernel-based interpretation of CMAC. The kernel interpretation gives a unified framework. Applying this approach, both the 1-D and the multidimensional CMACs can be constructed with similar modeling capability. Finally, this paper shows that the regularized training algorithm can be applied to the kernel interpretations too, which results in a network with significantly improved approximation capabilities. PMID:17278566
RKRD: Runtime Kernel Rootkit Detection
NASA Astrophysics Data System (ADS)
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.
Visualizing and Interacting with Kernelized Data.
Barbosa, A; Paulovich, F V; Paiva, A; Goldenstein, S; Petronetto, F; Nonato, L G
2016-03-01
Kernel-based methods have experienced substantial progress in recent years, turning out to be an essential mechanism for data classification, clustering and pattern recognition. The effectiveness of kernel-based techniques, though, depends largely on the capability of the underlying kernel to properly embed data in the feature space associated with the kernel. However, visualizing how a kernel embeds the data in a feature space is not so straightforward, as the embedding map and the feature space are implicitly defined by the kernel. In this work, we present a novel technique to visualize the action of a kernel, that is, how the kernel embeds data into a high-dimensional feature space. The proposed methodology relies on a solid mathematical formulation to map kernelized data onto a visual space. Our approach is faster and more accurate than most existing methods while still allowing interactive manipulation of the projection layout, a game-changing trait that other kernel-based projection techniques do not have. PMID:26829242
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach. PMID:24805227
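The explicit mapping the abstract describes can be sketched in a few lines: eigendecompose the kernel matrix K = U diag(lam) U^T and use U·sqrt(lam) as coordinates in a reduced-dimensional kernel space. The Gaussian kernel, rank tolerance, and toy data below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Sketch of the nonlinear projection trick: explicit coordinates in
# kernel space from the eigendecomposition of the kernel matrix.

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nonlinear_projection(X_train, X_test, gamma=1.0):
    K = rbf_kernel(X_train, X_train, gamma)
    lam, U = np.linalg.eigh(K)            # K = U diag(lam) U^T
    keep = lam > 1e-10                    # effective dimensionality
    lam, U = lam[keep], U[:, keep]
    Z_train = U * np.sqrt(lam)            # explicit training coordinates
    # A new point x maps to diag(lam)^{-1/2} U^T k(x), where k(x) is
    # its kernel vector against the training samples.
    Z_test = rbf_kernel(X_test, X_train, gamma) @ U / np.sqrt(lam)
    return Z_train, Z_test

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Z_train, Z_test = nonlinear_projection(X, X[:2])
# Dot products of the mapped coordinates reproduce the kernel matrix:
assert np.allclose(Z_train @ Z_train.T, rbf_kernel(X, X))
```

Because the mapped coordinates reproduce the kernel by plain dot products, any algorithm that works on raw vectors can now be applied, which is the point of the trick.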
Image texture analysis of crushed wheat kernels
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.
1992-03-01
The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels for 17 wheat varieties was collected after testing and crushing with a single kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize texture or spatial distribution of gray levels of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images were different depending on class, hardness and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
Molecular Hydrodynamics from Memory Kernels.
Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin
2016-04-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730
KERNEL PHASE IN FIZEAU INTERFEROMETRY
Martinache, Frantz
2010-11-20
The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure-phase-like information can be extracted from any direct image, even taken with a redundant aperture. These new phase-noise immune observable quantities, called kernel phases, are determined a priori from the knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method as it clearly detects and locates with milliarcsecond precision a known companion to a star at angular separation less than the diffraction limit.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2013 CFR
2013-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
Code of Federal Regulations, 2014 CFR
2014-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than...
Corn kernel oil and corn fiber oil
Technology Transfer Automated Retrieval System (TEKTRAN)
Unlike most edible plant oils that are obtained directly from oil-rich seeds by either pressing or solvent extraction, corn seeds (kernels) have low levels of oil (4%) and commercial corn oil is obtained from the corn germ (embryo) which is an oil-rich portion of the kernel. Commercial corn oil cou...
Bayesian Kernel Mixtures for Counts
Canale, Antonio; Dunson, David B.
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
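The rounded-kernel idea above can be illustrated with a small sketch: a continuous Gaussian variable is thresholded onto the non-negative integers, and a mixture of such rounded kernels gives a flexible count distribution. The particular thresholds below (Z < 0 maps to 0, Z in [j-1, j) maps to j) are one simple assumed choice; the paper's framework allows general rounding grids and a nonparametric (infinite) mixture.

```python
from math import erf, sqrt

# Sketch of a rounded-Gaussian kernel for counts: threshold a
# continuous Gaussian variable Z onto the non-negative integers.

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def rounded_gaussian_pmf(y, mu, sigma):
    # P(Y = y) = P(a_y <= Z < a_{y+1}) with a_0 = -inf, a_j = j - 1.
    hi = norm_cdf((y - mu) / sigma)
    lo = 0.0 if y == 0 else norm_cdf((y - 1 - mu) / sigma)
    return hi - lo

def mixture_pmf(y, weights, mus, sigmas):
    # Finite mixture of rounded-Gaussian kernels (a two-component
    # mixture suffices to illustrate; the paper's is nonparametric).
    return sum(w * rounded_gaussian_pmf(y, m, s)
               for w, m, s in zip(weights, mus, sigmas))

# The induced count distribution is a proper pmf:
total = sum(mixture_pmf(y, [0.6, 0.4], [1.0, 5.0], [1.0, 2.0])
            for y in range(200))
assert abs(total - 1.0) < 1e-9
```

Unlike a Poisson mixture, the component variances here are free of the means, so underdispersed counts (variance below the mean) pose no problem.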
Slow Down to Brake: Effects of Tapering Epinephrine on Potassium.
Veerbhadran, Sivaprasad; Nayagam, Asher Ennis; Ramraj, Sandeep; Raghavan, Jaganathan
2016-07-01
Hyperkalemia is not an uncommon complication of cardiac surgical procedures. Intractable hyperkalemia is a difficult situation that can even lead to death. We report a postoperative case in which a sudden decrease in epinephrine led to intractable hyperkalemia and cardiac arrest. We wish to draw the reader's attention to the issue that sudden discontinuation of epinephrine can lead to dangerous hyperkalemia. PMID:27343526
Vitamin E slows down the progression of osteoarthritis
LI, XI; DONG, ZHONGLI; ZHANG, FUHOU; DONG, JUNJIE; ZHANG, YUAN
2016-01-01
Osteoarthritis is a chronic degenerative joint disorder with the characteristics of articular cartilage destruction, subchondral bone alterations and synovitis. Clinical signs and symptoms of osteoarthritis include pain, stiffness, restricted motion and crepitus. It is the major cause of joint dysfunction in developed nations and has enormous social and economic consequences. Current treatments focus on symptomatic relief, however, they lack efficacy in controlling the progression of this disease, which is a leading cause of disability. Vitamin E is safe to use and may delay the progression of osteoarthritis by acting on several aspects of the disease. In this review, how vitamin E may promote the maintenance of skeletal muscle and the regulation of nucleic acid metabolism to delay osteoarthritis progression is explored. In addition, how vitamin E may maintain the function of sex organs and the stability of mast cells, thus conferring a greater resistance to the underlying disease process is also discussed. Finally, the protective effect of vitamin E on the subchondral vascular system, which decreases the reactive remodeling in osteoarthritis, is reviewed. PMID:27347011
Misplaced helix slows down ultrafast pressure-jump protein folding.
Prigozhin, Maxim B; Liu, Yanxin; Wirth, Anna Jean; Kapoor, Shobhna; Winter, Roland; Schulten, Klaus; Gruebele, Martin
2013-05-14
Using a newly developed microsecond pressure-jump apparatus, we monitor the refolding kinetics of the helix-stabilized five-helix bundle protein λ*YA, the Y22W/Q33Y/G46,48A mutant of λ-repressor fragment 6-85, from 3 μs to 5 ms after a 1,200-bar P-drop. In addition to a microsecond phase, we observe a slower 1.4-ms phase during refolding to the native state. Unlike temperature denaturation, pressure denaturation produces a highly reversible helix-coil-rich state. This difference highlights the importance of the denatured initial condition in folding experiments and leads us to assign a compact nonnative helical trap as the reason for slower P-jump-induced refolding. To complement the experiments, we performed over 50 μs of all-atom molecular dynamics P-drop refolding simulations with four different force fields. Two of the force fields yield compact nonnative states with misplaced α-helix content within a few microseconds of the P-drop. Our overall conclusion from experiment and simulation is that the pressure-denatured state of λ*YA contains mainly residual helix and little β-sheet; following a fast P-drop, at least some λ*YA forms misplaced helical structure within microseconds. We hypothesize that nonnative helix at helix-turn interfaces traps the protein in compact nonnative conformations. These traps delay the folding of at least some of the population for 1.4 ms en route to the native state. Based on molecular dynamics, we predict specific mutations at the helix-turn interfaces that should speed up refolding from the pressure-denatured state, if this hypothesis is correct. PMID:23620522
Can Lionel Messi's brain slow down time passing?
Jafari, Sajad; Smith, Leslie Samuel
2016-01-01
It seems that seeing others in slow motion does not belong only to movie heroes. When Lionel Messi plays football, you can hardly see him do anything that other players cannot do. Why, then, is he really unstoppable? The answer may be that opponents do not have enough time to do what they want, because in Messi's neural system time passes more slowly. In differential equations that model a single neuron, this speed can be generated by multiplying an equal term into all equations; or maybe interactions between neurons and the structure of neural networks play this role. PMID:27010676
Does the Speed of Light Slow Down Over Time?
ERIC Educational Resources Information Center
Ebert, Ronald
1997-01-01
The speed of light is a fundamental characteristic of the universe. So many processes are related to and dependent upon it that, if creationist claims were true, the universe would be far different from the way it is now. The speed of light has never been shown to vary based on the direction from which it was measured. (PVD)
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
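A heavily hedged sketch of the mixture-density idea: fit a Gaussian mixture by EM, then let K(x, y) be the dot product of the posterior responsibility vectors of x and y, so two points are similar when the mixture assigns them to the same components. The paper builds ensembles of Bayesian mixture models; the single tiny 1-D, two-component EM fit below is only a stand-in for that machinery.

```python
import numpy as np

# Stand-in for a mixture-density kernel: posterior-agreement kernel
# from a small Gaussian mixture fit by EM.

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])

w = np.array([0.5, 0.5])          # mixture weights
mu = np.array([-1.0, 1.0])        # component means (initial guesses)
var = np.array([1.0, 1.0])        # component variances
for _ in range(50):
    dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)   # E-step responsibilities
    nk = r.sum(axis=0)                           # M-step
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

def posterior(z):
    d = w * np.exp(-(z - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return d / d.sum()

def mixture_kernel(a, b):
    # Dot product of responsibility vectors: a valid (PSD) kernel.
    return float(posterior(a) @ posterior(b))

# Same-cluster pairs score near 1, cross-cluster pairs near 0:
assert mixture_kernel(-2.0, -1.8) > 0.9
assert mixture_kernel(-2.0, 2.0) < 0.1
```

Because the kernel is an inner product of probability vectors it is automatically symmetric positive semidefinite, which is what makes it usable as a Mercer kernel.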
Huang, Lulu; Massa, Lou
2010-01-01
The Kernel Energy Method (KEM) provides a way to calculate the ab-initio energy of very large biological molecules. The results are accurate, and the computational time reduced. However, by use of a list of double kernel interactions a significant additional reduction of computational effort may be achieved, still retaining ab-initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked, as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy, and a further significant reduction in computational effort results. A KEM molecular energy calculation based upon the HF/STO-3G chemical model is applied to the protein insulin, as an illustration. PMID:21243065
NASA Astrophysics Data System (ADS)
Buick, Otto; Falcon, Pat; Alexander, G.; Siegel, Edward Carl-Ludwig
2013-03-01
Einstein[Dover(03)] critical-slowing-down(CSD)[Pais, Subtle in The Lord; Life & Sci. of Albert Einstein(81)] is Siegel CyberWar denial-of-access(DOA) operations-research queuing theory/pinning/jamming/.../Read [Aikido, Aikibojitsu & Natural-Law(90)]/Aikido(!!!) phase-transition critical-phenomenon via Siegel DIGIT-Physics (Newcomb[Am.J.Math. 4,39(1881)]-{Planck[(1901)]-Einstein[(1905)])-Poincare[Calcul Probabilités(12)-p.313]-Weyl [Goett.Nachr.(14); Math.Ann.77,313 (16)]-{Bose[(24)-Einstein[(25)]-Fermi[(27)]-Dirac[(1927)]}-``Benford''[Proc.Am.Phil.Soc. 78,4,551 (38)]-Kac[Maths.Stat.-Reasoning(55)]-Raimi[Sci.Am. 221,109 (69)...]-Jech[preprint, PSU(95)]-Hill[Proc.AMS 123,3,887(95)]-Browne[NYT(8/98)]-Antonoff-Smith-Siegel[AMS Joint-Mtg.,S.-D.(02)] algebraic-inversion to yield ONLY BOSE-EINSTEIN QUANTUM-statistics (BEQS) with ZERO-digit Bose-Einstein CONDENSATION(BEC) ``INTERSECTION''-BECOME-UNION to Barabasi[PRL 876,5632(01); Rev.Mod.Phys.74,47(02)...] Network /Net/GRAPH(!!!)-physics BEC: Strutt/Rayleigh(1881)-Polya(21)-``Anderson''(58)-Siegel[J.Non-crystalline-Sol.40,453(80)
Kernel map compression for speeding the execution of kernel-based methods.
Arif, Omar; Vela, Patricio A
2011-06-01
The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss. PMID:21550884
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... weight of delivery 10,000 10,000 2. Percent of edible kernel weight 53.0 84.0 3. Less weight loss in... 7 Agriculture 8 2013-01-01 2013-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel...
7 CFR 51.2296 - Three-fourths half kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296... STANDARDS) United States Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2296 Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...
UPDATE OF GRAY KERNEL DISEASE OF MACADAMIA - 2006
Technology Transfer Automated Retrieval System (TEKTRAN)
Gray kernel is an important disease of macadamia that affects the quality of kernels with gray discoloration and a permeating, foul odor that can render entire batches of nuts unmarketable. We report on the successful production of gray kernel in raw macadamia kernels artificially inoculated with s...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams;...
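The truncated worked example in § 981.401 can be partially reconstructed as arithmetic. This is a hedged sketch: only the figures visible in the excerpts are used (a 1,000 gram sample, 530 grams of edible kernels, a 10,000 pound delivery), and the weight-loss deductions elided by the truncation are omitted.

```python
# Hedged reconstruction of the arithmetic behind the truncated
# 7 CFR 981.401 example; the elided deductions are not applied.

sample_g = 1000.0       # sample weight (grams)
edible_g = 530.0        # edible kernels in the sample (grams)
delivery_lb = 10000.0   # weight of delivery (pounds)

pct_edible = 100.0 * edible_g / sample_g            # percent edible kernel weight
kernel_weight_lb = delivery_lb * pct_edible / 100.0 # before deductions

assert pct_edible == 53.0
assert kernel_weight_lb == 5300.0
```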
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will...
KITTEN Lightweight Kernel 0.1 Beta
2007-12-12
The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.
Biological sequence classification with multivariate string kernels.
Kuksa, Pavel P
2013-01-01
String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on the analysis of discrete 1D string data (e.g., DNA or amino acid sequences). In this paper, we address the multiclass biological sequence classification problems using multivariate representations in the form of sequences of features vectors (as in biological sequence profiles, or sequences of individual amino acid physicochemical descriptors) and a class of multivariate string kernels that exploit these representations. On three protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20 percent improvements compared to existing state-of-the-art sequence classification methods. PMID:24384708
Biological Sequence Analysis with Multivariate String Kernels.
Kuksa, Pavel P
2013-03-01
String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. They often exhibit state-of-the-art performance on many practical tasks of sequence analysis such as biological sequence classification, remote homology detection, or protein superfamily and fold prediction. However, typical string kernel methods rely on analysis of discrete one-dimensional (1D) string data (e.g., DNA or amino acid sequences). In this work we address the multi-class biological sequence classification problems using multivariate representations in the form of sequences of features vectors (as in biological sequence profiles, or sequences of individual amino acid physico-chemical descriptors) and a class of multivariate string kernels that exploit these representations. On a number of protein sequence classification tasks, the proposed multivariate representations and kernels show significant 15-20 percent improvements compared to existing state-of-the-art sequence classification methods. PMID:23509193
Variational Dirichlet Blur Kernel Estimation.
Zhou, Xu; Mateos, Javier; Zhou, Fugen; Molina, Rafael; Katsaggelos, Aggelos K
2015-12-01
Blind image deconvolution involves two key objectives: 1) latent image estimation and 2) blur estimation. For latent image estimation, we propose a fast deconvolution algorithm, which uses an image prior of nondimensional Gaussianity measure to enforce sparsity and an undetermined boundary condition methodology to reduce boundary artifacts. For blur estimation, a linear inverse problem with normalization and nonnegative constraints must be solved. However, the normalization constraint is ignored in many blind image deblurring methods, mainly because it makes the problem less tractable. In this paper, we show that the normalization constraint can be very naturally incorporated into the estimation process by using a Dirichlet distribution to approximate the posterior distribution of the blur. Making use of variational Dirichlet approximation, we provide a blur posterior approximation that considers the uncertainty of the estimate and removes noise in the estimated kernel. Experiments with synthetic and real data demonstrate that the proposed method is very competitive to the state-of-the-art blind image restoration methods. PMID:26390458
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
TICK: Transparent Incremental Checkpointing at Kernel Level
2004-10-25
TICK is a software package implemented in Linux 2.6 that allows the save and restore of user processes, without any change to the user code or binary. With TICK a process can be suspended by the Linux kernel upon receiving an interrupt and saved in a file. This file can be later thawed in another computer running Linux (potentially the same computer). TICK is implemented as a Linux kernel module, in the Linux version 2.6.5
PET Image Reconstruction Using Kernel Method
Wang, Guobao; Qi, Jinyi
2014-01-01
Image reconstruction from low-count PET projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization (EM) algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4D dynamic PET patient dataset showed promising results. PMID:25095249
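The kernelized EM update described in the abstract can be sketched as follows: image intensity is modeled as x = K·α with K a kernel matrix built from prior features, and the standard MLEM multiplicative update is applied to the coefficients α. The system matrix P, the features, and the counts below are toy stand-ins, not real PET data.

```python
import numpy as np

# Sketch of kernelized MLEM: reconstruct coefficients alpha so that
# the image x = K @ alpha fits Poisson projection data y ~ P @ x.

rng = np.random.default_rng(1)
n_pix, n_bins = 30, 40
P = rng.random((n_bins, n_pix))            # toy projection matrix
features = rng.random((n_pix, 3))          # prior features per pixel

# Gaussian kernel matrix from the prior features.
d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 0.5)

x_true = rng.random(n_pix)
y = rng.poisson(50.0 * (P @ x_true))       # toy low-count data

alpha = np.ones(n_pix)
sens = K.T @ (P.T @ np.ones(n_bins))       # sensitivity term
for _ in range(100):
    ybar = P @ (K @ alpha) + 1e-12         # expected counts
    alpha *= (K.T @ (P.T @ (y / ybar))) / sens

x_hat = K @ alpha                          # reconstructed image
assert x_hat.shape == (n_pix,) and np.all(x_hat >= 0.0)
```

The multiplicative form keeps α (and hence the image) non-negative, and the kernel matrix acts as a feature-driven smoother folded directly into the forward model.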
Evaluating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
Analog forecasting with dynamics-adapted kernels
NASA Astrophysics Data System (ADS)
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
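The core move above, replacing a single analog with a kernel-weighted ensemble of analogs, can be sketched briefly. The scalar toy record, Gaussian bandwidth, and lead time below are illustrative assumptions; the paper's delay-coordinate maps and dynamics-adapted kernels are omitted.

```python
import numpy as np

# Sketch of kernel-weighted analog forecasting: average the
# successors of all historical analogs, weighted by similarity.

def analog_forecast(history, x0, lead, epsilon=0.1):
    states = history[:len(history) - lead]       # candidate analogs
    successors = history[lead:]                  # their lead-step futures
    w = np.exp(-((states - x0) ** 2) / epsilon)  # similarity kernel
    w /= w.sum()
    return float(np.sum(w * successors))         # weighted ensemble forecast

rng = np.random.default_rng(2)
t = np.arange(500)
record = np.sin(0.1 * t) + 0.05 * rng.normal(size=500)
pred = analog_forecast(record, x0=record[-1], lead=5)
assert -1.2 < pred < 1.2   # forecast stays within the record's range
```

Shrinking epsilon recovers Lorenz's single-analog limit (all weight on the closest match), while larger epsilon trades variance for bias by averaging over more analogs.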
Online Sequential Extreme Learning Machine With Kernels.
Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio
2015-09-01
The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmarking datasets. PMID:25561597
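The kernel least-squares machinery behind this family of methods can be illustrated with a deliberately naive sketch. This version re-solves the full regularized system after every sample rather than performing the true recursive rank-one update of kernel recursive least-squares, and it omits the KAF sparsification criteria the paper discusses; the class and parameter names are mine.

```python
import numpy as np

class OnlineKernelRLS:
    """Naive online regularized kernel least-squares (illustrative only).

    A real KRLS implementation maintains the inverse Gram matrix
    recursively and bounds the dictionary via a sparsification rule.
    """

    def __init__(self, gamma=1.0, reg=1e-3):
        self.gamma, self.reg = gamma, reg
        self.X, self.y, self.alpha = [], [], None

    def _k(self, a, b):
        # Gaussian kernel, vectorized over numpy arrays
        return np.exp(-self.gamma * (a - b) ** 2)

    def update(self, x, t):
        self.X.append(x)
        self.y.append(t)
        X = np.array(self.X)
        K = self._k(X[:, None], X[None, :])            # Gram matrix
        self.alpha = np.linalg.solve(K + self.reg * np.eye(len(X)),
                                     np.array(self.y))

    def predict(self, x):
        if self.alpha is None:
            return 0.0
        return float(self._k(np.array(self.X), x) @ self.alpha)
```

Each new sample adds one kernel expansion term, so without sparsification the model (and the cost of each solve) grows with the number of observations, which is exactly the issue the sparsity criteria address.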
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation. PMID:19897106
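The quadratic-entropy estimate mentioned above admits a compact closed form for a Gaussian kernel density, since the integral of the squared density reduces to a double sum over pairwise kernel evaluations (the Friedman-Tukey index). A minimal 1-D sketch, with the bandwidth as a free parameter:

```python
import numpy as np

def quadratic_entropy(x, h=0.5):
    """Renyi quadratic entropy of 1-D data via a Gaussian kernel density.

    For a Gaussian KDE with bandwidth h, the integral of the squared
    density has the closed form
        FT = (1/n^2) * sum_ij N(x_i - x_j | 0, 2*h^2),
    i.e. the Friedman-Tukey index, and H2 = -log(FT).
    """
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]          # all pairwise differences
    var = 2.0 * h * h                       # variance of the pairwise kernel
    ft = np.mean(np.exp(-diff ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var))
    return -np.log(ft)
```

Tightly clustered data (a peaked density) yields a large FT index and hence low quadratic entropy, which is why the index is useful as a Gaussianity/regularity measure for signals such as RR-interval series.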
Tile-Compressed FITS Kernel for IRAF
NASA Astrophysics Data System (ADS)
Seaman, R.
2011-07-01
The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most 𝒪(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
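The random-kernel-graph model itself is simple to state even though fast sampling is not. The quadratic reference sampler below just illustrates the model (vertex weights and the edge-probability convention are common choices, not taken from the paper); the paper's contribution is an algorithm that replaces exactly this loop with one running in roughly 𝒪(n(log n)²) time.

```python
import itertools
import random

def sample_kernel_graph(n, kernel, seed=0):
    """Naive O(n^2) sampler for an inhomogeneous random kernel graph.

    Vertex i carries weight x_i = (i + 1) / n, and edge {i, j} appears
    independently with probability min(1, kernel(x_i, x_j) / n).
    """
    rng = random.Random(seed)
    edges = []
    for i, j in itertools.combinations(range(n), 2):
        xi, xj = (i + 1) / n, (j + 1) / n
        if rng.random() < min(1.0, kernel(xi, xj) / n):
            edges.append((i, j))
    return edges
```

With a bounded kernel the expected degree is constant in n, so the resulting graphs are sparse, which is what makes a subquadratic sampler both necessary and possible.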
Experimental study of turbulent flame kernel propagation
Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve
2008-07-15
Flame kernels in spark-ignited combustion systems dominate the flame propagation, combustion stability, and performance. They are likely controlled by the spark energy, flow field, and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the shot-to-shot variation of spark energy. Four flames have been investigated at equivalence ratios, φj, of 0.8 and 1.0 and jet velocities, Uj, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 µs and 2 ms. The data show that the flame kernel structure starts with a spherical shape and changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric flame and at the slower jet velocity. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 µs, is almost three times faster than that at the later stage between 100 and 2000 µs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area, and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions. (author)
Full Waveform Inversion Using Waveform Sensitivity Kernels
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
2013-04-01
We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be carried out first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver
Volatile compound formation during argan kernel roasting.
El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe
2013-01-01
Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil. PMID:23472454
Modified wavelet kernel methods for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Hsu, Pai-Hui; Huang, Xiu-Man
2015-10-01
Hyperspectral images have the capability of acquiring images of the earth's surface with several hundred spectral bands. Providing such abundant spectral data should increase the ability to classify land use/cover types. However, due to the high dimensionality of hyperspectral data, traditional classification methods are not suitable for hyperspectral data classification. The common way to solve this problem is dimensionality reduction by feature extraction before classification. Kernel methods such as the support vector machine (SVM) and multiple kernel learning (MKL) have been successfully applied to hyperspectral image classification. In kernel method applications, the selection of the kernel function plays an important role. The wavelet kernel with multidimensional wavelet functions can find the optimal approximation of data in feature space for classification. The SVM with wavelet kernels (called WSVM) has also been applied to hyperspectral data and improves classification accuracy. In this study, a wavelet kernel method combining a multiple kernel learning algorithm and wavelet kernels was proposed for hyperspectral image classification. After the appropriate selection of a linear combination of kernel functions, the hyperspectral data are transformed to the wavelet feature space, which should have the optimal data distribution for kernel learning and classification. Finally, the proposed methods were compared with existing methods. A real hyperspectral data set was used to analyze the performance of the wavelet kernel method. According to the results, the proposed wavelet kernel methods perform well and would be an appropriate tool for hyperspectral image classification.
Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates
Hanft, J.M.; Jones, R.J.
1986-06-01
This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to (¹⁴C)sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on (¹⁴C)sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.
Accuracy of Reduced and Extended Thin-Wire Kernels
Burke, G J
2008-11-24
Some results are presented comparing the accuracy of the reduced thin-wire kernel and an extended kernel with exact integration of the 1/R term of the Green's function and results are shown for simple wire structures.
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
Charles Barnes; Clay Richardson; Scott Nagley; John Hunn; Eric Shaber
2010-10-01
Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% ²³⁵U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming of the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line for production of first-core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate the usefulness of the method.
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
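The core idea of combining a kernel expansion with temporal-difference learning can be shown on a toy two-state chain. This sketch is kernel TD(0) for policy evaluation only; the actual KTD(λ) algorithm adds eligibility traces, sparsification, and the policy-improvement machinery, and every name and parameter below is an illustrative assumption.

```python
import numpy as np

def kernel_td0(transitions, gamma=0.9, eta=0.1, width=0.5, sweeps=200):
    """Toy kernel TD(0): the value function is a growing Gaussian-kernel
    expansion over visited states, and each TD error adds a new center."""
    centers, alphas = [], []

    def value(x):
        if not centers:
            return 0.0
        k = np.exp(-((np.array(centers) - x) ** 2) / (2.0 * width ** 2))
        return float(k @ np.array(alphas))

    for _ in range(sweeps):
        for s, reward, s_next, done in transitions:
            target = reward + (0.0 if done else gamma * value(s_next))
            delta = target - value(s)      # TD error
            centers.append(s)              # new kernel center at s
            alphas.append(eta * delta)     # weighted by the TD error
    return value
```

On the chain 0 → (r = 0) → 1 → (r = 1) → terminal with γ = 0.9, the TD fixed point is V(1) = 1 and V(0) = 0.9, and the expansion converges there; without a sparsification rule the dictionary grows with every update, which is what strictly positive definite kernels and the paper's convergence analysis have to contend with.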
Kernel method and linear recurrence system
NASA Astrophysics Data System (ADS)
Hou, Qing-Hu; Mansour, Toufik
2008-06-01
Based on the kernel method, we present systematic methods to solve equation systems on generating functions of two variables. Using these methods, we get the generating functions for the number of permutations which avoid 1234 and 12k(k-1)...3 and permutations which avoid 1243 and 12...k.
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order... of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or brown spot, as defined in the United States Standards for Shelled Almonds, or which has embedded...
INTACT OR UNIT-KERNEL SWEET CORN
This report evaluates process and product modifications in canned and frozen sweet corn manufacture with the objective of reducing the total effluent produced in processing. In particular it evaluates the proposed replacement of process steps that yield cut or whole kernel corn w...
Arbitrary-resolution global sensitivity kernels
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Fournier, A.; Dahlen, F.
2007-12-01
Extracting observables out of any part of a seismogram (e.g. including diffracted phases such as Pdiff) necessitates the knowledge of 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. While known for a while, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploiting symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels, and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivity for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.
Application of the matrix exponential kernel
NASA Technical Reports Server (NTRS)
Rohach, A. F.
1972-01-01
A point matrix kernel for radiation transport, developed by the transmission matrix method, has been used to develop buildup factors and energy spectra through slab layers of different materials for a point isotropic source. Combinations of lead-water slabs were chosen for examples because of the extreme differences in shielding properties of these two materials.
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
Applying Single Kernel Sorting Technology to Developing Scab Resistant Lines
Technology Transfer Automated Retrieval System (TEKTRAN)
We are using automated single-kernel near-infrared (SKNIR) spectroscopy instrumentation to sort fusarium head blight (FHB) infected kernels from healthy kernels, and to sort segregating populations by hardness to enhance the development of scab resistant hard and soft wheat varieties. We sorted 3 r...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176...) INDIRECT FOOD ADDITIVES: PAPER AND PAPERBOARD COMPONENTS Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
21 CFR 176.350 - Tamarind seed kernel powder.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
Thermomechanical property of rice kernels studied by DMA
Technology Transfer Automated Retrieval System (TEKTRAN)
The thermomechanical property of the rice kernels was investigated using a dynamic mechanical analyzer (DMA). The length change of rice kernel with a loaded constant force along the major axis direction was detected during temperature scanning. The thermomechanical transition occurred in rice kernel...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... § 51.2125 Split or broken kernels. Split or broken kernels means seven-eighths or less of...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the...
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall...
Dose point kernel for boron-11 decay and the cellular S values in boron neutron capture therapy
Ma Yunzhi; Geng Jinpeng; Gao Song; Bao Shanglian
2006-12-15
The study of the radiobiology of boron neutron capture therapy is based on the cellular-level dosimetry of boron-10's thermal neutron capture reaction ¹⁰B(n,α)⁷Li, in which one 1.47 MeV helium-4 ion and one 0.84 MeV lithium-7 ion are spawned. Because of the chemical preference of boron-10 carrier molecules, the dose is heterogeneously distributed in cells. In the present work, the (scaled) dose point kernel of boron-11 decay, called ¹¹B-DPK, was calculated with the GEANT4 Monte Carlo simulation code. The DPK curve drops suddenly at a radius of 4.26 µm, the continuous slowing down approximation (CSDA) range of a lithium-7 ion. Then, after a slight ascent, the curve decreases to near zero when the radius goes beyond 8.20 µm, which is the CSDA range of a 1.47 MeV helium-4 ion. With the DPK data, S values for nuclei and cells with the boron-10 on the cell surface are calculated for different combinations of cell and nucleus sizes. The S value for a cell radius of 10 µm and a nucleus radius of 5 µm is slightly larger than the value published by Tung et al. [Appl. Radiat. Isot. 61, 739-743 (2004)]. This result is potentially more accurate than the published value since it includes the contribution of the lithium-7 ion as well as the alpha particle.
Kernel weights optimization for error diffusion halftoning method
NASA Astrophysics Data System (ADS)
Fedoseev, Victor
2015-02-01
This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of allowed values. WSNR (weighted signal-to-noise ratio) was used as the objective quality measure. The multidimensional optimization problem was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel that provides a quality gain of about 5% in comparison with the widely used kernel introduced by Floyd and Steinberg. Other kernels obtained allow the computational complexity of the halftoning process to be reduced significantly without loss of quality.
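The Floyd-Steinberg kernel that serves as the study's baseline can be sketched in a few lines. This is a generic illustration of error diffusion halftoning, not the optimized kernels found in the paper; the threshold and image are illustrative choices.

```python
import numpy as np

def error_diffusion(gray, threshold=0.5):
    """Binarize a grayscale image in [0, 1] using Floyd-Steinberg diffusion.

    The kernel weights 7/16, 3/16, 5/16, 1/16 push each pixel's
    quantization error onto its not-yet-processed neighbours.
    """
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= threshold else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16      # east
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16  # south-west
                img[y + 1, x] += err * 5 / 16           # south
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16   # south-east
    return out

# A flat mid-gray patch should halftone to roughly half black, half white.
patch = np.full((32, 32), 0.5)
halftone = error_diffusion(patch)
print(f"fraction white: {halftone.mean():.3f}")
```

Because the error is redistributed rather than discarded, the average tone of the output closely tracks the input, which is the property the optimized kernels in the study must also preserve.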
Chare kernel; A runtime support system for parallel computations
Shu, W.; Kale, L.V.
1991-03-01
This paper presents the chare kernel system, which supports parallel computations with irregular structure. The chare kernel is a collection of primitive functions that manage chares, manipulate messages, invoke atomic computations, and coordinate concurrent activities. Programs written in the chare kernel language can be executed on different parallel machines without change. Users writing such programs concern themselves with the creation of parallel actions but not with assigning them to specific processors. The authors describe the design and implementation of the chare kernel. Performance of chare kernel programs on two hypercube machines, the Intel iPSC/2 and the NCUBE, is also given.
Difference image analysis: automatic kernel design using information criteria
NASA Astrophysics Data System (ADS)
Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.
2016-03-01
We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
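Solving for a spatially invariant kernel of delta basis functions, as described above, reduces to a linear least-squares problem over shifted copies of the reference image. A minimal unregularized sketch follows; the paper's 19 solution methods, regularization variants, and information criteria are not reproduced here, and the image sizes are illustrative.

```python
import numpy as np

def solve_delta_kernel(ref, target, half=1):
    """Unregularized least-squares fit of a (2*half+1)^2 delta-basis
    convolution kernel mapping the reference image onto the target."""
    h, w = ref.shape
    shifts = [(dy, dx) for dy in range(-half, half + 1)
                       for dx in range(-half, half + 1)]
    interior = (slice(half, h - half), slice(half, w - half))
    # One design-matrix column per kernel pixel: the reference image
    # shifted by that pixel's offset (interior crop avoids wrap-around).
    A = np.stack([np.roll(ref, (dy, dx), axis=(0, 1))[interior].ravel()
                  for dy, dx in shifts], axis=1)
    b = target[interior].ravel()
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k.reshape(2 * half + 1, 2 * half + 1)

# Sanity check: recover a known blur kernel exactly in the noiseless case.
rng = np.random.default_rng(0)
ref = rng.standard_normal((40, 40))
true_k = np.array([[0.0, 0.1, 0.0],
                   [0.1, 0.6, 0.1],
                   [0.0, 0.1, 0.0]])
target = sum(true_k[dy + 1, dx + 1] * np.roll(ref, (dy, dx), axis=(0, 1))
             for dy in (-1, 0, 1) for dx in (-1, 0, 1))
est = solve_delta_kernel(ref, target)
```

An information criterion such as AIC would then trade the residual of this fit against the number of non-zero kernel pixels when comparing candidate kernel models.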
A meshfree unification: reproducing kernel peridynamics
NASA Astrophysics Data System (ADS)
Bessa, M. A.; Foster, J. T.; Belytschko, T.; Liu, Wing Kam
2014-06-01
This paper is the first investigation establishing the link between the meshfree state-based peridynamics method and other meshfree methods, in particular with the moving least squares reproducing kernel particle method (RKPM). It is concluded that the discretization of state-based peridynamics leads directly to an approximation of the derivatives that can be obtained from RKPM. However, state-based peridynamics obtains the same result at a significantly lower computational cost which motivates its use in large-scale computations. In light of the findings of this study, an update to the method is proposed such that the limitations regarding application of boundary conditions and the use of non-uniform grids are corrected by using the reproducing kernel approximation.
Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.; Vaidyanathan, Karthikeyan
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal for illustrating several optimization techniques. In this chapter we detail our work on optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; as we show, the techniques give excellent performance on the regular Xeon architecture as well.
Searching and Indexing Genomic Databases via Kernelization
Gagie, Travis; Puglisi, Simon J.
2015-01-01
The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper, we survey the 20-year history of this idea and discuss its relation to kernelization in parameterized complexity. PMID:25710001
Multiple kernel learning for dimensionality reduction.
Lin, Yen-Yu; Liu, Tyng-Luh; Fuh, Chiou-Shann
2011-06-01
In solving complex visual learning tasks, adopting multiple descriptors to characterize the data more precisely has been a feasible way of improving performance. The resulting data representations are typically high-dimensional and assume diverse forms. Hence, finding a way of transforming them into a unified space of lower dimension generally facilitates the underlying tasks, such as object recognition or clustering. To this end, the proposed approach (termed MKL-DR) generalizes the framework of multiple kernel learning for dimensionality reduction, and distinguishes itself with the following three main contributions: First, our method provides the convenience of using diverse image descriptors to describe useful characteristics of various aspects of the underlying data. Second, it extends a broad set of existing dimensionality reduction techniques to consider multiple kernel learning, and consequently improves their effectiveness. Third, by focusing on the techniques pertaining to dimensionality reduction, the formulation introduces a new class of applications with the multiple kernel learning framework to address not only supervised learning problems but also unsupervised and semi-supervised ones. PMID:20921580
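The building block underlying multiple kernel learning approaches like the one above can be illustrated generically: a nonnegative weighted sum of base kernel matrices is itself a valid (positive semidefinite) kernel, so descriptors with different kernels can share one space. This sketch shows only that building block, not the MKL-DR optimization; the kernel choices and weights are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, gamma=1.0):
    # Gram matrix of the RBF kernel exp(-gamma * ||x_i - x_j||^2)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def polynomial_kernel(X, degree=2, c=1.0):
    # Gram matrix of the polynomial kernel (x_i . x_j + c)^degree
    return (X @ X.T + c) ** degree

def combine_kernels(kernels, weights):
    """Nonnegative weighted sum of base kernels; remains symmetric PSD."""
    weights = np.asarray(weights, dtype=float)
    assert (weights >= 0).all(), "negative weights can break PSD-ness"
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
K = combine_kernels([gaussian_kernel(X), polynomial_kernel(X)], [0.7, 0.3])
eigvals = np.linalg.eigvalsh(K)   # all >= 0 up to numerical noise
```

In MKL-DR the weights themselves would be learned jointly with the dimensionality-reduction projection rather than fixed as here.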
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on the Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identify the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications covering binary classification, multi-class problems, and regression are then reported to show that RKELM can perform at a competitive level of generalization performance as the SVM/LS-SVM at only a fraction of the computational effort incurred. PMID:26829605
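The non-iterative recipe the abstract describes — randomly pick support vectors, then solve one regularized least-squares problem in the reduced kernel space — can be sketched as below. The class name, parameters, and RBF kernel choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # RBF kernel matrix between row-sample matrices A (n x d) and B (m x d)
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

class ReducedKernelELM:
    def __init__(self, n_support=20, gamma=1.0, C=100.0, seed=0):
        self.n_support, self.gamma, self.C = n_support, gamma, C
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random subset as support vectors -- no iterative SVM-style selection.
        idx = self.rng.choice(len(X), self.n_support, replace=False)
        self.support_ = X[idx]
        K = rbf(X, self.support_, self.gamma)        # n x m reduced kernel map
        # Single regularized least-squares solve for the output weights.
        self.beta_ = np.linalg.solve(
            K.T @ K + np.eye(self.n_support) / self.C, K.T @ y)
        return self

    def predict(self, X):
        return rbf(X, self.support_, self.gamma) @ self.beta_

# Toy binary problem: two well-separated Gaussian blobs, labels in {-1, +1}.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([-1.0] * 50 + [1.0] * 50)
model = ReducedKernelELM(n_support=20, gamma=0.5).fit(X, y)
acc = (np.sign(model.predict(X)) == y).mean()
```

The training cost is dominated by one m x m linear solve (m = number of support vectors), which is the source of the speedup over iterative SVM training.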
A Kernel Classification Framework for Metric Learning.
Wang, Faqiang; Zuo, Wangmeng; Zhang, Lei; Meng, Deyu; Zhang, David
2015-09-01
Learning a distance metric from the given training samples plays a crucial role in many machine learning tasks, and various models and optimization algorithms have been proposed in the past decade. In this paper, we generalize several state-of-the-art metric learning methods, such as large margin nearest neighbor (LMNN) and information theoretic metric learning (ITML), into a kernel classification framework. First, doublets and triplets are constructed from the training samples, and a family of degree-2 polynomial kernel functions is proposed for pairs of doublets or triplets. Then, a kernel classification framework is established to generalize many popular metric learning methods such as LMNN and ITML. The proposed framework can also suggest new metric learning methods, which can be efficiently implemented, interestingly, using the standard support vector machine (SVM) solvers. Two novel metric learning methods, namely, doublet-SVM and triplet-SVM, are then developed under the proposed framework. Experimental results show that doublet-SVM and triplet-SVM achieve competitive classification accuracies with state-of-the-art metric learning methods but with significantly less training time. PMID:25347887
Semi-Supervised Kernel Mean Shift Clustering.
Anand, Saket; Mittal, Sushil; Tuzel, Oncel; Meer, Peter
2014-06-01
Mean shift clustering is a powerful nonparametric technique that does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. However, being completely unsupervised, its performance suffers when the original distance metric fails to capture the underlying cluster structure. Despite recent advances in semi-supervised clustering methods, there has been little effort towards incorporating supervision into mean shift. We propose a semi-supervised framework for kernel mean shift clustering (SKMS) that uses only pairwise constraints to guide the clustering procedure. The points are first mapped to a high-dimensional kernel space where the constraints are imposed by a linear transformation of the mapped points. This is achieved by modifying the initial kernel matrix by minimizing a log det divergence-based objective function. We show the advantages of SKMS by evaluating its performance on various synthetic and real datasets while comparing with state-of-the-art semi-supervised clustering algorithms. PMID:26353281
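For reference, the plain unsupervised Gaussian mean shift iteration that SKMS builds on can be sketched as follows; the semi-supervised machinery (pairwise constraints and the log det divergence kernel update) is not included, and the bandwidth and data are illustrative.

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, n_iter=50):
    """Plain Gaussian mean shift: each point iteratively moves to the
    kernel-weighted mean of the data until it settles on a density mode."""
    modes = X.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(modes):
            w = np.exp(-((X - p) ** 2).sum(1) / (2 * bandwidth ** 2))
            modes[i] = (w[:, None] * X).sum(0) / w.sum()
    return modes

# Two well-separated blobs: every point should collapse onto its blob's mode,
# with no need to specify the number of clusters in advance.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(10, 0.3, (40, 2))])
modes = mean_shift(X, bandwidth=1.0)
```

SKMS replaces the Euclidean distances above with distances in a learned kernel space, so that the pairwise constraints reshape which points share a mode.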
NASA Astrophysics Data System (ADS)
Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz
2016-01-01
At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
Protein interaction sentence detection using multiple semantic kernels
2011-01-01
Background: Detection of sentences that describe protein-protein interactions (PPIs) in biomedical publications is a challenging and unresolved pattern recognition problem. Many state-of-the-art approaches for this task employ kernel classification methods, in particular support vector machines (SVMs). In this work we propose a novel data integration approach that utilises semantic kernels and a kernel classification method that is a probabilistic analogue to SVMs. Semantic kernels are created from statistical information gathered from large amounts of unlabelled text using lexical semantic models. Several semantic kernels are then fused into an overall composite classification space. In this initial study, we use simple features in order to examine whether the use of combinations of kernels constructed using word-based semantic models can improve PPI sentence detection. Results: We show that combinations of semantic kernels lead to statistically significant improvements in recognition rates and receiver operating characteristic (ROC) scores over the plain Gaussian kernel, when applied to a well-known labelled collection of abstracts. The proposed kernel composition method also allows us to automatically infer the most discriminative kernels. Conclusions: The results from this paper indicate that using semantic information from unlabelled text, and combinations of such information, can be valuable for classification of short texts such as PPI sentences. This study, however, is only a first step in the evaluation of semantic kernels and probabilistic multiple kernel learning in the context of PPI detection. The method described herein is modular, and can be applied with a variety of feature types, kernels, and semantic models, in order to facilitate full extraction of interacting proteins. PMID:21569604
Multiple kernel learning for sparse representation-based classification.
Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama
2014-07-01
In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
CHIBANI, OMAR
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
Scale-invariant Lipatov kernels from t-channel unitarity
Coriano, C.; White, A.R.
1994-11-14
The Lipatov equation can be regarded as a reggeon Bethe-Salpeter equation in which higher-order reggeon interactions give higher-order kernels. Infra-red singular contributions in a general kernel are produced by t-channel nonsense states and the allowed kinematic forms are determined by unitarity. Ward identity and infra-red finiteness gauge invariance constraints then determine the corresponding scale-invariant part of a general higher-order kernel.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (or other types of noise) on the original training samples to obtain possible variations of the originals. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and Kernel CRC.
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a round-hole 1.0 mm screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. Many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel. PMID:25328207
A short-time Beltrami kernel for smoothing images and manifolds.
Spira, Alon; Kimmel, Ron; Sochen, Nir
2007-06-01
We introduce a short-time kernel for the Beltrami image enhancing flow. The flow is implemented by "convolving" the image with a space dependent kernel in a similar fashion to the solution of the heat equation by a convolution with a Gaussian kernel. The kernel is appropriate for smoothing regular (flat) 2-D images, for smoothing images painted on manifolds, and for simultaneously smoothing images and the manifolds they are painted on. The kernel combines the geometry of the image and that of the manifold into one metric tensor, thus enabling a natural unified approach for the manipulation of both. Additionally, the derivation of the kernel gives a better geometrical understanding of the Beltrami flow and shows that the bilateral filter is a Euclidean approximation of it. On a practical level, the use of the kernel allows arbitrarily large time steps as opposed to the existing explicit numerical schemes for the Beltrami flow. In addition, the kernel works with equal ease on regular 2-D images and on images painted on parametric or triangulated manifolds. We demonstrate the denoising properties of the kernel by applying it to various types of images and manifolds. PMID:17547140
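The abstract notes that the bilateral filter is a Euclidean approximation of the Beltrami flow. For comparison, a brute-force bilateral filter looks like the sketch below; this is the generic filter, not the paper's short-time Beltrami kernel, and the parameters are illustrative.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.2, half=3):
    """Brute-force bilateral filter on a 2-D grayscale image:
    a spatial Gaussian multiplied by a range (intensity) Gaussian,
    so averaging is suppressed across strong edges."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, half, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * half + 1, x:x + 2 * half + 1]
            range_w = np.exp(-(patch - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            weight = spatial * range_w
            out[y, x] = (weight * patch).sum() / weight.sum()
    return out

# A noisy step edge: the filter should smooth the flat regions
# while leaving the 0-to-1 edge essentially intact.
step = np.hstack([np.zeros((20, 10)), np.ones((20, 10))])
noisy = step + np.random.default_rng(5).normal(0, 0.05, step.shape)
out = bilateral_filter(noisy)
```

The Beltrami kernel replaces this fixed space-plus-intensity weighting with a metric derived from the image manifold, which is what allows the same machinery to smooth images painted on curved surfaces.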
Isolation of bacterial endophytes from germinated maize kernels.
Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja
2007-06-01
The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. Genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterium strain inhibited fungal growth in vitro. PMID:17668041
A Kernel-based Account of Bibliometric Measures
NASA Astrophysics Data System (ADS)
Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji
The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
Optimized Derivative Kernels for Gamma Ray Spectroscopy
Vlachos, D. S.; Kosmas, O. T.; Simos, T. E.
2007-12-26
In gamma ray spectroscopy, the photon detectors measure the number of photons whose energy lies in an interval called a channel. This accumulation of counts produces a measured function whose deviation from the ideal one may produce high noise in the unfolded spectrum. To deal with this problem, the ideal accumulation function is interpolated with the use of specially designed derivative kernels. Simulation results are presented which show that this approach is very effective even in spectra with low statistics.
Oil point pressure of Indian almond kernels
NASA Astrophysics Data System (ADS)
Aregbesola, O.; Olatunde, G.; Esuola, S.; Owolarafe, O.
2012-07-01
The effect of preprocessing conditions such as moisture content, heating temperature, heating time, and particle size on the oil point pressure of Indian almond kernels was investigated. Results showed that oil point pressure was significantly (P < 0.05) affected by the above-mentioned parameters. It was also observed that oil point pressure decreased with increases in heating temperature and heating time for both coarse and fine particles. Furthermore, an increase in moisture content resulted in increased oil point pressure for coarse particles, while oil point pressure decreased with increasing moisture content for fine particles.
Verification of Chare-kernel programs
Bhansali, S.; Kale, L.V.
1989-01-01
Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for proving the partial correctness of programs written in the Chare-kernel language, which is a language designed to support the parallel execution of computations with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.
Prediction of kernel density of corn using single-kernel near infrared spectroscopy
Technology Transfer Automated Retrieval System (TEKTRAN)
Corn hardness is an important property for dry- and wet-millers, food processors, and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurements are one of the more repeatable methods to quantify hardness. Near infrared spec...
Linear and kernel methods for multi- and hypervariate change detection
NASA Astrophysics Data System (ADS)
Nielsen, Allan A.; Canty, Morton J.
2010-10-01
The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised change detection in multi- and hyperspectral remote sensing imagery as well as for automatic radiometric normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function, and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA), kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of training data samples only. To obtain a transformed version of the entire image we then project all pixels, which we call the test data, mapped nonlinearly onto the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written.
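The subsample-then-project scheme described above — a kernel eigenvalue analysis fitted on training samples, with the remaining "test" pixels projected via the kernel function — can be sketched for kernel PCA as follows. This is a generic NumPy sketch rather than the authors' IDL implementation; the RBF kernel and its parameters are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # RBF kernel matrix between row-sample matrices A and B
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def kernel_pca_fit(X_train, gamma=0.5, n_comp=2):
    """Kernel PCA on a training subsample: eigendecompose the centred
    Gram matrix; the dual eigenvectors let us project test data later."""
    n = len(X_train)
    K = rbf(X_train, X_train, gamma)
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_comp]      # leading components
    alphas = vecs[:, order] / np.sqrt(vals[order])  # normalised dual vectors
    return X_train, K, alphas, gamma

def kernel_pca_project(model, X_test):
    """Project new samples using only kernel evaluations against training data."""
    X_train, K, alphas, gamma = model
    Kt = rbf(X_test, X_train, gamma)
    # Centre the test kernel consistently with the training Gram matrix.
    Kt_c = Kt - Kt.mean(1, keepdims=True) - K.mean(0) + K.mean()
    return Kt_c @ alphas

# Fit on a small "training" subsample, then project (here: the same points).
rng = np.random.default_rng(3)
X = rng.standard_normal((30, 4))
model = kernel_pca_fit(X, n_comp=2)
P = kernel_pca_project(model, X)
```

In the change-detection setting, `X` would be the sub-sampled training pixels and every remaining image pixel would pass through `kernel_pca_project`, so the Gram matrix never grows beyond the training-sample size.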
Scientific Computing Kernels on the Cell Processor
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
Transcriptome analysis of Ginkgo biloba kernels
He, Bing; Gu, Yincong; Xu, Meng; Wang, Jianwen; Cao, Fuliang; Xu, Li-an
2015-01-01
Ginkgo biloba is a dioecious species native to China with medicinally and phylogenetically important characteristics; however, genomic resources for this species are limited. In this study, we performed the first transcriptome sequencing for Ginkgo kernels at five time points using Illumina paired-end sequencing. Approximately 25.08 Gb of clean reads were obtained, and 68,547 unigenes with an average length of 870 bp were generated by de novo assembly. Of these unigenes, 29,987 (43.74%) were annotated in publicly available plant protein databases. A total of 3,869 genes were identified as significantly differentially expressed, and enrichment analysis was conducted at different time points. Furthermore, metabolic pathway analysis revealed that 66 unigenes were responsible for terpenoid backbone biosynthesis, with up to 12 up-regulated unigenes involved in the biosynthesis of ginkgolide and bilobalide. Differential gene expression analysis together with real-time PCR experiments indicated that the synthesis of bilobalide may have interfered with the ginkgolide synthesis process in the kernel. These data substantially expand the existing transcriptome resources of Ginkgo and provide a valuable platform for revealing more about the developmental and metabolic mechanisms of this species. PMID:26500663
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of those distributions, with areas of influence defined by the distance between each species' centroid and its farthest point of occurrence. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
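The kernel-interpolation idea behind GIE can be caricatured in a few lines: each species contributes a kernel centered on its range centroid, with bandwidth set by its area of influence, and summing the kernels yields a continuous endemism surface with fuzzy edges. This is a loose sketch of the concept only (the actual GIE overlap estimation is more involved), with invented names.

```python
import numpy as np

def endemism_surface(centroids, radii, grid_pts):
    """Sum one Gaussian kernel per species over query points.
    centroids: (n_species, 2) range centroids; radii: per-species
    centroid-to-farthest-occurrence distances used as bandwidths."""
    surf = np.zeros(len(grid_pts))
    for c, r in zip(centroids, radii):
        d2 = np.sum((grid_pts - c) ** 2, axis=1)
        surf += np.exp(-d2 / (2.0 * r * r))
    return surf
```

Points where many species' kernels overlap score high, which is what makes the method independent of any grid-cell scheme.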
Aligning Biomolecular Networks Using Modular Graph Kernels
NASA Astrophysics Data System (ADS)
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
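As background, a graph kernel simply maps a pair of (sub)graphs to a similarity score. The toy below, a node-label histogram kernel, is far cruder than the kernels implemented in BiNA, but it shows the interface such a scoring function has: two labeled subgraphs in, one number out.

```python
from collections import Counter

def label_histogram_kernel(g1_labels, g2_labels):
    """Inner product of node-label histograms: a minimal, valid graph
    kernel (illustrative stand-in for BiNA's kernels). Inputs are the
    node-label multisets of two (sub)graphs."""
    h1, h2 = Counter(g1_labels), Counter(g2_labels)
    return sum(h1[k] * h2[k] for k in h1 if k in h2)
```

Scores like this, computed for all subgraph pairs, are exactly what the alignment step above consumes.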
Technology Transfer Automated Retrieval System (TEKTRAN)
Maize kernel density impacts milling quality of the grain due to kernel hardness. Harder kernels are correlated with higher test weight and are more resistant to breakage during harvest and transport. Softer kernels, in addition to being susceptible to mechanical damage, are also prone to pathogen ...
Introduction to Kernel Methods: Classification of Multivariate Data
NASA Astrophysics Data System (ADS)
Fauvel, M.
2016-05-01
In this chapter, kernel methods are presented for the classification of multivariate data. An introductory example is given to illustrate the main idea of kernel methods. Emphasis is then placed on the Support Vector Machine. Structural risk minimization is presented, and linear and non-linear SVMs are described. Finally, a full example of SVM classification is given on simulated hyperspectral data.
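The chapter's main idea, working with kernel evaluations instead of explicit feature maps, can be demonstrated without a full SVM. The sketch below uses a kernel perceptron (chosen for brevity; the chapter itself develops SVMs) with an RBF kernel to separate XOR-like data that no linear classifier can handle.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel between two points."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_perceptron(X, y, gamma=1.0, epochs=20):
    """Kernel perceptron: the decision function
    f(x) = sign(sum_i a_i y_i k(x_i, x)) uses only kernel evaluations,
    never an explicit feature map -- the 'kernel trick' in miniature."""
    n = len(X)
    a = np.zeros(n)
    K = np.array([[rbf(X[i], X[j], gamma) for j in range(n)]
                  for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            if np.sign(np.sum(a * y * K[:, i])) != y[i]:
                a[i] += 1.0  # mistake-driven update on the dual weight
    return lambda x: np.sign(sum(a[i] * y[i] * rbf(X[i], x, gamma)
                                 for i in range(n)))
```

With a linear kernel this classifier would never converge on XOR data; swapping in the RBF kernel is the entire fix, which is the point the chapter's introductory example makes.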
Comparison of Kernel Equating and Item Response Theory Equating Methods
ERIC Educational Resources Information Center
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
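For context, the distinctive step of kernel equating is continuizing a discrete score distribution with a Gaussian kernel, so that equipercentile mapping between two test forms becomes well defined. The sketch below shows only that step and omits KE's mean- and variance-preserving adjustment; the bandwidth value is arbitrary.

```python
import math
import numpy as np

def continuize(scores, probs, h=0.6):
    """Gaussian-kernel continuization of a discrete score distribution:
    each score point contributes a smooth normal CDF step of width h.
    Returns the continuized CDF as a callable (simplified sketch of the
    KE continuization step)."""
    scores = np.asarray(scores, dtype=float)
    probs = np.asarray(probs, dtype=float)

    def cdf(x):
        z = (x - scores) / h
        phi = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))
                        for v in z])
        return float(np.sum(probs * phi))

    return cdf
```

Given two such CDFs F_X and F_Y, the equating function is F_Y^{-1}(F_X(x)); the bandwidth h controls how aggressively the discrete distribution is smoothed.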
High speed sorting of Fusarium-damaged wheat kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, no cost-effective method is yet available to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...
Covariant Perturbation Expansion of Off-Diagonal Heat Kernel
NASA Astrophysics Data System (ADS)
Gou, Yu-Zi; Li, Wen-Du; Zhang, Ping; Dai, Wu-Sheng
2016-07-01
Covariant perturbation expansion is an important method in quantum field theory. In this paper an expansion up to arbitrary order for off-diagonal heat kernels in flat space based on the covariant perturbation expansion is given. In literature, only diagonal heat kernels are calculated based on the covariant perturbation expansion.
Evidence-Based Kernels: Fundamental Units of Behavioral Influence
ERIC Educational Resources Information Center
Embry, Dennis D.; Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior-influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of…
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
7 CFR 981.60 - Determination of kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
Integrating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integral of the gradient of the wire kernel must be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
Polynomial Kernels for Hard Problems on Disk Graphs
NASA Astrophysics Data System (ADS)
Jansen, Bart
Kernelization is a powerful tool to obtain fixed-parameter tractable algorithms. Recent breakthroughs show that many graph problems admit small polynomial kernels when restricted to sparse graph classes such as planar graphs, bounded-genus graphs or H-minor-free graphs. We consider the intersection graphs of (unit) disks in the plane, which can be arbitrarily dense but do exhibit some geometric structure. We give the first kernelization results on these dense graph classes. Connected Vertex Cover has a kernel with 12k vertices on unit-disk graphs and with 3k² + 7k vertices on disk graphs with arbitrary radii. Red-Blue Dominating Set parameterized by the size of the smallest color class has a linear-vertex kernel on planar graphs, a quadratic-vertex kernel on unit-disk graphs and a quartic-vertex kernel on disk graphs. Finally we prove that H-Matching on unit-disk graphs has a linear-vertex kernel for every fixed graph H.
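The notion of a problem kernel used here can be illustrated with the classic Buss reduction for Vertex Cover (a textbook example, not one of the paper's disk-graph kernels): any vertex of degree greater than k must be in every size-k cover, and after removing such vertices a yes-instance has at most k² edges left.

```python
from collections import defaultdict

def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover. Repeatedly forces vertices
    of degree > k into the cover; if more than k^2 edges survive, no
    size-k cover exists. Returns (reduced_edges, remaining_budget) or
    None when the instance is rejected."""
    while True:
        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        high = {v for v, d in deg.items() if d > k}
        if not high:
            break
        edges = [(u, v) for u, v in edges
                 if u not in high and v not in high]
        k -= len(high)          # forced vertices consume the budget
        if k < 0:
            return None
    if len(edges) > k * k:      # each cover vertex covers <= k edges
        return None
    return edges, k
```

The reduced instance has size bounded by a polynomial in k alone, which is exactly what "polynomial kernel" means in the abstract above.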
Optimal Bandwidth Selection in Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Evidence-based Kernels: Fundamental Units of Behavioral Influence
Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600
Sugar uptake into kernels of tunicate tassel-seed maize
Thomas, P.A.; Felker, F.C.; Crawford, C.G.
1990-05-01
A maize (Zea mays L.) strain expressing both the tassel-seed (Ts-5) and tunicate (Tu) characters was developed which produces glume-covered kernels on the tassel, often borne on 7-10 mm pedicels. Vigorous plants produce up to 100 such kernels interspersed with additional sessile kernels. This floral unit provides a potentially valuable experimental system for studying sugar uptake into developing maize seeds. When detached kernels (with glumes and pedicel intact) are placed in incubation solution, fluid flows up the pedicel and into the glumes, entering the pedicel apoplast near the kernel base. The unusual anatomical features of this maize strain permit experimental access to the pedicel apoplast with much less possibility of kernel base tissue damage than with kernels excised from the cob. ¹⁴C-Fructose incorporation into soluble and insoluble fractions of endosperm increased for 8 days. Endosperm uptake of sucrose, fructose, and D-glucose was significantly greater than that of L-glucose. Fructose uptake was significantly inhibited by CCCP, DNP, and PCMBS. These results suggest the presence of an active, non-diffusion component of sugar transport in maize kernels.
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains similar features to a theoretical damping kernel but not to a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
A Robustness Testing Campaign for IMA-SP Partitioning Kernels
NASA Astrophysics Data System (ADS)
Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David
2015-09-01
With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.
OSKI: A Library of Automatically Tuned Sparse Matrix Kernels
Vuduc, R; Demmel, J W; Yelick, K A
2005-07-19
The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
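A plain reference version of OSKI's flagship kernel, sparse matrix-vector multiply over the compressed sparse row (CSR) format, looks as follows; OSKI's contribution is selecting and tuning faster variants of exactly this loop at run time for a given matrix and machine.

```python
def csr_spmv(data, indices, indptr, x):
    """y = A @ x for a CSR matrix: data holds nonzero values,
    indices their column numbers, and indptr[i]:indptr[i+1] delimits
    row i. A plain reference loop, not OSKI's tuned code."""
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):
        for j in range(indptr[i], indptr[i + 1]):
            y[i] += data[j] * x[indices[j]]
    return y
```

The indirect access `x[indices[j]]` is what makes this kernel hard to optimize generically, and why run-time tuning against the actual sparsity pattern pays off.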
Technology Transfer Automated Retrieval System (TEKTRAN)
The current US corn grading system accounts for the portion of damaged kernels, which is measured by time-consuming and inaccurate visual inspection. Near infrared spectroscopy (NIRS), a non-destructive and fast analytical method, was tested as a tool for discriminating corn kernels with heat and f...
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. The choice of methodology was based on the principle that many biological materials exhibit fluorescenc...
Modified kernel-based nonlinear feature extraction.
Ma, J.; Perkins, S. J.; Theiler, J. P.; Ahalt, S.
2002-01-01
Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation inherent in these algorithms -- the maximal number of features extracted by them is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of those KFE algorithms (MKFE). This algorithm is developed from a special form of scatter matrix, whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation of those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.
Privacy preserving RBF kernel support vector machine.
Li, Haoran; Xiong, Li; Ohno-Machado, Lucila; Jiang, Xiaoqian
2014-01-01
Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed but they did not consider the characteristics of biomedical data and make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data. PMID:25013805
Point-Kernel Shielding Code System.
1982-02-17
Version 00 QAD-BSA is a three-dimensional, point-kernel shielding code system based upon the CCC-48/QAD series. It is designed to calculate photon dose rates and heating rates using exponential attenuation and infinite medium buildup factors. Calculational provisions include estimates of fast neutron penetration using data computed by the moments method. Included geometry routines can describe complicated source and shield geometries. An internal library contains data for many frequently used structural and shielding materials, enabling the code to solve most problems with only source strengths and problem geometry required as input. This code system adapts especially well to problems requiring multiple sources and sources with asymmetrical geometry. In addition to being edited separately, the total interaction rates from many sources may be edited at each detector point. Calculated photon interaction rates agree closely with those obtained using QAD-P5A.
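The point-kernel method itself reduces to a one-line formula per source-detector pair: exponential attenuation and a buildup factor applied to the geometric 1/(4πr²) spread. The sketch below is schematic (units, constants, and the flux-to-dose conversion are simplified; it is not QAD-BSA's implementation).

```python
import math

def point_kernel_dose(S, mu, mu_en_over_rho, r, B=1.0):
    """Point-kernel photon dose estimate: source strength S, linear
    attenuation coefficient mu, distance r, infinite-medium buildup
    factor B, and a schematic flux-to-dose factor mu_en_over_rho.
    Illustrative sketch of the method, not QAD-BSA's code."""
    flux = S * B * math.exp(-mu * r) / (4.0 * math.pi * r * r)
    return flux * mu_en_over_rho
```

A volume source is handled by summing this kernel over many source points, which is why the method extends naturally to the multiple, asymmetric sources the abstract mentions.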
Kernel density estimation using graphical processing unit
NASA Astrophysics Data System (ADS)
Sunarko, Su'ud, Zaki
2015-09-01
Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
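The computation being parallelized is a standard Gaussian kernel density estimate evaluated at node points: every node sums a kernel contribution from every particle, which is why the per-node work maps cleanly onto GPU threads. A vectorized CPU sketch (NumPy rather than CUDA-C; names and bandwidth are illustrative):

```python
import numpy as np

def kde_2d(particles, nodes, h=0.2):
    """Gaussian KDE over 2-D particles, evaluated at node points.
    Each node's density is an independent sum over all particles --
    the structure the paper assigns one node per GPU thread."""
    d2 = np.sum((nodes[:, None, :] - particles[None, :, :]) ** 2, axis=-1)
    norm = 1.0 / (2.0 * np.pi * h * h * len(particles))
    return norm * np.sum(np.exp(-d2 / (2.0 * h * h)), axis=1)
```

Since no node's result depends on any other node's, the loop parallelizes without synchronization, matching the speedups reported above.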
Labeled Graph Kernel for Behavior Analysis.
Zhao, Ruiqi; Martinez, Aleix M
2016-08-01
Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data. PMID:26415154
The flare kernel in the impulsive phase
NASA Technical Reports Server (NTRS)
Dejager, C.
1986-01-01
The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.
Gaussian kernel width optimization for sparse Bayesian learning.
Mohsenzadeh, Yalda; Sheikhzadeh, Hamid
2015-04-01
Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters. PMID:25794377
Classification of maize kernels using NIR hyperspectral imaging.
Williams, Paul J; Kucheryavskiy, Sergey
2016-10-15
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual kernels and did not give acceptable results because of high misclassification. However, by using a predefined threshold and classifying entire kernels based on the number of correctly predicted pixels, improved results were achieved (sensitivity and specificity of 0.75 and 0.97). Object-wise classification was performed using two methods for feature extraction - score histograms and mean spectra. The model based on score histograms performed better for hard kernel classification (sensitivity and specificity of 0.93 and 0.97), while that of mean spectra gave better results for medium kernels (sensitivity and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on a production scale. PMID:27173544
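The thresholded decision described above, classify a whole kernel by the fraction of its pixels assigned to a class, is a one-line voting rule; the threshold below is illustrative, not the study's calibrated value.

```python
def classify_kernel(pixel_preds, target_class, threshold=0.5):
    """Object-level decision from per-pixel predictions: assign the
    kernel to target_class when the fraction of pixels predicted as
    that class exceeds the threshold (illustrative sketch)."""
    frac = sum(1 for p in pixel_preds if p == target_class) / len(pixel_preds)
    return frac > threshold
```

Aggregating noisy per-pixel labels this way is what lifted the pixel-wise results to the acceptable sensitivities and specificities reported above.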
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
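The quantity under study, the 95% kernel home-range area, can be sketched as follows: evaluate a fixed-kernel KDE on a grid, then take the smallest set of cells containing 95% of the estimated utilization distribution. Bandwidth selection (e.g., by LSCV, as the authors recommend) is assumed already done; the names and grid parameters are invented.

```python
import numpy as np

def home_range_95(points, h, grid_step=1.0, pad=5.0):
    """95% fixed-kernel home-range area: grid the study region,
    evaluate a Gaussian KDE, and count the fewest cells whose summed
    density reaches 0.95 (illustrative sketch; h from LSCV or similar)."""
    xs = np.arange(points[:, 0].min() - pad, points[:, 0].max() + pad, grid_step)
    ys = np.arange(points[:, 1].min() - pad, points[:, 1].max() + pad, grid_step)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    d2 = np.sum((grid[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    dens = np.exp(-d2 / (2.0 * h * h)).sum(axis=1)
    dens /= dens.sum()                      # normalize to a distribution
    order = np.sort(dens)[::-1]             # densest cells first
    n_cells = int(np.searchsorted(np.cumsum(order), 0.95)) + 1
    return n_cells * grid_step ** 2
```

Because the area depends on the tail of the estimated density, small location samples inflate it, which is the bias the simulations above quantify.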
Initial-state splitting kernels in cold nuclear matter
NASA Astrophysics Data System (ADS)
Ovanesyan, Grigory; Ringer, Felix; Vitev, Ivan
2016-09-01
We derive medium-induced splitting kernels for energetic partons that undergo interactions in dense QCD matter before a hard-scattering event at large momentum transfer Q². Working in the framework of the effective theory SCET_G, we compute the splitting kernels beyond the soft gluon approximation. We present numerical studies that compare our new results with previous findings. We expect the full medium-induced splitting kernels to be most relevant for the extension of initial-state cold nuclear matter energy loss phenomenology in both p+A and A+A collisions.
Machine learning algorithms for damage detection: Kernel-based approaches
NASA Astrophysics Data System (ADS)
Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.
2016-02-01
This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, based on the one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contribution of this study is the applicability of the proposed algorithms for damage detection, as well as the comparison of their classification performance with that of four other algorithms already considered reliable approaches in the literature. All proposed algorithms proved to have better classification performance than the previous ones.
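Of the four approaches, kernel principal component analysis is the easiest to sketch: eigendecompose the double-centered Gram matrix and project onto the leading components. This minimal version (RBF kernel, training-set projections only, invented names) omits the novelty-scoring step a damage detector would add on top.

```python
import numpy as np

def kernel_pca(X, n_comp=2, gamma=0.5):
    """Minimal kernel PCA: build an RBF Gram matrix, double-center it,
    eigendecompose, and project the training data onto the leading
    components (sketch of one of the four compared methods)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * d2)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_comp]
    vals, vecs = vals[idx], vecs[:, idx]
    return Kc @ (vecs / np.sqrt(np.maximum(vals, 1e-12)))
```

For damage detection, one would fit this model on baseline (undamaged) data and flag test signals whose feature-space reconstruction error is anomalously large.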
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
Bridging the gap between the KERNEL and RT-11
Hendra, R.G.
1981-06-01
A software package is proposed to allow users of the PL-11 language, and the LSI-11 KERNEL in general, to use their PL-11 programs under RT-11. Further, some general purpose extensions to the KERNEL are proposed that facilitate some number conversions and string manipulations. A Floating Point Package of procedures to allow full use of the hardware floating point capability of the LSI-11 computers is proposed. Extensions to the KERNEL that allow a user to read, write, and delete disc files in the manner of RT-11 are also proposed. A device directory listing routine is also included.
Kernel simplex growing algorithm for hyperspectral endmember extraction
NASA Astrophysics Data System (ADS)
Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao
2014-01-01
In order to effectively extract endmembers for hyperspectral imagery where linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to its kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scatters can be ignored. To avoid determining complex nonlinear mapping, a kernel function is used to extend the NSGA to kernel NSGA (KNSGA). Experimental results of simulated and real data prove that the proposed KNSGA approach outperforms SGA and NSGA.
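The kernel trick at the heart of the KNSGA idea can be illustrated on its central quantity: the volume of a simplex whose vertices live in the (possibly implicit) feature space is computable from kernel evaluations alone, via the Gram matrix of edge vectors. The function below is a sketch of that computation, not the paper's full endmember-extraction algorithm; with a linear kernel it reduces to the ordinary Euclidean simplex volume:

```python
import numpy as np
from math import factorial

def kernel_simplex_volume(vertices, k):
    """Volume of the simplex spanned by phi(v0..vp), from kernel values alone."""
    v0 = vertices[0]
    p = len(vertices) - 1
    G = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            G[i, j] = (k(vertices[i + 1], vertices[j + 1])
                       - k(vertices[i + 1], v0)
                       - k(v0, vertices[j + 1])
                       + k(v0, v0))          # Gram matrix of edge vectors
    return float(np.sqrt(max(np.linalg.det(G), 0.0)) / factorial(p))

linear = lambda a, b: float(np.dot(a, b))
tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
vol = kernel_simplex_volume(tri, linear)    # Euclidean area of the unit triangle
```

A simplex-growing search would repeatedly call this with a nonlinear kernel (e.g. Gaussian) to pick the pixel that maximizes the grown volume.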
Bilinear analysis for kernel selection and nonlinear feature extraction.
Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou
2007-09-01
This paper presents a unified criterion, the Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. The FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases. PMID:18220192
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable with quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations. PMID:25594982
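The real-valued ancestor of Quat-KLMS is easy to state: kernel LMS keeps every input as a dictionary center whose coefficient is the step size times the instantaneous prediction error. A minimal numpy sketch of that real-valued KLMS (the quaternion RKHS and HR-calculus machinery of the paper are not reproduced; the kernel width, step size, and sin target are illustrative assumptions):

```python
import numpy as np

def gauss(x, c, s=0.5):
    """Gaussian kernel between input vectors x and c (width s is an assumption)."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * s * s))

def klms_fit(X, y, eta=0.5):
    """Kernel LMS: each sample becomes a center weighted by eta * error."""
    centers, alphas = [], []
    for x, t in zip(X, y):
        pred = sum(a * gauss(x, c) for a, c in zip(alphas, centers))
        centers.append(x)
        alphas.append(eta * (t - pred))
    return centers, alphas

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (200, 1))
y = np.sin(2 * X[:, 0])                       # nonlinear mapping to learn
centers, alphas = klms_fit(X, y)
f = lambda x: sum(a * gauss(x, c) for a, c in zip(alphas, centers))
mse = np.mean([(f(x) - t) ** 2 for x, t in zip(X, y)])
```

The linearly growing dictionary is the price of the kernelization; budgeted variants prune or merge centers.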
Inheritance of Kernel Color in Corn: Explanations and Investigations.
ERIC Educational Resources Information Center
Ford, Rosemary H.
2000-01-01
Offers a new perspective on traditional problems in genetics on kernel color in corn, including information about genetic regulation, metabolic pathways, and evolution of genes. (Contains 15 references.) (ASK)
Intelligent classification methods of grain kernels using computer vision analysis
NASA Astrophysics Data System (ADS)
Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo
2011-06-01
In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.
Kernel-based Linux emulation for Plan 9.
Minnich, Ronald G.
2010-09-01
CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.
Constructing Bayesian formulations of sparse kernel learning methods.
Cawley, Gavin C; Talbot, Nicola L C
2005-01-01
We present here a simple technique that simplifies the construction of Bayesian treatments of a variety of sparse kernel learning algorithms. An incomplete Cholesky factorisation is employed to modify the dual parameter space, such that the Gaussian prior over the dual model parameters is whitened. The regularisation term then corresponds to the usual weight-decay regulariser, allowing the Bayesian analysis to proceed via the evidence framework of MacKay. There is, in addition, a useful by-product of the incomplete Cholesky factorisation algorithm: it also identifies a subset of the training data forming an approximate basis for the entire dataset in the kernel-induced feature space, resulting in a sparse model. Bayesian treatments of the kernel ridge regression (KRR) algorithm, with both constant and heteroscedastic (input dependent) variance structures, and kernel logistic regression (KLR) are provided as illustrative examples of the proposed method, which we hope will be more widely applicable. PMID:16085387
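The incomplete Cholesky factorisation that drives both the whitening and the sparsity here can be written compactly by pivoting on the largest remaining diagonal element; the pivot indices are exactly the training points selected as the approximate basis. A minimal numpy sketch (tolerance and test matrix are illustrative assumptions):

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-8):
    """Pivoted incomplete Cholesky: K ~= L @ L.T; pivots form an approximate basis."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()     # remaining diagonal (approximation error)
    L = np.zeros((n, 0))
    pivots = []
    while d.max() > tol:
        i = int(np.argmax(d))               # greedily pick the worst-approximated point
        col = (K[:, i] - L @ L[i]) / np.sqrt(d[i])
        L = np.column_stack([L, col])
        d -= col ** 2
        pivots.append(i)
    return L, pivots

# rank-3 kernel matrix: the factorisation should stop after about 3 pivots
A = np.random.default_rng(4).normal(size=(20, 3))
K = A @ A.T
L, pivots = incomplete_cholesky(K)
```

Replacing the full Gram matrix by L then whitens the dual parameters, as described above, while the short pivot list yields the sparse model.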
Hairpin Vortex Dynamics in a Kernel Experiment
NASA Astrophysics Data System (ADS)
Meng, H.; Yang, W.; Sheng, J.
1998-11-01
A surface-mounted trapezoidal tab is known to shed hairpin-like vortices and generate a pair of counter-rotating vortices in its wake. Such a flow serves as a kernel experiment for studying the dynamics of these vortex structures. Created by and scaled with the tab, the vortex structures are more orderly and larger than those in the natural wall turbulence and thus suitable for measurement by Particle Image Velocimetry (PIV) and visualization by Planar Laser Induced Fluorescence (PLIF). Time-series PIV provides insight into the evolution, self-enhancement, regeneration, and interaction of hairpin vortices, as well as interactions of the hairpins with the pressure-induced counter-rotating vortex pair (CVP). The topology of the wake structure indicates that the hairpin "heads" are formed from lifted shear-layer instability and "legs" from stretching by the CVP, which passes the energy to the hairpins. The CVP diminishes after one tab height, while the hairpins persist until 10-20 tab heights downstream. It is concluded that the lift-up of the near-surface viscous fluids is the key to hairpin vortex dynamics. Whether from the pumping action of the CVP or the ejection by an existing hairpin, the 3D lift-up of near-surface vorticity contributes to the increase of hairpin vortex strength and creation of secondary hairpins. http://www.mne.ksu.edu/~meng/labhome.html
Kernel MAD Algorithm for Relative Radiometric Normalization
NASA Astrophysics Data System (ADS)
Bai, Yang; Tang, Ping; Hu, Changmiao
2016-06-01
The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA versions of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as it describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
Kernel spectral clustering with memory effect
NASA Astrophysics Data System (ADS)
Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.
2013-05-01
Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as a valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue to achieve good performances. We successfully test the model on four toy problems and on a real world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that the kernel spectral clustering with memory effect can achieve better or equal performances.
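Stripped of the LS-SVM formulation and the temporal-smoothness term described above, the kernel spectral clustering step reduces to an eigenproblem on a kernel affinity matrix. A minimal numpy sketch on two synthetic communities, using the unnormalized graph Laplacian and a sign split of the Fiedler vector (the kernel width and community geometry are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# two synthetic "communities" of 20 nodes each, embedded in the plane
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / (2.0 * 1.0 ** 2))       # RBF kernel affinity
L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
labels = (vecs[:, 1] > 0).astype(int)    # sign of the Fiedler vector
```

In the memory-effect model the affinity at time t would additionally be encouraged to agree with the partition found at t-1.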
SCAP. Point Kernel Single or Albedo Scatter
Disney, R.K.; Bevan, S.E.
1982-08-05
SCAP solves for radiation transport in complex geometries using the single or albedo-scatter point kernel method. The program is designed to calculate the neutron or gamma-ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user-specified discrete scattering volume. The geometry is described by zones bounded by intersecting quadratic surfaces with an arbitrary maximum number of boundary surfaces per zone. The anisotropic point sources are described as point-wise energy dependent distributions of polar angles on a meridian; isotropic point sources may be specified also. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg with a buildup factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and scatter leg. Line or volumetric sources can be represented as distributions of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point.
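The point-kernel attenuation evaluated on each source or scatter leg has the familiar closed form: source strength times buildup times exponential attenuation, divided by the 4*pi*r^2 geometric spreading. A minimal sketch with a hypothetical linear buildup factor (SCAP's actual buildup-factor approximations and removal cross sections are not reproduced):

```python
import math

def point_kernel_flux(S, mu, r, buildup=lambda mur: 1.0 + mur):
    """Point-kernel flux at distance r from an isotropic point source.

    S: source strength, mu: attenuation coefficient, buildup: a hypothetical
    linear buildup factor standing in for a real gamma-ray buildup model.
    """
    mur = mu * r
    return S * buildup(mur) * math.exp(-mur) / (4.0 * math.pi * r ** 2)
```

A code like SCAP sums such contributions over discretized line or volume sources, once for the primary leg and once for each scatter leg.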
Local Kernel for Brains Classification in Schizophrenia
NASA Astrophysics Data System (ADS)
Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.
In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale-Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel, for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since up to 75% successful classification has been obtained with this technique, and the performance improves to up to 85% when the subjects are stratified by sex.
Temporal-kernel recurrent neural networks.
Sutskever, Ilya; Hinton, Geoffrey
2010-03-01
A Recurrent Neural Network (RNN) is a powerful connectionist model that can be applied to many challenging sequential problems, including problems that naturally arise in language and speech. However, RNNs are extremely hard to train on problems that have long-term dependencies, where it is necessary to remember events for many timesteps before using them to make a prediction. In this paper we consider the problem of training RNNs to predict sequences that exhibit significant long-term dependencies, focusing on a serial recall task where the RNN needs to remember a sequence of characters for a large number of steps before reconstructing it. We introduce the Temporal-Kernel Recurrent Neural Network (TKRNN), which is a variant of the RNN that can cope with long-term dependencies much more easily than a standard RNN, and show that the TKRNN develops short-term memory that successfully solves the serial recall task by representing the input string with a stable state of its hidden units. PMID:19932002
Phoneme recognition with kernel learning algorithms
NASA Astrophysics Data System (ADS)
Namarvar, Hassan H.; Berger, Theodore W.
2004-10-01
An isolated phoneme recognition system is proposed using time-frequency domain analysis and support vector machines (SVMs). The TIMIT corpus which contains a total of 6300 sentences, ten sentences spoken by each of 630 speakers from eight major dialect regions of the United States, was used in this experiment. Provided time-aligned phonetic transcription was used to extract phonemes from speech samples. A 55-output classifier system was designed corresponding to 55 classes of phonemes and trained with the kernel learning algorithms. The training dataset was extracted from clean training samples. A portion of the database, i.e., 65338 samples of training dataset, was used to train the system. The performance of the system on the training dataset was 76.4%. The whole test dataset of the TIMIT corpus was used to test the generalization of the system. All samples, i.e., 55655 samples of the test dataset, were used to test the system. The performance of the system on the test dataset was 45.3%. This approach is currently under development to extend the algorithm for continuous phoneme recognition. [Work supported in part by grants from DARPA, NASA, and ONR.]
Nonlinear stochastic system identification of skin using volterra kernels.
Chen, Yi; Hunter, Ian W
2013-04-01
Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criteria (AICc) indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy. PMID:23264003
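For discrete data, the first- and second-order Volterra kernels enter the output linearly, so they can be estimated by ordinary least squares on a regressor matrix of lagged inputs and their products, which is the essence of a fast least-squares procedure like the one mentioned above. A minimal noiseless numpy sketch with memory length 2 (the paper's Lorentz-actuator setup, filtering, and orthogonalization method are not reproduced; the system coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 2000)            # input force sequence
x0, x1 = x[1:], x[:-1]                    # current and one-step-delayed input

# hypothetical nonlinear system: h1 = [1.0, 0.5], h2[0,1] = 0.3, noiseless
y = 1.0 * x0 + 0.5 * x1 + 0.3 * x0 * x1

# second-order Volterra model, memory 2: output is linear in the kernel entries
Phi = np.column_stack([x0, x1, x0 ** 2, x0 * x1, x1 ** 2])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With noiseless data the least-squares solution recovers the kernel entries exactly; with real measurements one would compare models via VAF or AICc as in the study.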
The Weighted Super Bergman Kernels Over the Supermatrix Spaces
NASA Astrophysics Data System (ADS)
Feng, Zhiming
2015-12-01
The purpose of this paper is threefold. Firstly, using Howe duality, we obtain integral formulas of the super Schur functions with respect to the super standard Gaussian distributions. Secondly, we give explicit expressions of the super Szegö kernels and the weighted super Bergman kernels for the Cartan superdomains of type I. Thirdly, combining these results, we obtain duality relations of integrals over the unitary groups and the Cartan superdomains, and the marginal distributions of the weighted measure.
Simple randomized algorithms for online learning with kernels.
He, Wenwu; Kwok, James T
2014-12-01
In online learning with kernels, it is vital to control the size (budget) of the support set because of the curse of kernelization. In this paper, we propose two simple and effective stochastic strategies for controlling the budget. Both algorithms have an expected regret that is sublinear in the horizon. Experimental results on a number of benchmark data sets demonstrate encouraging performance in terms of both efficacy and efficiency. PMID:25108150
Resummed memory kernels in generalized system-bath master equations
Mavros, Michael G.; Van Voorhis, Troy
2014-08-07
Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
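The contrast between the two resummations can be illustrated on a generic truncated series c2*l^2 + c4*l^4 (the coefficients below are stand-ins, not the actual spin-boson memory kernels): the [1/1] Padé form has a pole where (c4/c2)*l^2 approaches 1, while the exponential form stays finite at all couplings, although both agree through fourth order:

```python
import math

def pade_resum(c2, c4, lam):
    """[1/1] Pade resummation of c2*l^2 + c4*l^4; has a pole at (c4/c2)*l^2 = 1."""
    return c2 * lam ** 2 / (1.0 - (c4 / c2) * lam ** 2)

def exp_resum(c2, c4, lam):
    """Exponential (Landau-Zener-style) resummation; finite for all couplings."""
    return c2 * lam ** 2 * math.exp((c4 / c2) * lam ** 2)

weak = (pade_resum(1.0, 0.5, 0.1), exp_resum(1.0, 0.5, 0.1))    # nearly identical
strong = (pade_resum(1.0, 0.5, 1.4), exp_resum(1.0, 0.5, 1.4))  # Pade blows up
```

This is the qualitative behavior reported above: the Padé singularity produces divergent populations at strong electronic coupling, motivating the non-divergent exponential choice.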
Sparse kernel learning with LASSO and Bayesian inference algorithm.
Gao, Junbin; Kwan, Paul W; Shi, Daming
2010-03-01
Kernelized LASSO (Least Absolute Shrinkage and Selection Operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In International conference on artificial intelligence and statistics (pp. 580-587). San Juan, Puerto Rico: MIT Press]. This paper is concerned with learning kernels under the LASSO formulation via adopting a generative Bayesian learning and inference approach. A new robust learning algorithm is proposed which produces a sparse kernel model with the capability of learning regularized parameters and kernel hyperparameters. A comparison with state-of-the-art methods for constructing sparse regression models such as the relevance vector machine (RVM) and the local regularization assisted orthogonal least squares regression (LROLS) is given. The new algorithm is also demonstrated to possess considerable computational advantages. PMID:19604671
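In the kernelized setting the LASSO dictionary is simply the columns of the Gram matrix, so a plain coordinate-descent solver with soft thresholding already yields a sparse kernel expansion. A minimal numpy sketch (the generative Bayesian inference of regularization and kernel hyperparameters proposed in the paper is not reproduced; the RBF width, lambda, and sparse target are illustrative assumptions):

```python
import numpy as np

def lasso_cd(Phi, y, lam, sweeps=200):
    """Cyclic coordinate descent for min_w 0.5*||y - Phi w||^2 + lam*||w||_1."""
    w = np.zeros(Phi.shape[1])
    col_sq = (Phi ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(Phi.shape[1]):
            r = y - Phi @ w + Phi[:, j] * w[j]      # residual with column j removed
            rho = Phi[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# kernelized dictionary: columns of an RBF Gram matrix play the role of features
x = np.linspace(-3.0, 3.0, 40)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.3 ** 2))
w_true = np.zeros(40)
w_true[8], w_true[30] = 1.0, -0.7
y = K @ w_true                                      # noiseless sparse target
w_hat = lasso_cd(K, y, lam=1e-3)
obj = 0.5 * np.sum((y - K @ w_hat) ** 2) + 1e-3 * np.abs(w_hat).sum()
```

Each coordinate update solves its one-dimensional subproblem exactly, so the objective decreases monotonically from the all-zero start.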
Enzymatic treatment of peanut kernels to reduce allergen levels.
Yu, Jianmei; Ahmedna, Mohamed; Goktepe, Ipek; Cheng, Hsiaopo; Maleki, Soheila
2011-08-01
This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernels as affected by processing conditions. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicators of process effectiveness. Enzymatic treatment effectively reduced Ara h 1 and Ara h 2 in roasted peanut kernels by up to 100% under optimal conditions. For instance, treatment of roasted peanut kernels with α-chymotrypsin and trypsin for 1-3h significantly increased the solubility of peanut protein while reducing Ara h 1 and Ara h 2 in peanut kernel extracts by 100% and 98%, respectively, based on ELISA readings. Ara h 1 and Ara h 2 levels in peanut protein extracts were inversely correlated with protein solubility in roasted peanut. Blanching of kernels enhanced the effectiveness of enzyme treatment in roasted peanuts but not in raw peanuts. The optimal concentration of enzyme was determined by response surface to be in the range of 0.1-0.2%. No consistent results were obtained for raw peanut kernels since Ara h 1 and Ara h 2 increased in peanut protein extracts under some treatment conditions and decreased in others. PMID:25214091
Integrodifference equations in patchy landscapes : I. Dispersal Kernels.
Musgrave, Jeffrey; Lutscher, Frithjof
2014-09-01
What is the effect of individual movement behavior in patchy landscapes on redistribution kernels? To answer this question, we derive a number of redistribution kernels from a random walk model with patch dependent diffusion, settling, and mortality rates. At the interface of two patch types, we integrate recent results on individual behavior at the interface. In general, these interface conditions result in the probability density function of the random walker being discontinuous at an interface. We show that the dispersal kernel can be characterized as the Green's function of a second-order differential operator. Using this characterization, we illustrate the kind of (discontinuous) dispersal kernels that result from our approach, using three scenarios. First, we assume that dispersal distance is small compared to patch size, so that a typical disperser crosses at most one interface during the dispersal phase. Then we consider a single bounded patch and generate kernels that will be useful to study the critical patch size problem in our sequel paper. Finally, we explore dispersal kernels in a periodic landscape and study the dependence of certain dispersal characteristics on model parameters. PMID:23907527
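The characterization of the dispersal kernel as the Green's function of a second-order differential operator suggests a direct numerical recipe: discretize D k'' - m k = -delta(x - x0) and solve the resulting linear system. A minimal numpy sketch on a single bounded patch with absorbing boundaries (the parameter values are illustrative assumptions), checked against the infinite-domain peak 1/(2*sqrt(m*D)):

```python
import numpy as np

# Dispersal kernel on a bounded patch [0, Lp]: D k'' - m k = -delta(x - x0),
# with hostile (absorbing) boundaries k(0) = k(Lp) = 0.
Lp, D, m, n = 10.0, 1.0, 0.5, 401
x = np.linspace(0.0, Lp, n)
h = x[1] - x[0]
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = D / h ** 2
    A[i, i] = -2.0 * D / h ** 2 - m
A[0, 0] = A[-1, -1] = 1.0          # absorbing boundary rows
rhs = np.zeros(n)
i0 = n // 2                        # release point x0 = Lp / 2
rhs[i0] = -1.0 / h                 # discrete delta source
k = np.linalg.solve(A, rhs)        # the dispersal kernel on the grid
```

Interface conditions between patch types would enter as modified matrix rows at the interface nodes, producing the discontinuous kernels described above.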
An Ensemble Approach to Building Mercer Kernels with Prior Information
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
Fast O(1) bilateral filtering using trigonometric range kernels.
Chaudhury, Kunal Narayan; Sage, Daniel; Unser, Michael
2011-12-01
It is well known that spatial averaging can be realized (in space or frequency domain) using algorithms whose complexity does not scale with the size or shape of the filter. These fast algorithms are generally referred to as constant-time or O(1) algorithms in the image-processing literature. Along with the spatial filter, the edge-preserving bilateral filter involves an additional range kernel. This is used to restrict the averaging to those neighborhood pixels whose intensities are similar or close to that of the pixel of interest. The range kernel operates by acting on the pixel intensities. This makes the averaging process nonlinear and computationally intensive, particularly when the spatial filter is large. In this paper, we show how the O(1) averaging algorithms can be leveraged for realizing the bilateral filter in constant time, by using trigonometric range kernels. This is done by generalizing the idea presented by Porikli, i.e., using polynomial kernels. The class of trigonometric kernels turns out to be sufficiently rich, allowing for the approximation of the standard Gaussian bilateral filter. The attractive feature of our approach is that, for a fixed number of terms, the quality of approximation achieved using trigonometric kernels is much superior to that obtained by Porikli using polynomials. PMID:21659022
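The reason trigonometric range kernels admit O(1) filtering is that a raised cosine cos(g*s)^N both approximates the Gaussian range kernel and expands into finitely many shiftable cos/sin terms, each amenable to constant-time spatial averaging. A minimal numpy sketch of the approximation step only (the full shiftable decomposition and filtering pipeline are not reproduced; sigma, T, and N are illustrative assumptions), with g chosen so the kernel stays nonnegative on the dynamic range:

```python
import numpy as np

sigma = 0.2     # range-kernel width (assumed)
T = 1.0         # half-width of the intensity dynamic range, s in [-T, T]
N = 20          # raised-cosine order
g = 1.0 / (sigma * np.sqrt(N))     # here g*T < pi/2, so cos(g*s) stays positive

s = np.linspace(-T, T, 401)
target = np.exp(-s ** 2 / (2.0 * sigma ** 2))   # Gaussian range kernel
approx = np.cos(g * s) ** N                     # raised-cosine surrogate
err = np.max(np.abs(approx - target))
```

Increasing N tightens the fit; the binomial expansion of cos^N then yields the fixed number of shiftable terms that makes the whole bilateral filter constant-time.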
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically. PMID:26735744
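The coefficient-space view described here can be sketched as an elastic net fit over a kernelized dictionary, solved by proximal gradient descent; as the letter notes, the dictionary need not come from a Mercer kernel. A numpy sketch under illustrative data, names, and regularization parameters (not the authors' algorithm):

```python
import numpy as np

def kenreg(D, y, lam1=0.1, lam2=0.1, iters=500):
    """Minimize 0.5||y - D c||^2 + lam1 ||c||_1 + 0.5 lam2 ||c||^2 over
    coefficients c by ISTA (proximal gradient).  D plays the role of a
    kernelized dictionary and need not be positive semidefinite."""
    c = np.zeros(D.shape[1])
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + lam2)  # 1 / Lipschitz constant
    for _ in range(iters):
        c = c - step * (D.T @ (D @ c - y) + lam2 * c)            # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam1, 0)  # soft-threshold
    return c

rng = np.random.default_rng(5)
D = rng.normal(size=(20, 10))
c_true = np.zeros(10)
c_true[[2, 7]] = [1.5, -2.0]          # sparse ground truth
y = D @ c_true
c_hat = kenreg(D, y)
```

The l1 term promotes the sparseness studied in the letter, while the l2 term keeps the problem strongly convex (stability).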
Fouss, François; Francoisse, Kevin; Yen, Luh; Pirotte, Alain; Saerens, Marco
2012-07-01
This paper presents a survey as well as an empirical comparison and evaluation of seven kernels on graphs and two related similarity matrices, that we globally refer to as "kernels on graphs" for simplicity. They are the exponential diffusion kernel, the Laplacian exponential diffusion kernel, the von Neumann diffusion kernel, the regularized Laplacian kernel, the commute-time (or resistance-distance) kernel, the random-walk-with-restart similarity matrix, and finally, a kernel first introduced in this paper (the regularized commute-time kernel) and two kernels defined in some of our previous work and further investigated in this paper (the Markov diffusion kernel and the relative-entropy diffusion matrix). The kernel-on-graphs approach is simple and intuitive. It is illustrated by applying the nine kernels to a collaborative-recommendation task, viewed as a link prediction problem, and to a semisupervised classification task, both on several databases. The methods compute proximity measures between nodes that help study the structure of the graph. Our comparisons suggest that the regularized commute-time and the Markov diffusion kernels perform best on the investigated tasks, closely followed by the regularized Laplacian kernel. PMID:22497802
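Two of the best performers in this comparison are easy to reproduce on a toy graph: the regularized Laplacian kernel (I + αL)⁻¹ and the commute-time kernel L⁺ (the Moore-Penrose pseudoinverse of the Laplacian). A numpy sketch with an illustrative 4-node path graph standing in for the paper's collaborative-recommendation graphs:

```python
import numpy as np

# Adjacency matrix of a 4-node path graph (illustrative example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                # combinatorial Laplacian

alpha = 0.5
K_reg = np.linalg.inv(np.eye(4) + alpha * L)  # regularized Laplacian kernel
K_ct = np.linalg.pinv(L)                      # commute-time kernel

# Both matrices are symmetric positive semidefinite, hence valid kernels
# whose entries serve as proximity measures between nodes.
```

On the path graph the proximities decay with graph distance, e.g. K_reg[0, 1] > K_reg[0, 3], which is exactly the behavior exploited for link prediction.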
Enzyme Activities of Starch and Sucrose Pathways and Growth of Apical and Basal Maize Kernels
Ou-Lee, Tsai-Mei; Setter, Tim Lloyd
1985-01-01
Apical kernels of maize (Zea mays L.) ears have smaller size and lower growth rates than basal kernels. To improve our understanding of this difference, the developmental patterns of starch-synthesis-pathway enzyme activities and the accumulation of sugars and starch were determined in apical- and basal-kernel endosperm of greenhouse-grown maize (cultivar Cornell 175) plants. Plants were synchronously pollinated, kernels were sampled from apical and basal ear positions throughout kernel development, and enzyme activities were measured in crude preparations. Several factors were correlated with the higher dry matter accumulation rate and larger mature kernel size of basal-kernel endosperm. During the period of cell expansion (7 to 19 days after pollination), the activity of insoluble (acid) invertase and the sucrose concentration in endosperm of basal kernels exceeded those in apical kernels. Soluble (alkaline) invertase was also high during this stage but was the same in endosperm of basal and apical kernels, while glucose concentration was higher in apical-kernel endosperm. During the period of maximal starch synthesis, the activities of sucrose synthase, ADP-Glc-pyrophosphorylase, and insoluble (granule-bound) ADP-Glc-starch synthase were higher in endosperm of basal than apical kernels. Soluble ADP-Glc-starch synthase, which was maximal during the early stage before starch accumulated, was the same in endosperm from apical and basal kernels. It appeared that differences in metabolic potential between apical and basal kernels were established at an early stage in kernel development. PMID:16664503
Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas
2012-01-01
In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include cosine kernel function in the discriminating method. The KDCV with the cosine kernels is then applied on the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L(1), L(2) distance measures; and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and the FERET database demonstrate the effectiveness of this new approach. PMID:23365559
Volcano clustering determination: Bivariate Gauss vs. Fisher kernels
NASA Astrophysics Data System (ADS)
Cañón-Tapia, Edgardo
2013-05-01
Underlying many studies of volcano clustering is the implicit assumption that vent distribution can be studied by using kernels originally devised for distribution in plane surfaces. Nevertheless, an important change in topology in the volcanic context is related to the distortion that is introduced when attempting to represent features found on the surface of a sphere that are being projected into a plane. This work explores the extent to which different topologies of the kernel used to study the spatial distribution of vents can introduce significant changes in the obtained density functions. To this end, a planar (Gauss) and a spherical (Fisher) kernel are compared. The role of the smoothing factor in these two kernels is also explored in some detail. The results indicate that the topology of the kernel is not extremely influential, and that either type of kernel can be used to characterize a plane or a spherical distribution with exactly the same detail (provided that a suitable smoothing factor is selected in each case). It is also shown that there is a limitation on the resolution of the Fisher kernel relative to the typical separation between data that can be accurately described, because data sets with separations lower than 500 km are considered as a single cluster using this method. In contrast, the Gauss kernel can provide adequate resolutions for vent distributions at a wider range of separations. In addition, this study also shows that the numerical value of the smoothing factor (or bandwidth) of both the Gauss and Fisher kernels has no unique or direct relationship with the relevant separation among data. In order to establish the relevant distance, it is necessary to take into consideration the value of the respective smoothing factor together with a level of statistical significance at which the contributions to the probability density function will be analyzed. Based on such reference level, it is possible to create a hierarchy of
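The two kernel topologies can be contrasted directly: a Fisher (von Mises-Fisher) kernel weights unit direction vectors on the sphere by exp(κ·x·dᵢ), with κ acting as the smoothing factor, while the planar Gauss kernel acts on projected 2-D coordinates with a bandwidth h. A numpy sketch with synthetic vent directions clustered near the north pole (data and smoothing values are illustrative assumptions):

```python
import numpy as np

def vmf_kde(dirs, x, kappa):
    """Spherical kernel density with Fisher kernels:
    f(x) proportional to mean_i exp(kappa * x . d_i).  The normalization
    constant is omitted since only relative densities matter when
    delimiting clusters; kappa is the smoothing factor."""
    return np.mean(np.exp(kappa * dirs @ x))

def gauss_kde(pts, x, h):
    """Planar Gaussian kernel density on (projected) 2-D coordinates,
    with bandwidth h as the smoothing factor."""
    d2 = ((pts - x) ** 2).sum(axis=1)
    return np.mean(np.exp(-d2 / (2 * h ** 2)))

rng = np.random.default_rng(4)
v = rng.normal(scale=0.1, size=(30, 3)) + np.array([0.0, 0.0, 1.0])
dirs = v / np.linalg.norm(v, axis=1, keepdims=True)  # unit vectors near the pole

pole, equator = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
```

Both estimators peak over the cluster and fall away from it; matching κ and h so that the two kernels have comparable effective widths is precisely the calibration issue the paper examines.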
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
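The two operations highlighted above, evaluating a fractional power polynomial k(x, y) = sign(⟨x, y⟩)·|⟨x, y⟩|^d (which need not yield a positive semidefinite Gram matrix) and retaining only eigenvectors with positive eigenvalues so the kernel PCA features stay real, can be sketched in a few lines of numpy (random vectors stand in for Gabor features; d and the data are illustrative):

```python
import numpy as np

def frac_poly_kernel(X, Y, d=0.8):
    """Fractional power polynomial kernel; for non-integer d the Gram
    matrix is not guaranteed to be positive semidefinite."""
    g = X @ Y.T
    return np.sign(g) * np.abs(g) ** d

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))          # stand-in for Gabor feature vectors

K = frac_poly_kernel(X, X)
n = len(K)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                        # center the Gram matrix in feature space

w, V = np.linalg.eigh(Kc)
keep = w > 1e-10                      # discard zero/negative eigenvalues
Z = V[:, keep] * np.sqrt(w[keep])     # real-valued kernel PCA coordinates
```

Dropping the non-positive part of the spectrum is the same pragmatic device that makes sigmoid kernels usable in practice despite their indefiniteness.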
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method. PMID:25119982
Thermal-to-visible face recognition using multiple kernel learning
NASA Astrophysics Data System (ADS)
Hu, Shuowen; Gurram, Prudhvi; Kwon, Heesung; Chan, Alex L.
2014-06-01
Recognizing faces acquired in the thermal spectrum from a gallery of visible face images is a desired capability for the military and homeland security, especially for nighttime surveillance and intelligence gathering. However, thermal-to-visible face recognition is a highly challenging problem, due to the large modality gap between thermal and visible imaging. In this paper, we propose a thermal-to-visible face recognition approach based on multiple kernel learning (MKL) with support vector machines (SVMs). We first subdivide the face into non-overlapping spatial regions or blocks using a method based on coalitional game theory. For comparison purposes, we also investigate uniform spatial subdivisions. Following this subdivision, histogram of oriented gradients (HOG) features are extracted from each block and utilized to compute a kernel for each region. We apply sparse multiple kernel learning (SMKL), an MKL-based approach that learns a set of sparse kernel weights, as well as the decision function of a one-vs-all SVM classifier for each of the subjects in the gallery. We also apply equal kernel weights (non-sparse) and obtain one-vs-all SVM models for the same subjects in the gallery. Only visible images of each subject are used for MKL training, while thermal images are used as probe images during testing. With the subdivision generated by game theory, we achieved a Rank-1 identification rate of 50.7% for SMKL and 93.6% for equal kernel weighting using a multimodal dataset of 65 subjects. With uniform subdivisions, we achieved a Rank-1 identification rate of 88.3% for SMKL, but 92.7% for equal kernel weighting.
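The equal-weight baseline reported above amounts to averaging one base kernel per facial block; a numpy sketch with random vectors standing in for per-block HOG features (block count, γ, and dimensions are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-ins for HOG feature vectors of 4 facial blocks over 10 face images.
blocks = [rng.normal(size=(10, 8)) for _ in range(4)]

def rbf_gram(F, gamma=0.1):
    """RBF Gram matrix over the rows of F."""
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

kernels = [rbf_gram(F) for F in blocks]

# Equal (non-sparse) kernel weighting; sparse MKL would instead learn a
# sparse weight vector over these base kernels jointly with the SVM.
w = np.full(len(kernels), 1.0 / len(kernels))
K = sum(wi * Ki for wi, Ki in zip(w, kernels))
```

K is a convex combination of positive semidefinite Gram matrices, hence itself a valid kernel that can be passed to any SVM solver accepting precomputed kernels.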
Botta, F.; Mairani, A.; Battistoni, G.; Cremonesi, M.; Di Dia, A.; Fasso, A.; Ferrari, A.; Ferrari, M.; Paganelli, G.; Pedroli, G.; Valente, M.
2011-07-15
Purpose: The calculation of patient-specific dose distribution can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. Dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often chosen for this purpose. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10⁻³ MeV) and for beta emitting isotopes commonly used for therapy (⁸⁹Sr, ⁹⁰Y, ¹³¹I, ¹⁵³Sm, ¹⁷⁷Lu, ¹⁸⁶Re, and ¹⁸⁸Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with the data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons
Botta, F; Di Dia, A; Pedroli, G; Mairani, A; Battistoni, G; Fasso, A; Ferrari, A; Ferrari, M; Paganelli, G; Valente, M
2011-06-01
The calculation of patient-specific dose distribution can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. Dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often chosen for this purpose. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10–3 MeV) and for beta emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with the data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%–97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8
The Kernel Adaptive Autoregressive-Moving-Average Algorithm.
Li, Kan; Príncipe, José C
2016-02-01
In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that the KAARMA algorithm outperforms equivalent input-space recurrent architectures using first- and second-order RNNs, demonstrating its potential as an effective learning solution for the identification and synthesis of deterministic finite automata. PMID:25935049
Boundary conditions for gas flow problems from anisotropic scattering kernels
NASA Astrophysics Data System (ADS)
To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline
2015-10-01
The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity, temperature, and discontinuities including velocity slip and temperature jump at the wall are obtained. Two scattering kernels, the Dadzie and Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.
Input space versus feature space in kernel-based methods.
Schölkopf, B; Mika, S; Burges, C C; Knirsch, P; Müller, K R; Rätsch, G; Smola, A J
1999-01-01
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the Kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data. PMID:18252603
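For the Gaussian kernel, one standard way to find an approximate preimage of a feature-space expansion Σᵢ γᵢ φ(xᵢ) is a fixed-point iteration that repeatedly re-weights the training points. A numpy sketch under illustrative data and coefficients (a scheme in the spirit of this line of work, not necessarily the paper's exact algorithm):

```python
import numpy as np

def gaussian_preimage(X, g, sigma2, iters=50):
    """Fixed-point iteration for an approximate preimage z of the
    feature-space point sum_i g_i phi(x_i) under the Gaussian kernel:
        z <- sum_i w_i x_i / sum_i w_i,
        w_i = g_i * exp(-||x_i - z||^2 / (2 sigma2)).
    For nonnegative g each iterate is a convex combination of the x_i."""
    z = X.mean(axis=0)                 # initial guess
    for _ in range(iters):
        w = g * np.exp(-((X - z) ** 2).sum(axis=1) / (2 * sigma2))
        z = w @ X / w.sum()
    return z

rng = np.random.default_rng(3)
X = rng.normal(size=(15, 2))           # training points
g = np.full(15, 1 / 15)                # illustrative expansion coefficients
z = gaussian_preimage(X, g, sigma2=1.0)
```

Replacing an SV expansion by a few such preimages is exactly the reduced-set idea mentioned above for cutting the cost of SV decision functions, and the same machinery underlies kernel PCA denoising.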
Phase discontinuity predictions using a machine-learning trained kernel.
Sawaf, Firas; Groves, Roger M
2014-08-20
Phase unwrapping is one of the key steps of interferogram analysis, and its accuracy relies primarily on the correct identification of phase discontinuities. This can be especially challenging for inherently noisy phase fields, such as those produced through shearography and other speckle-based interferometry techniques. We showed in a recent work how a relatively small 10×10 pixel kernel was trained, through machine learning methods, for predicting the locations of phase discontinuities within noisy wrapped phase maps. We describe here how this kernel can be applied in a sliding-window fashion, such that each pixel undergoes 100 phase-discontinuity examinations--one test for each of its possible positions relative to its neighbors within the kernel's extent. We explore how the resulting predictions can be accumulated and aggregated through a voting system, and demonstrate that the reliability of this method outperforms processing the image by segmenting it into more conventional 10×10 nonoverlapping tiles. When used in this way, our 10×10 pixel kernel is large enough for effective processing of full-field interferograms, thus avoiding the need for the substantially more formidable computational resources that training a kernel of a significantly larger size would otherwise have required. PMID:25321117
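The accumulate-and-vote scheme can be sketched independently of the trained kernel itself. Below, a hypothetical stand-in predictor flags horizontal phase jumps larger than π, a toy 3×3 window replaces the 10×10 kernel, and each pixel's discontinuity flag is the majority vote over every window position that examined it:

```python
import numpy as np

def kernel_predict(patch):
    """Hypothetical stand-in for the machine-learned kernel: flag a
    discontinuity between horizontal neighbours when the wrapped-phase
    jump exceeds pi."""
    return np.abs(np.diff(patch, axis=1)) > np.pi

phase = np.zeros((6, 8))
phase[:, 4:] = 2 * np.pi - 0.5        # a wrap-like jump between columns 3 and 4

k = 3                                  # toy window size (the paper uses 10x10)
votes = np.zeros((6, 7))               # one cell per horizontal neighbour pair
counts = np.zeros((6, 7))
for i in range(phase.shape[0] - k + 1):
    for j in range(phase.shape[1] - k + 1):
        pred = kernel_predict(phase[i:i + k, j:j + k])
        votes[i:i + k, j:j + k - 1] += pred
        counts[i:i + k, j:j + k - 1] += 1

edges = votes / np.maximum(counts, 1) > 0.5   # aggregate by majority vote
```

Each neighbour pair is examined once per window position that covers it, so the final decision averages out isolated mispredictions, which is the source of the reliability gain over non-overlapping tiles.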
Multiple kernel sparse representations for supervised and unsupervised learning.
Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas
2014-07-01
In complex visual recognition tasks, it is typical to adopt multiple descriptors, which describe different aspects of the images, for obtaining an improved recognition performance. Descriptors that have diverse forms can be fused into a unified feature space in a principled manner using kernel methods. Sparse models that generalize well to the test data can be learned in the unified kernel space, and appropriate constraints can be incorporated for application in supervised and unsupervised learning. In this paper, we propose to perform sparse coding and dictionary learning in the multiple kernel space, where the weights of the ensemble kernel are tuned based on graph-embedding principles such that class discrimination is maximized. In our proposed algorithm, dictionaries are inferred using multiple levels of 1D subspace clustering in the kernel space, and the sparse codes are obtained using a simple levelwise pursuit scheme. Empirical results for object recognition and image clustering show that our algorithm outperforms existing sparse coding based approaches, and compares favorably to other state-of-the-art methods. PMID:24833593
Bivariate discrete beta Kernel graduation of mortality data.
Mazza, Angelo; Punzo, Antonio
2015-07-01
Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, as for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidths selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors. PMID:25084764
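The univariate building block being generalized here can be sketched directly: at each evaluation age the weights follow a beta density re-parameterized to peak at that age, so the kernel support automatically respects the bounded age range. A numpy sketch of this construction (a simplified reading of the discrete beta smoother, with an illustrative bandwidth h, not the authors' code):

```python
import numpy as np
from math import lgamma

def beta_pdf(y, a, b):
    """Beta(a, b) density on (0, 1), evaluated via log-gamma for stability."""
    log_norm = lgamma(a) + lgamma(b) - lgamma(a + b)
    return np.exp((a - 1) * np.log(y) + (b - 1) * np.log(1 - y) - log_norm)

def discrete_beta_smooth(ages, rates, h):
    """Discrete beta kernel graduation (simplified sketch): the weight of
    age j when smoothing at age x follows a beta density whose mode sits
    at x, so no weight ever spills outside the observed age range."""
    x01 = (ages - ages.min() + 0.5) / (np.ptp(ages) + 1)  # map ages into (0, 1)
    out = np.empty(len(rates))
    for i, m in enumerate(x01):
        a, b = m / h + 1, (1 - m) / h + 1   # Beta(a, b) has mode m
        w = beta_pdf(x01, a, b)
        out[i] = (w * rates).sum() / w.sum()
    return out
```

Because the weights are normalized, a constant mortality schedule is reproduced exactly; the bivariate extension of the paper applies a product of two such kernels over age and calendar year.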
[Utilizable value of wild economic plant resource--acorn kernel].
He, R; Wang, K; Wang, Y; Xiong, T
2000-04-01
Peking white breeding hens were selected, and the true metabolizable energy method (TME) was used to evaluate the available nutritive value of acorn kernel, with maize and rice used as controls. The results showed that the contents of gross energy (GE), apparent metabolizable energy (AME), true metabolizable energy (TME) and crude protein (CP) in the acorn kernel were 16.53 MJ/kg, 11.13 MJ/kg, 11.66 MJ/kg and 10.63%, respectively. The apparent availability and true availability of crude protein were 45.55% and 49.83%. The gross contents of the 17 amino acids and of the essential plus semiessential amino acids were 9.23% and 4.84%, respectively. The true availability of amino acids and the content of truly available amino acids were 60.85% and 6.09%. The contents of tannin and hydrocyanic acid in acorn kernel were 4.55% and 0.98%. The available nutritive value of acorn kernel is similar to or slightly lower than that of maize, but slightly higher than that of rice. Acorn kernel is a wild economic plant resource worth exploiting and utilizing, although it contains relatively high levels of tannin and hydrocyanic acid. PMID:11767593
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities into the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
Kernel Manifold Alignment for Domain Adaptation.
Tuia, Devis; Camps-Valls, Gustau
2016-01-01
The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensors characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of a same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across-domains and data synthesis. To the authors' knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational
Kernel Manifold Alignment for Domain Adaptation
Tuia, Devis; Camps-Valls, Gustau
2016-01-01
The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensors characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of a same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across-domains and data synthesis. To the authors’ knowledge this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational
Improved Online Support Vector Machines Spam Filtering Using String Kernels
NASA Astrophysics Data System (ADS)
Amayri, Ola; Bouguila, Nizar
A major bottleneck in electronic communications is the enormous dissemination of spam emails. Developing suitable filters that can adequately capture those emails and achieve high performance has become a main concern. Support vector machines (SVMs) have made a large contribution to the development of spam email filtering. Based on SVMs, the crucial problems in email classification are the feature mapping of input emails and the choice of kernel. In this paper, we present a thorough investigation of several distance-based kernels, propose the use of string kernels, and demonstrate their efficiency in blocking spam emails. We detail feature mapping variants in text classification (TC) that yield improved performance for standard SVMs in the filtering task. Furthermore, to cope with real-time scenarios, we propose an online active framework for spam filtering.
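As an illustration of the string-kernel idea the abstract relies on, the sketch below implements the simple p-spectrum kernel (the inner product of k-mer count vectors); it is a generic stand-in rather than the authors' exact kernel, and the example strings are invented:

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """p-spectrum string kernel: inner product of the k-mer count
    vectors of the two strings (larger = more shared substrings)."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[g] * ct[g] for g in cs)

# Two spam-like messages share many 3-grams; a ham message shares few.
spam = "win a free prize now win free money"
spam2 = "claim your free prize money now"
ham = "meeting agenda attached for tomorrow"
```

Such a kernel can be plugged directly into any SVM implementation that accepts a precomputed Gram matrix.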
Recurrent kernel machines: computing with infinite echo state networks.
Hermans, Michiel; Schrauwen, Benjamin
2012-01-01
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks. PMID:21851278
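A minimal sketch of the reservoir-as-feature-map view described above: a random recurrent reservoir produces spatiotemporal features, and only a linear (ridge-regression) readout is trained. All sizes and constants are illustrative, not taken from the letter:

```python
import numpy as np

def esn_features(inputs, n_res=100, rho=0.9, seed=0):
    """Run a random echo state reservoir over a 1-D input sequence and
    return the matrix of reservoir states (one row per time step)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in * u + W @ x)
        states.append(x)
    return np.array(states)

# Train only the linear readout: predict u(t+1) from the state at time t.
u = np.sin(0.3 * np.arange(300))
S = esn_features(u[:-1])
target = u[1:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ target)
mse = np.mean((S @ W_out - target) ** 2)
```

The recursive-kernel view of the paper corresponds to letting `n_res` go to infinity, so that the Gram matrix of reservoir states is computed analytically instead of explicitly.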
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and a manifold assumption. But such an assumption may be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562
Kernel weighted joint collaborative representation for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Du, Qian; Li, Wei
2015-05-01
The collaborative representation classifier (CRC) has been applied to hyperspectral image classification; it uses all the atoms in a dictionary to represent a testing pixel for label assignment. However, some atoms that are very dissimilar to the testing pixel should not participate in the representation, or their contribution should be very small. The regularized version of CRC imposes a strong penalty to prevent dissimilar atoms from having large representation coefficients. To utilize spatial information, the weighted sum of local spatial neighbors is considered as a joint spatial-spectral feature, which is then used in regularized CRC-based classification. This paper proposes its kernel version to further improve classification accuracy, which can be higher than that of the traditional support vector machine with composite kernel and the kernel version of the sparse representation classifier.
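The representation step the abstract builds on can be sketched as follows. This is the linear, regularized variant with a distance-based Tikhonov penalty on dissimilar atoms (the kernel version would replace the inner products with kernel evaluations), and all data here are synthetic:

```python
import numpy as np

def crc_classify(X, labels, y, lam=0.01):
    """Regularized collaborative representation classification.
    X: (d, n) dictionary of training atoms; labels: (n,); y: (d,) test
    sample. Atoms far from y are penalized by a diagonal Tikhonov
    matrix, so dissimilar atoms get small representation coefficients."""
    dists = np.linalg.norm(X - y[:, None], axis=0)   # atom-to-sample distances
    Gamma = np.diag(dists)                           # strong penalty on dissimilar atoms
    alpha = np.linalg.solve(X.T @ X + lam * Gamma.T @ Gamma, X.T @ y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - X[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)         # smallest class residual wins

# Two well-separated synthetic classes; the test point sits near class 1.
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(0, 0.1, (5, 20)), rng.normal(3, 0.1, (5, 20))])
labels = np.array([0] * 20 + [1] * 20)
pred = crc_classify(X, labels, np.full(5, 3.0))
```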
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
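The geometric idea above, independent smoothing lengths along and perpendicular to a preferred axis, can be illustrated with a Gaussian-shaped stand-in for the usual compact-support SPH kernel (actual SPH codes use spline kernels; this sketch only demonstrates the spheroidal shape and its normalization):

```python
import numpy as np

def spheroidal_kernel(r, h_perp, h_par, axis=np.array([0.0, 0.0, 1.0])):
    """Gaussian-shaped smoothing kernel with independent smoothing
    lengths along the preferred `axis` (h_par) and in the perpendicular
    plane (h_perp); normalized to integrate to one over 3-D space."""
    r = np.atleast_2d(r)
    r_par = r @ axis                          # component along the preferred axis
    r_perp2 = (r * r).sum(axis=1) - r_par ** 2
    q2 = r_perp2 / h_perp ** 2 + (r_par / h_par) ** 2
    return np.exp(-q2) / (np.pi ** 1.5 * h_perp ** 2 * h_par)

# Numerical check: the kernel should integrate to ~1 even when squashed.
xs = np.linspace(-6.0, 6.0, 49)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
dV = (xs[1] - xs[0]) ** 3
total = spheroidal_kernel(pts, 1.0, 2.0).sum() * dV
```

Letting `h_par` evolve independently of `h_perp`, as in the paper, keeps resolution along the deformation axis without refining the perpendicular plane.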
Kernel approximate Bayesian computation in population genetic inferences.
Nakagome, Shigeki; Fukumizu, Kenji; Mano, Shuhei
2013-12-01
Approximate Bayesian computation (ABC) is a likelihood-free approach for Bayesian inference based on a rejection algorithm that applies a tolerance of dissimilarity between summary statistics from observed and simulated data. Although several improvements to the algorithm have been proposed, none avoids the following two sources of approximation: 1) lack of sufficient statistics: sampling is not from the true posterior density given data but from an approximate posterior density given summary statistics; and 2) non-zero tolerance: sampling from the posterior density given summary statistics is achieved only in the limit of zero tolerance. The first source of approximation can be improved by adding a summary statistic, but an increase in the number of summary statistics could introduce additional variance caused by the low acceptance rate. Consequently, many researchers have attempted to develop techniques to choose informative summary statistics. The present study evaluated the utility of a kernel-based ABC method [Fukumizu, Song and Gretton (2010): "Kernel Bayes' rule: Bayesian inference with positive definite kernels," arXiv:1009.5736; Fukumizu, Song and Gretton (2011): "Kernel Bayes' rule," NIPS 24: 1549-1557] for complex problems that demand many summary statistics. Specifically, kernel ABC was applied to population genetic inference. We demonstrate that, in contrast to conventional ABC, kernel ABC can incorporate a large number of summary statistics while maintaining high performance of the inference. PMID:24150124
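For context, the plain rejection-ABC baseline that kernel ABC improves on looks roughly like this: draw parameters from the prior, simulate data, and keep draws whose summary statistic falls within a tolerance of the observed one. A toy example estimating a normal mean; all numbers are illustrative:

```python
import numpy as np

def rejection_abc(obs_stat, prior_draws, simulate, summarize, tol):
    """Basic rejection ABC: keep prior draws whose simulated summary
    statistic lies within `tol` of the observed one."""
    accepted = []
    for theta in prior_draws:
        if abs(summarize(simulate(theta)) - obs_stat) < tol:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)              # "observed" data, true mean 2
obs = data.mean()                                  # summary statistic (sufficient here)
prior = rng.uniform(-5, 5, size=20000)             # flat prior on the mean
post = rejection_abc(obs, prior,
                     simulate=lambda mu: rng.normal(mu, 1.0, size=100),
                     summarize=np.mean, tol=0.1)
```

With one sufficient statistic this works well; the paper's point is that adding many non-sufficient statistics collapses the acceptance rate, which is what the kernel embedding avoids.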
Broadband Waveform Sensitivity Kernels for Large-Scale Seismic Tomography
NASA Astrophysics Data System (ADS)
Nissen-Meyer, T.; Stähler, S. C.; van Driel, M.; Hosseini, K.; Auer, L.; Sigloch, K.
2015-12-01
Seismic sensitivity kernels, i.e. the basis for mapping misfit functionals to structural parameters in seismic inversions, have received much attention in recent years. Their computation has been conducted via ray-theory based approaches (Dahlen et al., 2000) or fully numerical solutions based on the adjoint-state formulation (e.g. Tromp et al., 2005). The core problem is the exorbitant computational cost due to the large number of source-receiver pairs, each of which requires solutions to the forward problem. This is exacerbated in the high-frequency regime where numerical solutions become prohibitively expensive. We present a methodology to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (abstract ID# 77891, www.axisem.info), and thus on spherically symmetric models. As a consequence of this method's numerical efficiency even in high-frequency regimes, kernels can be computed in a time- and frequency-dependent manner, thus providing the full generic mapping from perturbed waveform to perturbed structure. Such waveform kernels can then be used for a variety of misfit functions and structural parameters and refiltered into bandpasses without recomputing any wavefields. A core component of the kernel method presented here is the mapping from numerical wavefields to inversion meshes. This is achieved by a Monte-Carlo approach, allowing for convergent and controllable accuracy on arbitrarily shaped tetrahedral and hexahedral meshes. We test and validate this accuracy by comparing to reference traveltimes, show the projection onto various locally adaptive inversion meshes and discuss computational efficiency for ongoing tomographic applications in the range of millions of observed body-wave data between periods of 2-30 s.
Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image
NASA Astrophysics Data System (ADS)
Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.
2010-04-01
Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time-consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to the corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on a sample's spectral signature. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field inoculated corn kernels were used in the study. Contaminated and control kernels under long wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had a slightly better performance, with accuracy equal to 87% or 88% when 20 ppb or 100 ppb was used as the classification threshold, respectively.
A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
Source identity and kernel functions for Inozemtsev-type systems
Langmann, Edwin; Takemura, Kouichi
2012-08-15
The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BC_N trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.
FUV Continuum in Flare Kernels Observed by IRIS
NASA Astrophysics Data System (ADS)
Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna
2016-05-01
Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross sectional area are provided by cotemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.
An information theoretic approach of designing sparse kernel adaptive filters.
Liu, Weifeng; Park, Il; Principe, José C
2009-12-01
This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented. PMID:19923047
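The sparsification idea above can be sketched with a kernel LMS filter whose dictionary grows only on sufficiently novel samples. Note this sketch uses a crude distance-based novelty test as a stand-in for the paper's information-theoretic surprise measure; redundant samples update an existing center's weight instead of being learned as new centers:

```python
import numpy as np

def gauss(x, c, sigma=0.5):
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

def sparse_klms(X, d, eta=0.5, novelty=0.3):
    """Kernel LMS with a simple novelty criterion: only samples far from
    every stored center enlarge the dictionary; redundant samples merely
    adjust the nearest existing weight, so memory stays bounded."""
    centers = [X[0]]
    weights = [eta * d[0]]
    errors = []
    for x, target in zip(X[1:], d[1:]):
        y = sum(w * gauss(x, c) for w, c in zip(weights, centers))
        e = target - y
        errors.append(e)
        dists = [np.linalg.norm(x - c) for c in centers]
        if min(dists) > novelty:
            centers.append(x)                             # informative: grow the filter
            weights.append(eta * e)
        else:
            weights[int(np.argmin(dists))] += eta * e     # redundant: reuse a center

    return centers, errors

# Learn y = sin(2x) online; the dictionary stays far smaller than the data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(400, 1))
d = np.sin(2 * X[:, 0])
centers, errors = sparse_klms(X, d)
```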
Iris Image Blur Detection with Multiple Kernel Learning
NASA Astrophysics Data System (ADS)
Pan, Lili; Xie, Mei; Mao, Ling
In this letter, we analyze the influence of motion and out-of-focus blur on both frequency spectrum and cepstrum of an iris image. Based on their characteristics, we define two new discriminative blur features represented by Energy Spectral Density Distribution (ESDD) and Singular Cepstrum Histogram (SCH). To merge the two features for blur detection, a merging kernel which is a linear combination of two kernels is proposed when employing Support Vector Machine. Extensive experiments demonstrate the validity of our method by showing the improved blur detection performance on both synthetic and real datasets.
Source identity and kernel functions for Inozemtsev-type systems
NASA Astrophysics Data System (ADS)
Langmann, Edwin; Takemura, Kouichi
2012-08-01
The Inozemtsev Hamiltonian is an elliptic generalization of the differential operator defining the BCN trigonometric quantum Calogero-Sutherland model, and its eigenvalue equation is a natural many-variable generalization of the Heun differential equation. We present kernel functions for Inozemtsev Hamiltonians and Chalykh-Feigin-Veselov-Sergeev-type deformations thereof. Our main result is a solution of a heat-type equation for a generalized Inozemtsev Hamiltonian which is the source of all these kernel functions. Applications are given, including a derivation of simple exact eigenfunctions and eigenvalues of the Inozemtsev Hamiltonian.
Smith, Kevin W; Cain, Fred W; Talbot, Geoff
2004-08-25
Palm kernel stearin and hydrogenated palm kernel stearin can be used to prepare compound chocolate bars or coatings. The objective of this study was to characterize the chemical composition, polymorphism, and melting behavior of the bloom that develops on bars of compound chocolate prepared using these fats. Bars were stored for 1 year at 15, 20, or 25 degrees C. At 15 and 20 degrees C the bloom was enriched in cocoa butter triacylglycerols, with respect to the main fat phase, whereas at 25 degrees C the enrichment was with palm kernel triacylglycerols. The bloom consisted principally of solid fat and was sharper melting than was the fat in the chocolate. Polymorphic transitions from the initial beta' phase to the beta phase accompanied the formation of bloom at all temperatures. PMID:15315397
Logarithmic radiative effect of water vapor and spectral kernels
NASA Astrophysics Data System (ADS)
Bani Shahabadi, Maziar; Huang, Yi
2014-05-01
Radiative kernels have become a useful tool in climate analysis. A set of spectral kernels is calculated using a moderate resolution atmospheric transmission code MODTRAN and implemented in diagnosing spectrally decomposed global outgoing longwave radiation (OLR) changes. It is found that the effect of water vapor on the OLR is in proportion to the logarithm of its concentration. Spectral analysis discloses that this logarithmic dependency mainly results from water vapor absorption bands (0-560 cm-1 and 1250-1850 cm-1), while in the window region (800-1250 cm-1), the effect scales more linearly to its concentration. The logarithmic and linear effects in the respective spectral regions are validated by the calculations of a benchmark line-by-line radiative transfer model LBLRTM. The analysis based on LBLRTM-calculated second-order kernels shows that the nonlinear (logarithmic) effect results from the damping of the OLR sensitivity to layer-wise water vapor perturbation by both intra- and inter-layer effects. Given that different scaling approaches suit different spectral regions, it is advisable to apply the kernels in a hybrid manner in diagnosing the water vapor radiative effect. Applying logarithmic scaling in the water vapor absorption bands where absorption is strong and linear scaling in the window region where absorption is weak can generally constrain the error to within 10% of the overall OLR change for up to eightfold water vapor perturbations.
PERI - Auto-tuning Memory Intensive Kernels for Multicore
Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
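The search-based tuning strategy can be caricatured in a few lines: benchmark every candidate implementation of a kernel on the target machine and keep the fastest. Here the kernel is a 1-D explicit heat-equation stencil with two hand-written variants; the real system generates and times far more variants per platform:

```python
import timeit
import numpy as np

def stencil_naive(u):
    """3-point 1-D heat-equation stencil, explicit Python loop."""
    out = u.copy()
    for i in range(1, len(u) - 1):
        out[i] = u[i] + 0.1 * (u[i - 1] - 2 * u[i] + u[i + 1])
    return out

def stencil_vectorized(u):
    """Same stencil expressed as array slices."""
    out = u.copy()
    out[1:-1] = u[1:-1] + 0.1 * (u[:-2] - 2 * u[1:-1] + u[2:])
    return out

# Toy auto-tuner: time each candidate on this machine, keep the fastest.
u = np.random.default_rng(0).random(10000)
candidates = {"naive": stencil_naive, "vectorized": stencil_vectorized}
timings = {name: timeit.timeit(lambda f=f: f(u), number=20)
           for name, f in candidates.items()}
best = min(timings, key=timings.get)
```

The paper's code generators play the role of `candidates` here, emitting platform-specific variants (blocking, unrolling, SIMD) whose relative speed cannot be predicted without measurement.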
Multiobjective optimization for model selection in kernel methods in regression.
You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M
2014-10-01
Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared with methods in the state of the art. PMID:25291740
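The bias-versus-variance tradeoff described above can be made concrete with a small kernel ridge regression experiment. This sketch uses a plain held-out split to pick the kernel bandwidth rather than the paper's smoothing-kernel criterion, and all constants are illustrative:

```python
import numpy as np

def krr_fit_predict(Xtr, ytr, Xte, sigma, lam=1e-3):
    """Kernel ridge regression with a Gaussian kernel of width `sigma`."""
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    alpha = np.linalg.solve(K(Xtr, Xtr) + lam * np.eye(len(Xtr)), ytr)
    return K(Xte, Xtr) @ alpha

# A tiny bandwidth overfits (high variance), a huge one underfits
# (high bias); a held-out split picks the compromise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, 200)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]
val_err = {s: np.mean((krr_fit_predict(Xtr, ytr, Xva, s) - yva) ** 2)
           for s in (0.01, 0.3, 50.0)}
best = min(val_err, key=val_err.get)
```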
Wheat kernel black point and fumonisin contamination by Fusarium proliferatum
Technology Transfer Automated Retrieval System (TEKTRAN)
Fusarium proliferatum is a major cause of maize ear rot and fumonisin contamination and also can cause wheat kernel black point disease. The primary objective of this study was to characterize nine F. proliferatum strains from wheat from Nepal for ability to cause black point and fumonisin contamin...
Enzymatic treatment of peanut kernels to reduce allergen levels
Technology Transfer Automated Retrieval System (TEKTRAN)
This study investigated the use of enzymatic treatment to reduce peanut allergens in peanut kernel by processing conditions, such as, pretreatment with heat and proteolysis at different enzyme concentrations and treatment times. Two major peanut allergens, Ara h 1 and Ara h 2, were used as indicator...
Notes on a storage manager for the Clouds kernel
NASA Technical Reports Server (NTRS)
Pitts, David V.; Spafford, Eugene H.
1986-01-01
The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
Microwave moisture meter for in-shell peanut kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
A microwave moisture meter built with off-the-shelf components was developed, calibrated and tested in the laboratory and in the field for nondestructive and instantaneous in-shell peanut kernel moisture content determination from dielectric measurements on unshelled peanut pod samples. The meter ...
Matrix kernels for MEG and EEG source localization and imaging
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1994-12-31
The most widely used model for electroencephalography (EEG) and magnetoencephalography (MEG) assumes a quasi-static approximation of Maxwell's equations and a piecewise homogeneous conductor model. Both models contain an incremental field element that linearly relates an incremental source element (current dipole) to the field or voltage at a distant point. The explicit form of the field element is dependent on the head modeling assumptions and sensor configuration. Proper characterization of this incremental element is crucial to the inverse problem. The field element can be partitioned into the product of a vector dependent on sensor characteristics and a matrix kernel dependent only on head modeling assumptions. We present here the matrix kernels for the general boundary element model (BEM) and for MEG spherical models. We show how these kernels are easily interchanged in a linear algebraic framework that includes sensor specifics such as orientation and gradiometer configuration. We then describe how this kernel is easily applied to "gain" or "transfer" matrices used in multiple dipole and source imaging models.
Classification of oat and groat kernels using NIR hyperspectral imaging.
Serranti, Silvia; Cesare, Daniela; Marini, Federico; Bonifazi, Giuseppe
2013-01-15
An innovative procedure to classify oat and groat kernels based on coupling hyperspectral imaging (HSI) in the near infrared (NIR) range (1006-1650 nm) and chemometrics was designed, developed and validated. According to market requirements, the amount of groat, that is, the hull-less oat kernels, is one of the most important quality characteristics of oats. Hyperspectral images of oat and groat samples were acquired by using a NIR spectral camera (Specim, Finland), and the resulting data hypercubes were analyzed applying Principal Component Analysis (PCA) for exploratory purposes and Partial Least Squares-Discriminant Analysis (PLS-DA) to build the classification models to discriminate the two kernel typologies. Results showed that it is possible to accurately recognize oat and groat single kernels by HSI (prediction accuracy was almost 100%). The study also demonstrated that good classification results could be obtained using only three wavelengths (1132, 1195 and 1608 nm), selected by means of a bootstrap-VIP procedure, making it possible to speed up the classification processing for industrial applications. The developed objective and non-destructive method based on HSI can be utilized for quality control purposes and/or for the definition of innovative sorting logics of oat grains. PMID:23200388
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND STANDARDS FOR CERTAIN...
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2009-02-20
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
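For reference, the theorem in question can be stated in its standard form; the bound at the end is how it yields the simple error estimates the abstract mentions:

```latex
\textbf{Peano kernel theorem.} Let $L$ be a linear functional on
$C^{n+1}[a,b]$ with $L[p] = 0$ for every polynomial $p$ of degree at
most $n$. Then for all $f \in C^{n+1}[a,b]$,
\[
  L[f] = \int_a^b f^{(n+1)}(t)\, K(t)\, \mathrm{d}t,
  \qquad
  K(t) = \frac{1}{n!}\, L_x\!\left[ (x - t)_+^{\,n} \right],
\]
where $(x-t)_+ = \max(x-t,\,0)$ and $L_x$ acts on the variable $x$.
Taking $L$ to be a filter's error functional, the estimate
$\lvert L[f] \rvert \le \lVert K \rVert_1 \, \lVert f^{(n+1)} \rVert_\infty$
bounds the error introduced by the filtering.
```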
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2008-03-01
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
Stereotype Measurement and the "Kernel of Truth" Hypothesis.
ERIC Educational Resources Information Center
Gordon, Randall A.
1989-01-01
Describes a stereotype measurement suitable for classroom demonstration. Illustrates C. McCauley and C. L. Stitt's diagnostic ratio measure and examines the validity of the "kernel of truth" hypothesis. Uses this as a starting point for class discussion. Reports results and gives suggestions for discussion of related concepts. (Author/NL)
Popping the Kernel Modeling the States of Matter
ERIC Educational Resources Information Center
Hitt, Austin; White, Orvil; Hanson, Debbie
2005-01-01
This article discusses how to use popcorn to engage students in model building and to teach them about the nature of matter. Popping kernels is a simple and effective method to connect the concepts of heat, motion, and volume with the different phases of matter. Before proceeding with the activity the class should discuss the nature of scientific…
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 7 2013-01-01 2013-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD ADMINISTRATION (FEDERAL GRAIN INSPECTION SERVICE), DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND STANDARDS FOR CERTAIN...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
7 CFR 981.61 - Redetermination of kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... SERVICE (MARKETING AGREEMENTS AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS... weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds... for almonds on which the obligation has been assumed by another handler. The redetermined...
Music emotion detection using hierarchical sparse kernel machines.
Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching
2014-01-01
For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether a music clip conveys the emotion of happiness. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is applied to reduce the dimension. The acoustical features are used to generate the first-level decision vector, whose elements are the significance values of the emotions. The significance values of eight main emotional classes are used in this paper. To calculate the significance value of an emotion, we construct a 2-class SVM for it, with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated, and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector. In the second level of the hierarchical system, we construct a single 2-class relevance vector machine (RVM) with happiness as the target side and the other emotions as the background side. The first-level decision vector is used as the feature, with a conventional radial basis function kernel. The happiness verification threshold is set on the probability value. In the experiments, the detection error tradeoff (DET) curve shows that the proposed system performs well at verifying whether a music clip conveys happiness. PMID:24729748
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
Online multiple kernel similarity learning for visual search.
Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin
2014-03-01
Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity to measure the similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions, which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets; the encouraging results show that OMKS significantly outperforms the state-of-the-art techniques. PMID:24457509
PERI - auto-tuning memory-intensive kernels for multicore
NASA Astrophysics Data System (ADS)
Williams, S.; Datta, K.; Carter, J.; Oliker, L.; Shalf, J.; Yelick, K.; Bailey, D.
2008-07-01
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimization, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix-vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve better than a 4× improvement over the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
High-Speed Tracking with Kernelized Correlation Filters.
Henriques, João F; Caseiro, Rui; Martins, Pedro; Batista, Jorge
2015-03-01
The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies: any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF) that, unlike other kernel algorithms, has exactly the same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call the dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a benchmark of 50 videos, despite running at hundreds of frames per second and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source. PMID:26353263
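The circulant-matrix observation at the heart of this abstract can be checked in a few lines. The sketch below is illustrative only (1-D signal, linear kernel, hypothetical function names; not the authors' released tracker): it solves ridge regression over all cyclic shifts of a signal both explicitly and via the DFT closed form, and confirms the two agree.

```python
import numpy as np

def ridge_dual_fourier(x, y, lam=1e-2):
    """Dual ridge solution over all cyclic shifts of x, computed in the
    Fourier domain: alpha_hat = y_hat / (|x_hat|^2 + lam)."""
    x_hat = np.fft.fft(x)
    return np.fft.fft(y) / (x_hat * np.conj(x_hat) + lam)

def ridge_dual_naive(x, y, lam=1e-2):
    """Same problem solved explicitly: stack every cyclic shift of x into
    a (circulant) data matrix and invert the regularized Gram matrix,
    alpha = (X X^T + lam I)^{-1} y."""
    n = len(x)
    X = np.stack([np.roll(x, i) for i in range(n)])
    return np.linalg.solve(X @ X.T + lam * np.eye(n), y)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = rng.standard_normal(64)
fast = np.real(np.fft.ifft(ridge_dual_fourier(x, y)))  # O(n log n)
slow = ridge_dual_naive(x, y)                           # O(n^3)
print(np.allclose(fast, slow))
```

Because the circulant Gram matrix is diagonalized by the DFT, the Fourier route replaces a matrix inversion with an element-wise division, which is the source of the orders-of-magnitude savings the abstract describes.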
Multiobjective Optimization for Model Selection in Kernel Methods in Regression
You, Di; Benitez-Quiroz, C. Fabian; Martinez, Aleix M.
2016-01-01
Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-vs-variance trade-off. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a trade-off between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared to methods in the state of the art. PMID:25291740
Prediction: Design of experiments based on approximating covariance kernels
Fedorov, V.
1998-11-01
Using Mercer's expansion to approximate the covariance kernel of an observed random function, the authors transform the prediction problem into a regression problem with random parameters. The latter is considered in the framework of convex design theory. They first formulate results in terms of the regression model with random parameters, then present the same results in terms of the original problem.
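The transformation described here can be made concrete with a small numerical sketch (illustrative only: a squared-exponential covariance kernel on a discretized interval, not the report's setting). Truncating the Mercer eigen-expansion K(s,t) = Σᵢ λᵢ φᵢ(s) φᵢ(t) replaces the random function with a finite regression model in random coefficients, and a handful of terms suffices for smooth kernels because the eigenvalues decay fast.

```python
import numpy as np

# Squared-exponential covariance kernel, discretized on 100 points of [0, 1]
t = np.linspace(0, 1, 100)
K = np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.1**2))

# Discrete Mercer expansion: eigenvalues/eigenfunctions of the kernel matrix
lam, phi = np.linalg.eigh(K)
lam, phi = lam[::-1], phi[:, ::-1]  # sort descending

# Keep only the leading m terms of the expansion
m = 30
K_trunc = (phi[:, :m] * lam[:m]) @ phi[:, :m].T
print(np.abs(K - K_trunc).max())  # tiny: 30 of 100 terms reproduce K
```

The coefficients of the retained eigenfunctions play the role of the random regression parameters in the design problem.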
Chebyshev moment problems: Maximum entropy and kernel polynomial methods
Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.
1995-12-31
Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians: the kernel polynomial method (KPM) and the maximum entropy method (MEM). They are applicable to physical properties involving large numbers of eigenstates, such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations, and forces for tight-binding molecular dynamics. This paper emphasizes efficient algorithms.
Acetolactate Synthase Activity in Developing Maize (Zea mays L.) Kernels
Muhitch, Michael J.
1988-01-01
Acetolactate synthase (EC 4.1.3.18) activity was examined in maize (Zea mays L.) endosperm and embryos as a function of kernel development. When assayed using unpurified homogenates, embryo acetolactate synthase activity appeared less sensitive to inhibition by leucine + valine and by the imidazolinone herbicide imazapyr than endosperm acetolactate synthase activity. Evidence is presented to show that pyruvate decarboxylase contributes to apparent acetolactate synthase activity in crude embryo extracts and a modification of the acetolactate synthase assay is proposed to correct for the presence of pyruvate decarboxylase in unpurified plant homogenates. Endosperm acetolactate synthase activity increased rapidly during early kernel development, reaching a maximum of 3 micromoles acetoin per hour per endosperm at 25 days after pollination. In contrast, embryo activity was low in young kernels and steadily increased throughout development to a maximum activity of 0.24 micromole per hour per embryo by 45 days after pollination. The sensitivity of both endosperm and embryo acetolactate synthase activities to feedback inhibition by leucine + valine did not change during kernel development. The results are compared to those found for other enzymes of nitrogen metabolism and discussed with respect to the potential roles of the embryo and endosperm in providing amino acids for storage protein synthesis. PMID:16665871
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125... STANDARDS) United States Standards for Grades of Shelled Almonds Definitions § 51.2125 Split or broken... pass through a round opening 8/64 of an inch (3.2 mm) in diameter....
Metabolite identification through multiple kernel learning on fragmentation trees
Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho
2014-01-01
Motivation: Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Results: Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. Contact: huibin.shen@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931979
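As a generic aside on the kernel-combination step above (a minimal sketch with hypothetical RBF kernels and fixed weights, not the paper's fragmentation-tree kernels or learned weights): a nonnegative combination of valid kernels is itself a valid kernel, which is what makes multiple kernel learning well-posed.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def combined_kernel(X, gammas, weights):
    """Weighted sum of base kernels; nonnegative weights keep it PSD."""
    return sum(w * rbf_kernel(X, g) for w, g in zip(weights, gammas))

X = np.random.default_rng(2).standard_normal((20, 5))
K = combined_kernel(X, gammas=[0.1, 1.0], weights=[0.7, 0.3])
# Positive semidefiniteness is preserved under nonnegative combination
print(np.linalg.eigvalsh(K).min() >= -1e-8)
```

Multiple kernel learning methods like the one in the abstract additionally optimize the weights against the supervised objective rather than fixing them.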
Classification of Microarray Data Using Kernel Fuzzy Inference System
Kumar Rath, Santanu
2014-01-01
The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to obscure useful information. The selected features are generally those with high relevance to the classes and high significance, and they determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia), using the t-test as the feature selection method. Kernel functions are used to map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are considered to analyze the efficiency of the classification model. From the proposed approach, it is apparent that the K-FIS model obtains results similar to those of the SVM model. This is an indication that the proposed approach relies on the kernel function.
Low Cost Real-Time Sorting of in Shell Pistachio Nuts from Kernels
Technology Transfer Automated Retrieval System (TEKTRAN)
A high-speed sorter for separating pistachio nuts with shells (in shell) from those without (kernels) is reported. Testing indicates 95% accuracy in removing kernels from the in-shell stream, with no false positive results out of 1000 kernels tested. Testing with 1000 each of in shell, shell halves, and ker...
Technology Transfer Automated Retrieval System (TEKTRAN)
An automated NIR system was used over a two-month storage period to detect single wheat kernels that contained live or dead internal rice weevils at various stages of growth. Correct classification of sound kernels and kernels containing live pupae, large larvae, medium-sized larvae, and small larv...
Size distributions of different orders of kernels within the oat spikelet
Technology Transfer Automated Retrieval System (TEKTRAN)
Oat kernel size uniformity is of interest to the oat milling industry because of the importance of kernel size in the dehulling process. Previous studies have indicated that oat kernel size distributions fit a bimodal better than a normal distribution. Here we have demonstrated by spikelet dissectio...
Technology Transfer Automated Retrieval System (TEKTRAN)
The Perten Single Kernel Characterization System (SKCS) is the current reference method to determine single wheat kernel texture. However, the SKCS calibration method is based on bulk samples, and there is no method to determine the measurement error on single kernel hardness. The objective of thi...
Automated Single-Kernel Sorting to Select for Quality Traits in Wheat Breeding Lines
Technology Transfer Automated Retrieval System (TEKTRAN)
An automated single kernel near-infrared system was used to select kernels to enhance the end-use quality of hard red wheat breeder samples. Twenty breeding populations and advanced lines were sorted for hardness index, protein content, and kernel color. To determine if the phenotypic sorting was b...
Genome Mapping of Kernel Characteristics in Hard Red Spring Wheat Breeding Lines
Technology Transfer Automated Retrieval System (TEKTRAN)
Kernel characteristics, particularly kernel weight, kernel size, and grain protein content, are important components of grain yield and quality in wheat. Development of high performing wheat cultivars, with high grain yield and quality, is a major focus in wheat breeding programs worldwide. Here, we...
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
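The cell-center versus cell-integration distinction in this abstract is easy to illustrate in one dimension. In the sketch below (illustrative, on a unit grid; not the study's model), sampling a narrow Gaussian density at cell centers fails to conserve probability mass, while integrating the density over each cell via the CDF conserves it by construction.

```python
import numpy as np
from math import erf, sqrt, pi

def cell_center(sigma, half_width=10):
    """Gaussian dispersal kernel sampled at integer cell centers."""
    xs = np.arange(-half_width, half_width + 1)
    return np.exp(-xs**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

def cell_integrated(sigma, half_width=10):
    """Gaussian dispersal kernel integrated over each unit cell
    (difference of CDF values at the cell edges)."""
    xs = np.arange(-half_width, half_width + 1)
    cdf = lambda x: 0.5 * (1 + erf(x / (sigma * sqrt(2))))
    return np.array([cdf(x + 0.5) - cdf(x - 0.5) for x in xs])

# For a kernel much narrower than the cell size, the cell-center kernel
# badly over-weights the central cell; the integrated one sums to ~1.
sigma = 0.2
print(cell_center(sigma).sum())      # far from 1
print(cell_integrated(sigma).sum())  # ~1
```

Repeated convolution compounds any mass error, which is why the study finds small kernels need explicit correction.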
Single-kernel NIR analysis for evaluating wheat samples for fusarium head blight resistance
Technology Transfer Automated Retrieval System (TEKTRAN)
A method to estimate bulk deoxynivalenol (DON) content of wheat grain samples using single kernel DON levels estimated by a single kernel near infrared (SKNIR) system combined with single kernel weights is described. This method estimated bulk DON levels in 90% of 160 grain samples within 6.7 ppm DO...
FFBSKAT: fast family-based sequence kernel association test.
Svishcheva, Gulnara R; Belonogova, Nadezhda M; Axenovich, Tatiana I
2014-01-01
The kernel machine-based regression is an efficient approach to region-based association analysis aimed at the identification of rare genetic variants. However, this method is computationally complex. The running time of kernel-based association analysis becomes especially long for samples with genetic (sub)structures, thus increasing the need to develop new and effective methods, algorithms, and software packages. We have developed a new R package called fast family-based sequence kernel association test (FFBSKAT) for the analysis of quantitative traits in samples of related individuals. This software implements a score-based variance component test to assess the association of a given set of single nucleotide polymorphisms with a continuous phenotype. We compared the performance of our software with that of two existing software packages for family-based sequence kernel association testing, namely ASKAT and famSKAT, using the Genetic Analysis Workshop 17 family sample. Results demonstrate that FFBSKAT is several times faster than the other available programs. In addition, the calculations of the three compared packages were similarly accurate. With respect to the available analysis modes, we combined the advantages of both ASKAT and famSKAT and added new options to empower FFBSKAT users. The FFBSKAT package is fast, user-friendly, and provides an easy-to-use method to perform whole-exome kernel machine-based regression association analysis of quantitative traits in samples of related individuals. The FFBSKAT package, along with its manual, is available for free download at http://mga.bionet.nsc.ru/soft/FFBSKAT/. PMID:24905468
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is better convergence and a (near-)optimality guarantee, by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is automatic feature selection using ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and a convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.
Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.
2014-01-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
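The weighted eigenfunction expansion described above has a transparent special case on the unit circle, where the Laplace-Beltrami eigenfunctions are simply Fourier modes. The sketch below is an illustrative stand-in for the mandible surfaces in the paper: it attenuates each eigen-coefficient by exp(-λt), which is exactly heat diffusion for time t, solved analytically rather than by numerical time-stepping.

```python
import numpy as np

def heat_kernel_smooth(f, t):
    """Smooth periodic samples f by damping each eigen-coefficient with
    exp(-lambda_k * t); on the circle, lambda_k = k^2 for frequency k."""
    n = len(f)
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer frequencies
    return np.real(np.fft.ifft(np.exp(-(k**2) * t) * np.fft.fft(f)))

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
rng = np.random.default_rng(1)
noisy = np.sin(theta) + 0.3 * rng.standard_normal(256)
smooth = heat_kernel_smooth(noisy, t=0.05)
# The sin component (lambda = 1) is attenuated only by exp(-0.05), while
# high-frequency noise (lambda = k^2) is suppressed much more strongly.
```

Because the diffusion is expressed in closed form per eigenfunction, there is no discretization error in time, which is the numerical advantage the abstract claims over PDE-based smoothing.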
Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.
Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I
2016-03-01
The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor. PMID:27021084
Hypercapnia slows down proliferation and apoptosis of human bone marrow promyeloblasts.
Hamad, Mouna; Irhimeh, Mohammad R; Abbas, Ali
2016-09-01
Stem cells are being applied in increasingly diverse fields of research and therapy; as such, growing and culturing them in scalable quantities would be a major advantage for all concerned. A gas mixture containing 5 % CO2 is typical for the in vitro culturing of cells. The effect of varying the CO2 concentration on promyeloblast KG-1a cells was investigated in this paper. KG-1a cells are characterized by high expression of the CD34 surface antigen, which is an important clinical surface marker for human hematopoietic stem cell (HSC) transplantation. KG-1a cells were cultured at three CO2 concentrations (1, 5 and 15 %). Cells were batch-cultured and analyzed daily for viability, size, morphology, proliferation, and apoptosis using flow cytometry. No considerable differences were noted in KG-1a cell morphological properties at any of the three CO2 levels, as the cells retained their myeloblast appearance. The calculated population doubling time increased with increasing CO2 concentration. Enhanced cell proliferation was seen in cells cultured in hypercapnic conditions, in contrast to significantly decreased proliferation in hypocapnic populations. Flow cytometry analysis revealed that apoptosis was significantly (p = 0.0032) delayed in hypercapnic cultures, in parallel with accelerated apoptosis in hypocapnic ones. These results, which to the best of our knowledge are novel, suggest that elevated levels of CO2 are favorable for the enhanced proliferation of bone marrow (BM) progenitor cells such as HSCs. PMID:27194031
Treatment of pregnant rats with oleoyl-estrone slows down pup fat deposition after weaning
García-Peláez, Beatriz; Vilà, Ruth; Remesar, Xavier
2008-01-01
Background: In rats, oral oleoyl-estrone (OE) decreases food intake and body lipid content. The aim of this study was to determine whether OE treatment affects the energy metabolism of pregnant rats and, eventually, of their pups; i.e. changes in normal growth patterns and the onset of obesity after weaning. Methods: Pregnant Wistar rats were treated with daily intragastric gavages of OE in 0.2 ml sunflower oil from days 11 to 21 of pregnancy (i.e. 10 nmol oleoyl-estrone/g/day). Control animals received only the vehicle. Plasma metabolites and hormones were determined, together with variations in the cellularity of adipose tissue. Results: Treatment decreased food intake and lowered weight gain during late pregnancy, mainly because of reduced adipose tissue accumulation in different sites. OE-treated pregnant rats' metabolic pattern after delivery was similar to that of controls. Neonates from OE-treated rats weighed the same as those from controls. They also maintained the same growth rate up to weaning, but pups from OE-treated rats slowed their growth rate afterwards, despite only limited differences in metabolite concentrations. Conclusion: The OE influences on pup growth can be partially buffered by maternal lipid mobilization during the second half of pregnancy. This maternal metabolic "imprinting" may condition the eventual accumulation of adipose tissue after weaning, and its effects can affect the regulation of body weight up to adulthood. PMID:18570654
What's the Rush?: Slowing down Our "Hurried" Approach to Infant and Toddler Development
ERIC Educational Resources Information Center
Bonnett, Tina
2012-01-01
What high expectations people place on their infants and toddlers who are just beginning to understand this great big world and all of its complexities! In an attempt to ensure that growth and learning occur, the fundamental needs of infants and toddlers are often pushed aside as people rush the young child to achieve the next developmental…
Slow Down! The Importance of Repetition, Planning, and Recycling in Language Teaching.
ERIC Educational Resources Information Center
Brown, Steven
This paper argues that there is converging evidence for the pedagogical value of planning, repeating, and recycling activities in the language classroom. The paper is divided into three parts. Part one reviews field research done on this topic in Britain and finds some support for the proposition that planning fosters more complex language use and…
Fried, Eiko I.
2015-01-01
Major depression (MD) is a highly heterogeneous diagnostic category. Diverse symptoms such as sad mood, anhedonia, and fatigue are routinely added to an unweighted sum-score, and cutoffs are used to distinguish between depressed participants and healthy controls. Researchers then investigate outcome variables like MD risk factors, biomarkers, and treatment response in such samples. These practices presuppose that (1) depression is a discrete condition, and that (2) symptoms are interchangeable indicators of this latent disorder. Here I review these two assumptions, elucidate their historical roots, show how deeply engrained they are in psychological and psychiatric research, and document that they contrast with evidence. Depression is not a consistent syndrome with clearly demarcated boundaries, and depression symptoms are not interchangeable indicators of an underlying disorder. Current research practices lump individuals with very different problems into one category, which has contributed to the remarkably slow progress in key research domains such as the development of efficacious antidepressants or the identification of biomarkers for depression. The recently proposed network framework offers an alternative to the problematic assumptions. MD is not understood as a distinct condition, but as a heterogeneous symptom cluster that substantially overlaps with other syndromes such as anxiety disorders. MD is not framed as an underlying disease with a number of equivalent indicators, but as a network of symptoms that have direct causal influence on each other: insomnia can cause fatigue, which then triggers concentration and psychomotor problems. This approach offers new opportunities for constructing an empirically based classification system and has broad implications for future research. PMID:25852621
Bezerra, Márcio Almeida; da Silva Nery, Cybelle; de Castro Silveira, Patrícia Verçoza; de Mesquita, Gabriel Nunes; de Gomes Figueiredo, Thainá; Teixeira, Magno Felipe Holanda Barboza Inácio; de Moraes, Silvia Regina Arruda
2016-01-01
Background: The complications caused by diabetes increase fragility in the muscle-tendon system, resulting in degeneration and easier rupture. To counter this, therapies that increase the body's glucose metabolism, such as physical activity, have been used after the confirmation of diabetes. We evaluated the biomechanical behavior of the calcaneal tendon and the metabolic parameters in rats induced to experimental diabetes and submitted to pre- and post-induction exercise. Methods: 54 male Wistar rats were randomly divided into four groups: Control Group (CG), Swimming Group (SG), Diabetic Group (DG), and Diabetic Swimming Group (DSG). The trained groups were submitted to a swimming exercise protocol, while the unexercised groups remained restricted to their cages. Metabolic and biomechanical parameters were assessed. Results: The clinical parameters of the DSG showed no change due to the exercise protocol. The tendon analysis of the DSG showed increased values for the elastic modulus (p<0.01) and maximum tension (p<0.001) and a lower value for the transverse area (p<0.001) when compared to the SG; however, it showed no difference when compared to the DG. Conclusion: The homogeneous values presented by the tendons of the DG and DSG show that physical exercise applied pre- and post-induction was not enough to promote a protective effect against the tendinopathy process, but it did prevent the progression of degeneration. PMID:27331036
Relativistic and Slowing Down: The Flow in the Hotspots of Powerful Radio Galaxies and Quasars
NASA Technical Reports Server (NTRS)
Kazanas, D.
2003-01-01
The 'hotspots' of powerful radio galaxies (the compact, high-brightness regions where the jet flow collides with the intergalactic medium (IGM)) have been imaged in radio, optical, and recently X-ray frequencies. We propose a scheme that unifies their, at first sight, disparate broadband (radio to X-ray) spectral properties. This scheme involves a relativistic flow upstream of the hotspot that decelerates to the sub-relativistic speed of its inferred advance through the IGM and that is viewed at different angles to its direction of motion, as suggested by two independent orientation estimators (the presence or absence of broad emission lines in the optical spectra and the core-to-extended radio luminosity). Besides providing an account of how hotspot spectral properties vary with jet orientation, this scheme also suggests that the large-scale jets remain relativistic all the way to the hotspots.
Exercise: the lifelong supplement for healthy ageing and slowing down the onset of frailty.
Viña, Jose; Rodriguez-Mañas, Leocadio; Salvador-Pascual, Andrea; Tarazona-Santabalbina, Francisco José; Gomez-Cabrera, Mari Carmen
2016-04-15
The beneficial effects of exercise have been well recognized for over half a century. Dr Jeremy Morris's pioneering studies in the fifties showed a striking difference in cardiovascular disease between the drivers and conductors on the double-decker buses in London. These studies sparked off a vast amount of research on the effects of exercise on health, and the general consensus is that exercise contributes to improved outcomes and treatment for several diseases including osteoporosis, diabetes, depression and atherosclerosis. Evidence of the beneficial effects of exercise is reviewed here. One way of highlighting the impact of exercise on disease is to consider it from the perspective of good practice. However, the intensity, duration, frequency (dosage) and contraindications of the exercise should be taken into consideration to individually tailor the exercise programme. An important case of the beneficial effect of exercise is that of ageing. Ageing is characterized by a loss of homeostatic mechanisms, on many occasions leading to the development of frailty; hence frailty is one of the major geriatric syndromes, and exercise is very useful to mitigate, or at least delay, it. Since exercise is so effective in reducing frailty, we would like to propose that exercise be considered as a supplement to other treatments. People all over the world have been taking nutritional supplements in the hopes of improving their health. We would like to think of exercise as a physiological supplement not only for treating diseases, but also for improving healthy ageing. PMID:26872560
Moving Clocks Do Not Always Appear to Slow Down: Don't Neglect the Doppler Effect
NASA Astrophysics Data System (ADS)
Wang, Frank
2013-03-01
In popular accounts of the time dilation effect in Einstein's special relativity, one often encounters the statement that moving clocks run slow. For instance, in the acclaimed PBS program "NOVA," Professor Brian Greene says, "[I]f I walk toward that guy… he'll perceive my watch ticking slower." Also in his earlier piece for The New York Times, he writes that "if from your perspective someone is moving, you will see time elapsing slower for him than it does for you. Everything he does … will appear in slow motion." We need to be careful with this kind of description, because sometimes authors neglect to consider the finite time of signal exchange between the two individuals when they observe each other. This article points out that when two individuals approach each other, everything will actually appear in fast motion, a manifestation of the relativistic Doppler effect.
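The relativistic Doppler factor behind this point is standard textbook physics; a minimal sketch (the function name is my own) of the observed-to-source frequency ratio for a head-on approach, which exceeds 1 and so makes the other clock appear sped up:

```python
import math

def doppler_factor(beta):
    """Relativistic Doppler frequency ratio f_obs / f_src for a source
    approaching head-on at speed beta = v/c, with 0 <= beta < 1.
    A value > 1 means the approaching clock appears to run fast."""
    if not 0 <= beta < 1:
        raise ValueError("beta must satisfy 0 <= beta < 1")
    return math.sqrt((1 + beta) / (1 - beta))
```

At beta = 0.6 the factor is 2, so an approaching watch appears to tick twice as fast, even though time dilation alone would predict a slowdown by the Lorentz factor 1.25.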
Moving Clocks Do Not Always Appear to Slow down: Don't Neglect the Doppler Effect
ERIC Educational Resources Information Center
Wang, Frank
2013-01-01
In popular accounts of the time dilation effect in Einstein's special relativity, one often encounters the statement that moving clocks run slow. For instance, in the acclaimed PBS program "NOVA," Professor Brian Greene says, "[I]f I walk toward that guy... he'll perceive my watch ticking slower." Also in his earlier piece for The New York Times,…
He, Monica M
2016-04-01
The relationship between short-term macroeconomic growth and temporary mortality increases remains strongest for motor vehicle (MV) crashes. In this paper, I investigate the mechanisms that explain falling MV fatality rates during the recent Great Recession. Using U.S. state-level panel data from 2003 to 2013, I first estimate the relationship between unemployment and MV fatality rate and then decompose it into risk and exposure factors for different types of MV crashes. Results reveal a significant 2.9 percent decrease in MV fatality rate for each percentage point increase in unemployment rate. This relationship is almost entirely explained by changes in the risk of driving rather than exposure to the amount of driving and is particularly robust for crashes involving large commercial trucks, multiple vehicles, and speeding cars. These findings provide evidence suggesting traffic patterns directly related to economic activity lead to higher risk of MV fatality rates when the economy improves. PMID:26967529
Cluster Concept Dynamics Leading to Creative Ideas Without Critical Slowing Down
NASA Astrophysics Data System (ADS)
Goldenberg, Y.; Solomon, S.; Mazursky, D.
We present algorithmic procedures for systematically generating ideas and solutions to problems that are perceived as creative. Our method consists of identifying and characterizing the most creative ideas among a vast pool. We show that they fall within a few large classes (archetypes) which share the same conceptual structure (Macros). We prescribe well-defined abstract algorithms which can act deterministically on arbitrary given objects. Each algorithm generates ideas with the same conceptual structure characteristic of one of the Macros. The resulting new ideas turn out to be perceived as highly creative. We support our claims by experiments in which senior advertising professionals graded advertisement ideas produced by our method according to their creativity. The marks (grade 4.6±0.2 on a 1-7 scale) obtained by laymen applying our algorithms (after being instructed for only two hours) were significantly better than the marks obtained by advertising professionals using standard methods (grade 3.6±0.2). The method, which is currently taught in the USA, Europe, and Israel and used by advertising agencies in Britain and Israel, has received formal international recognition.
Strongly confined fluids: Diverging time scales and slowing down of equilibration
NASA Astrophysics Data System (ADS)
Schilling, Rolf
2016-06-01
The Newtonian dynamics of strongly confined fluids exhibits a rich behavior. Its confined and unconfined degrees of freedom decouple for confinement length L → 0. In that case, and for a slit geometry, the intermediate scattering functions S_μν(q,t) simplify, resulting for (μ,ν) ≠ (0,0) in a Knudsen-gas-like behavior of the confined degrees of freedom, and otherwise in S_∥(q,t), describing the structural relaxation of the unconfined ones. Taking the coupling into account, we prove that the energy fluctuations relax exponentially. For smooth potentials the relaxation times diverge as L^-3 and L^-4, respectively, for the confined and unconfined degrees of freedom. The strength of the L^-3 divergence can be calculated analytically. It depends on the pair potential and the two-dimensional pair distribution function. Experimental setups are suggested to test these predictions.
Spin relaxation in antiferromagnetic Fe–Fe dimers slowed down by anisotropic DyIII ions
Klöwer, Frederik; Lan, Yanhua; Clérac, Rodolphe; Wolny, Juliusz A; Schünemann, Volker; Anson, Christopher E
2013-01-01
By using Mössbauer spectroscopy in combination with susceptibility measurements it was possible to identify the supertransferred hyperfine field through the oxygen bridges between DyIII and FeIII in a {Fe4Dy2} coordination cluster. The presence of the dysprosium ions provides enough magnetic anisotropy to “block” the hyperfine field that is experienced by the iron nuclei. This has resulted in magnetic spectra with internal hyperfine fields of the iron nuclei of about 23 T. The set of data permitted us to conclude that the direction of the anisotropy in lanthanide nanosize molecular clusters is associated with the single ion and crystal field contributions, and that 57Fe Mössbauer spectroscopy may be informative with regard to the anisotropy not only of the studied isotope, but also of elements interacting with this isotope. PMID:24367750
Small Crowders Slow Down Kinesin-1 Stepping by Hindering Motor Domain Diffusion
NASA Astrophysics Data System (ADS)
Sozański, Krzysztof; Ruhnow, Felix; Wiśniewska, Agnieszka; Tabaka, Marcin; Diez, Stefan; Hołyst, Robert
2015-11-01
The dimeric motor protein kinesin-1 moves processively along microtubules against forces of up to 7 pN. However, the mechanism of force generation is still debated. Here, we point to the crucial importance of diffusion of the tethered motor domain for the stepping of kinesin-1: small crowders stop the motor at a viscosity of 5 mPa·s, corresponding to a hydrodynamic load in the sub-fN (~10^-4 pN) range, whereas large crowders have no impact even at viscosities above 100 mPa·s. This indicates that the scale-dependent, effective viscosity experienced by the tethered motor domain is a key factor determining kinesin's functionality. Our results emphasize the role of diffusion in the kinesin-1 stepping mechanism and the general importance of the viscosity scaling paradigm in nanomechanics.
Critical Slowing Down in Time-to-Extinction: An Example of Critical Phenomena in Ecology
NASA Technical Reports Server (NTRS)
Gandhi, Amar; Levin, Simon; Orszag, Steven
1998-01-01
We study a model for two competing species that explicitly accounts for effects due to discreteness, stochasticity and spatial extension of populations. The two species are equally preferred by the environment and do better when surrounded by others of the same species. We observe that the final outcome depends on the initial densities (uniformly distributed in space) of the two species. The observed phase transition is a continuous one and key macroscopic quantities like the correlation length of clusters and the time-to-extinction diverge at a critical point. Away from the critical point, the dynamics can be described by a mean-field approximation. Close to the critical point, however, there is a crossover to power-law behavior because of the gross mismatch between the largest and smallest scales in the system. We have developed a theory based on surface effects, which is in good agreement with the observed behavior. The coarse-grained reaction-diffusion system obtained from the mean-field dynamics agrees well with the particle system.
Carvalheiro, Luísa Gigante; Kunin, William E; Keil, Petr; Aguirre-Gutiérrez, Jesus; Ellis, Willem Nicolaas; Fox, Richard; Groom, Quentin; Hennekens, Stephan; Landuyt, Wouter; Maes, Dirk; Meutter, Frank; Michez, Denis; Rasmont, Pierre; Ode, Baudewijn; Potts, Simon Geoffrey; Reemer, Menno; Roberts, Stuart Paul Masson; Schaminée, Joop; WallisDeVries, Michiel F; Biesmeijer, Jacobus Christiaan
2013-01-01
Concern about biodiversity loss has led to increased public investment in conservation. Although there is a widespread perception that such initiatives have been unsuccessful, there are few quantitative tests of this perception. Here, we evaluate whether rates of biodiversity change have altered in recent decades in three European countries (Great Britain, the Netherlands and Belgium) for plants and flower-visiting insects. We compared four 20-year periods, contrasting periods of rapid land-use intensification and natural habitat loss (1930–1990) with a period of increased conservation investment (post-1990). We found that extensive species richness loss and biotic homogenisation occurred before 1990, whereas these negative trends became substantially less accentuated during recent decades, being partially reversed for certain taxa (e.g. bees in Great Britain and the Netherlands). These results highlight the potential to maintain or even restore current species assemblages (which despite past extinctions are still of great conservation value), at least in regions where large-scale land-use intensification and natural habitat loss has ceased. PMID:23692632
Criticality in the slowed-down boiling crisis at zero gravity
NASA Astrophysics Data System (ADS)
Charignon, T.; Lloveras, P.; Chatain, D.; Truskinovsky, L.; Vives, E.; Beysens, D.; Nikolayev, V. S.
2015-05-01
Boiling crisis is a transition between nucleate and film boiling. It occurs at a threshold value of the heat flux from the heater called CHF (critical heat flux). Usually, boiling crisis studies are hindered by the high CHF and short transition duration (below 1 ms). Here we report on experiments in hydrogen near its liquid-vapor critical point, in which the CHF is low and the dynamics slow enough to be resolved. As under such conditions the surface tension is very small, the experiments are carried out in the reduced gravity to preserve the conventional bubble geometry. Weightlessness is created artificially in two-phase hydrogen by compensating gravity with magnetic forces. We were able to reveal the fractal structure of the contour of the percolating cluster of the dry areas at the heater that precedes the boiling crisis. We provide a direct statistical analysis of dry spot areas that confirms the boiling crisis at zero gravity as a scale-free phenomenon. It was observed that, in agreement with theoretical predictions, saturated boiling CHF tends to zero (within the precision of our thermal control system) in zero gravity, which suggests that the boiling crisis may be observed at any heat flux provided the experiment lasts long enough.
Criticality in the slowed-down boiling crisis at zero gravity.
Charignon, T; Lloveras, P; Chatain, D; Truskinovsky, L; Vives, E; Beysens, D; Nikolayev, V S
2015-05-01
Boiling crisis is a transition between nucleate and film boiling. It occurs at a threshold value of the heat flux from the heater called CHF (critical heat flux). Usually, boiling crisis studies are hindered by the high CHF and short transition duration (below 1 ms). Here we report on experiments in hydrogen near its liquid-vapor critical point, in which the CHF is low and the dynamics slow enough to be resolved. As under such conditions the surface tension is very small, the experiments are carried out in the reduced gravity to preserve the conventional bubble geometry. Weightlessness is created artificially in two-phase hydrogen by compensating gravity with magnetic forces. We were able to reveal the fractal structure of the contour of the percolating cluster of the dry areas at the heater that precedes the boiling crisis. We provide a direct statistical analysis of dry spot areas that confirms the boiling crisis at zero gravity as a scale-free phenomenon. It was observed that, in agreement with theoretical predictions, saturated boiling CHF tends to zero (within the precision of our thermal control system) in zero gravity, which suggests that the boiling crisis may be observed at any heat flux provided the experiment lasts long enough. PMID:26066249
A compensatory algorithm for the slow-down effect on constant-time-separation approaches
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
1991-01-01
In seeking methods to improve airport capacity, the question arose as to whether an electronic display could provide information which would enable the pilot to be responsible for self-separation under instrument conditions to allow for the practical implementation of reduced separation, multiple glide path approaches. A time-based, closed-loop algorithm was developed and simulator-validated for in-trail (one aircraft behind the other) approach and landing. The algorithm was designed to reduce the effects of approach speed reduction prior to landing for the trailing aircraft as well as the dispersion of the interarrival times. The operational task for the validation was an instrument approach to landing while following a single lead aircraft on the same approach path. The desired landing separation was 60 seconds for these approaches. An open-loop algorithm, previously developed, was used as a basis for comparison. The results showed that relative to the open-loop algorithm, the closed-loop one could theoretically provide a 6% increase in runway throughput. Also, the use of the closed-loop algorithm did not affect the path tracking performance, and pilot comments indicated that the guidance from the closed-loop algorithm would be acceptable from an operational standpoint. From these results, it is concluded that by using a time-based, closed-loop spacing algorithm, precise interarrival time intervals may be achievable with operationally acceptable pilot workload.
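The abstract does not give the algorithm's equations; purely as a hypothetical illustration of a time-based, closed-loop spacing law (the gains, speed limits, and names below are assumptions, not the validated NASA algorithm), a proportional speed command on the time-separation error might look like:

```python
def commanded_speed(nominal_kts, desired_sep_s, actual_sep_s,
                    gain=0.5, v_min=110.0, v_max=160.0):
    """Hypothetical proportional speed command for in-trail spacing.
    A positive error (gap larger than desired) commands a faster
    approach speed; a negative error commands slowing, with the
    result clamped to safe approach-speed limits (knots)."""
    error_s = actual_sep_s - desired_sep_s
    return max(v_min, min(v_max, nominal_kts + gain * error_s))
```

Closing the loop on the measured separation, rather than flying a fixed open-loop profile, is what damps the dispersion of interarrival times that the abstract attributes to the closed-loop scheme.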
REAC technology and hyaluron synthase 2, an interesting network to slow down stem cell senescence.
Maioli, Margherita; Rinaldi, Salvatore; Pigliaru, Gianfranco; Santaniello, Sara; Basoli, Valentina; Castagna, Alessandro; Fontani, Vania; Ventura, Carlo
2016-01-01
Hyaluronic acid (HA) plays a fundamental role in cell polarity and hydrodynamic processes, affording significant modulation of proliferation, migration, morphogenesis and senescence, with deep implications for the ability of stem cells to execute their differentiating plans. Radio Electric Asymmetric Conveyer (REAC) technology aims to optimize ion fluxes at the molecular level, and thereby the molecular mechanisms driving cellular asymmetry and polarization. Here, we show that treatment with 4-methylumbelliferone (4-MU), a potent repressor of type 2 HA synthase and endogenous HA synthesis, dramatically antagonized the ability of REAC to recover the gene and protein expression of Bmi1, Oct4, Sox2, and Nanog in ADhMSCs that had been made senescent by prolonged culture up to the 30th passage. In senescent ADhMSCs, 4-MU also counteracted the REAC ability to rescue the gene expression of TERT, and the associated resumption of telomerase activity. Hence, the anti-senescence action of REAC is largely dependent upon the availability of endogenous HA synthesis. Endogenous HA and HA-binding proteins with REAC technology create an interesting network that acts on the modulation of cell polarity and the intracellular environment. This suggests that REAC technology is effective at an intracellular niche level of stem cell regulation. PMID:27339908
Being "Lazy" and Slowing Down: Toward Decolonizing Time, Our Body, and Pedagogy
ERIC Educational Resources Information Center
Shahjahan, Riyad A.
2015-01-01
In recent years, scholars have critiqued norms of neoliberal higher education (HE) by calling for embodied and anti-oppressive teaching and learning. Implicit in these accounts, but lacking elaboration, is a concern with reformulating the notion of "time" and temporalities of academic life. Employing a coloniality perspective, this…
Analysis of a Scenario for Chaotic Quantal Slowing Down of Inspiration
2013-01-01
On exposure to opiates, preparations from rat brain stems have been observed to continue to produce regular expiratory signals, but to fail to produce some inspiratory signals. The numbers of expirations between two successive inspirations form an apparently random sequence. Here, we propose an explanation based on the qualitative theory of dynamical systems. A relatively simple scenario for the dynamics of interaction between the generators of expiratory and inspiratory signals produces pseudo-random behaviour of the type observed. PMID:24040967
Slowing down after a mild traumatic brain injury: a strategy to improve cognitive task performance?
Ozen, Lana J; Fernandes, Myra A
2012-01-01
Long-term persistent attention and memory difficulties following a mild traumatic brain injury (TBI) often go undetected on standard neuropsychological tests, despite complaints by mild TBI individuals. We conducted a visual Repetition Detection working memory task using digits, in which we manipulated task difficulty by increasing cognitive load, to identify subtle deficits long after a mild TBI. Twenty-six undergraduate students with a self-report of one mild TBI, which occurred at least 6 months prior, and 31 non-head-injured controls took part in the study. Participants were not informed until study completion that the study's purpose was to examine cognitive changes following a mild TBI, to reduce the influence of "diagnosis threat" on performance. Neuropsychological tasks did not differentiate the groups, though mild TBI participants reported higher state anxiety levels. On our working memory task, the mild TBI group took significantly longer to accurately detect repeated targets, suggesting that slowed information processing is a long-term consequence of mild TBI. Accuracy was comparable in the low-load condition and, unexpectedly, mild TBI performance surpassed that of controls in the high-load condition. Temporal analysis of target identification suggested a strategy difference between groups: mild TBI participants made a significantly greater number of accurate responses following the target's offset, and significantly fewer erroneous distracter responses prior to target onset, compared with controls. Results suggest that long after a mild TBI, high-functioning young adults invoke a strategy of delaying their identification of targets in order to maintain, and facilitate, accuracy on cognitively demanding tasks. PMID:22068441
The Job Market in 2000: Slowing down as the Year Ended.
ERIC Educational Resources Information Center
Martel, Jennifer L.; Langdon, David S.
2001-01-01
As the unemployment rate edged down to a 31-year low, the job market entered an unprecedented 10th year of expansion, though job growth slowed, especially in construction and service industries. The labor market improved for minority workers, who slightly closed the unemployment rate gap with white workers. (Contains 102 notes and references.)…
ERIC Educational Resources Information Center
Leon, Sharon M.
2008-01-01
Humanities teachers in higher education strive to locate and implement pedagogical approaches that allow our students to deepen their inquiry, to make significant intellectual connections, and to carry those questions and insights across the curriculum. Digital storytelling is one of those pedagogical approaches. Digital storytelling can create an…
Niss, Kristine; Dalle-Ferrier, Cécile; Giordano, Valentina M; Monaco, Giulio; Frick, Bernhard; Alba-Simionesco, Christiane
2008-11-21
We present an extensive analysis of the proposed relationship [T. Scopigno et al., Science 302, 849 (2003)] between the fragility of glass-forming liquids and the nonergodicity factor as measured by inelastic x-ray scattering. We test the robustness of the correlation through the investigation of the relative change under pressure of the speed of sound, nonergodicity factor, and broadening of the acoustic excitations of a molecular glass former, cumene, and of a polymer, polyisobutylene. For polyisobutylene, we also perform a similar study by varying its molecular weight. Moreover, we have included new results on liquids presenting an exceptionally high fragility index m under ambient conditions. We show that the linear relation, proposed by Scopigno et al. [Science 302, 849 (2003)] between fragility, measured in the liquid state, and the slope alpha of the inverse nonergodicity factor as a function of T/T(g), measured in the glassy state, is not verified when the database is enlarged. In particular, while there is still a trend in the suggested direction at atmospheric pressure, its consistency is not maintained by introducing pressure as an extra control parameter modifying the fragility: whatever the variation in the isobaric fragility, the inverse nonergodicity factor increases or remains constant within the error bars, and one observes a systematic increase in the slope alpha when the temperature is scaled by T(g)(P). To rule out particular aspects that might cause the relation to fail, we have also replaced the fragility with other related properties often evoked for the understanding of its concept, e.g., thermodynamic fragility. Moreover, we find, as previously proposed by two of us [K. Niss and C. Alba-Simionesco, Phys. Rev. B 74, 024205 (2006)], that the nonergodicity factor evaluated at the glass transition qualitatively reflects the effect of density on the relaxation time, even though in this case no clear quantitative correlations appear. PMID:19026072
Technology Transfer Automated Retrieval System (TEKTRAN)
Gray kernel is an important disease of macadamia that affects the quality of kernels, causing gray discoloration and a permeating, foul odor. Gray kernel symptoms were produced in raw, in-shell kernels of three cultivars of macadamia that were inoculated with strains of Enterobacter cloacae. Koch’...
FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities, while also capturing the singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by Kernel Associative Memories. A Bayesian classifier is used for recognition. Finally, the recognition scheme is thoroughly tested on the benchmark CASIA multispectral palmprint database. The experimental results show the robustness of the proposed system under different wavelengths of palm images.
Effective face recognition using bag of features with additive kernels
NASA Astrophysics Data System (ADS)
Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu
2016-01-01
In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied approach is to use the whole face image to build a subspace through dimensionality reduction. Differing from the methods above, we treat face recognition as an image classification problem: face images of the same person are considered to fall into the same category, and each category and each face image can be represented by a simple pyramid histogram. Dense spatial scale-invariant feature transform (SIFT) features and the bag-of-features method are used to build categories and face representations. To make the method more efficient, a linear support vector machine solver, Pegasos, is used for classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method achieves very high recognition accuracy on the ORL, YALE, and FERET databases.
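As one concrete illustration of the additive kernels mentioned above, the histogram intersection kernel is a classic additive kernel for histogram representations such as pyramid histograms; the histograms below are made-up examples, not data from the paper.

```python
import numpy as np

def intersection_kernel(h, g):
    """Additive kernel K(h, g) = sum_i min(h_i, g_i) for nonnegative histograms."""
    return float(np.minimum(h, g).sum())

# made-up L1-normalized bag-of-features histograms (not data from the paper)
h = np.array([0.2, 0.5, 0.3])
g = np.array([0.1, 0.6, 0.3])
k_hg = intersection_kernel(h, g)  # min-sum: 0.1 + 0.5 + 0.3 = 0.9
```

Because the kernel decomposes into a sum over individual histogram bins, it admits the efficient linear-solver treatment described in the abstract.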
Some physical properties of ginkgo nuts and kernels
NASA Astrophysics Data System (ADS)
Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.
2013-12-01
Some physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (±2.00) (wet basis) are presented in this paper. These comprise estimates of the mean length, width, thickness, geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity. The coefficient of static friction for nuts and kernels was determined on plywood, glass, rubber, and galvanized steel sheet. Such data are essential in food engineering, especially for the design and development of machines and equipment for processing and handling agricultural products.
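The derived shape properties listed above follow from the three axial dimensions via standard grain-geometry formulas; a minimal sketch, with hypothetical dimensions rather than the measured ginkgo values:

```python
def geometric_mean_diameter(length, width, thickness):
    """D_g = (L * W * T) ** (1/3); all dimensions in the same unit."""
    return (length * width * thickness) ** (1.0 / 3.0)

def sphericity(length, width, thickness):
    """Phi = D_g / L; equals 1.0 for a perfect sphere."""
    return geometric_mean_diameter(length, width, thickness) / length

def aspect_ratio(length, width):
    """R_a = W / L."""
    return width / length

# hypothetical nut dimensions in mm (not measured ginkgo data)
L, W, T = 22.0, 19.0, 14.0
d_g = geometric_mean_diameter(L, W, T)
phi = sphericity(L, W, T)
```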
Analyzing Sparse Dictionaries for Online Learning With Kernels
NASA Astrophysics Data System (ADS)
Honeine, Paul
2015-12-01
Many signal processing and machine learning methods share essentially the same linear-in-the-parameter model, with as many parameters as available samples, as in kernel-based machines. Sparse approximation is essential in many disciplines, with new challenges emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature for quantifying sparse dictionaries and constructing relevant ones, the most prolific being the distance, approximation, coherence, and Babel measures. In this paper, we analyze sparse dictionaries based on these measures. By conducting an eigenvalue analysis, we show that these sparsity measures share many properties, including the linear independence condition and the inducing of a well-posed optimization problem. Furthermore, we prove that there exists a quasi-isometry between the parameter (i.e., dual) space and the dictionary's induced feature space.
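A minimal sketch of how the coherence measure relates to the eigenvalue analysis, assuming a Gaussian kernel (so atoms have unit norm in feature space and the coherence is the largest off-diagonal Gram entry); the dictionary points and bandwidth are illustrative:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Unit-norm atoms in feature space: kappa(x, x) = 1."""
    return float(np.exp(-np.linalg.norm(x - y) ** 2 / (2.0 * sigma ** 2)))

def coherence_and_min_eig(dictionary, sigma=1.0):
    """Coherence = largest off-diagonal Gram entry; also return min eigenvalue."""
    n = len(dictionary)
    gram = np.array([[gaussian_kernel(xi, xj, sigma) for xj in dictionary]
                     for xi in dictionary])
    mu = float(np.abs(gram - np.eye(n)).max())
    eig_min = float(np.linalg.eigvalsh(gram).min())
    # a sufficient condition for linear independence of the atoms: mu < 1/(n-1)
    return mu, eig_min

atoms = [np.array([0.0]), np.array([4.0]), np.array([9.0])]
mu, eig_min = coherence_and_min_eig(atoms, sigma=1.0)
```

A small coherence bounds the Gram matrix away from singularity, which is the well-posedness property the eigenvalue analysis formalizes.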
Semi-supervised kernel learning based optical image recognition
NASA Astrophysics Data System (ADS)
Li, Jun-Bao; Yang, Zhi-Ming; Yu, Yang; Sun, Zhen
2012-08-01
This paper proposes semi-supervised kernel learning based optical image recognition, called Semi-supervised Graph-based Global and Local Preserving Projection (SGGLPP), which integrates graph construction with the specific dimensionality reduction (DR) process into one unified framework. SGGLPP preserves not only the positive and negative constraints but also the local and global structure of the data in the low-dimensional space. In SGGLPP, the intrinsic and cost graphs are constructed using the positive and negative constraints from side-information and the k nearest neighbor criterion from unlabeled samples. Moreover, the kernel trick is applied to extend SGGLPP into KSGGLPP to improve the performance of nonlinear feature extraction. Experiments on the UCI database and two real image databases verify the feasibility and performance of the proposed algorithm.
Heat kernel for flat generalized Laplacians with anisotropic scaling
NASA Astrophysics Data System (ADS)
Mamiya, A.; Pinzul, A.
2014-06-01
We calculate the closed analytic form of the solution of the heat kernel equation for anisotropic generalizations of the flat Laplacian. We consider UV as well as UV/IR-interpolating generalizations. In all cases, the result can be expressed in terms of Fox-Wright psi-functions. We perform different consistency checks, analytically reproducing some previous numerical or qualitative results, such as the spectral dimension flow. Our study should be considered a first step towards the construction of a heat kernel for curved Hořava-Lifshitz geometries, which is an essential ingredient in the spectral action approach to the construction of Hořava-Lifshitz gravity.
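For orientation, the isotropic flat-space heat kernel that these anisotropic generalizations deform is the standard textbook solution (quoted here for context, not a result of the paper):

```latex
% Standard isotropic flat-space heat kernel in d dimensions (the baseline
% that the anisotropic generalizations deform); textbook formula.
\left(\partial_t - \Delta\right) K(t;x,y) = 0, \qquad
K(t;x,y) = (4\pi t)^{-d/2}\,
\exp\!\left(-\frac{|x-y|^{2}}{4t}\right), \qquad
K(t;x,y)\ \xrightarrow{\,t\to 0^{+}\,}\ \delta^{(d)}(x-y).
```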
Born Sensitivity Kernels in Spherical Geometry for Meridional Flows
NASA Astrophysics Data System (ADS)
Jackiewicz, Jason; Boening, Vincent; Roth, Markus; Kholikov, Shukur
2016-05-01
Measuring meridional flows deep in the solar convection zone is challenging because of their small amplitudes compared to other background signals. Typically such inferences are made using ray theory, which is best suited for slowly-varying flows. The implementation of finite-frequency Born theory has been shown to be more accurate for modeling flows of complex spatial structure in the near-surface region. Only recently have such kernels become available in spherical geometry, which is necessary for applications to meridional flows. Here we compare these sensitivity kernels with corresponding ray kernels in a forward and an inverse problem using numerical simulations. We show that they are suitable for inverting travel-time measurements and are more sensitive to small-scale variations of deep circulations.
Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.
Wang, Yanhua; Ying, Leslie
2014-01-01
Compressed sensing (CS) is a promising approach to accelerating dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. Recent developments in non-linear or kernel-based sparse representations have been shown to outperform linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and a fixed-point iteration method. Simulation results show that the proposed method outperforms the conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
Polynomial Kernels for 3-Leaf Power Graph Modification Problems
NASA Astrophysics Data System (ADS)
Bessy, Stéphane; Paul, Christophe; Perez, Anthony
A graph G = (V,E) is a 3-leaf power iff there exists a tree T whose leaf set is V and such that (u,v) ∈ E iff u and v are at distance at most 3 in T. The 3-leaf power edge modification problems, i.e. edition (also known as the CLOSEST 3-LEAF POWER), completion and edge-deletion, are FPT when parameterized by the size of the edge set modification. However, a polynomial kernel was known for none of these three problems. For each of them, we provide a kernel with O(k^3) vertices that can be computed in linear time. We thereby answer an open question first mentioned by Dom, Guo, Hüffner and Niedermeier [9].
Heat kernel expansion in the background field formalism
NASA Astrophysics Data System (ADS)
Barvinsky, Andrei O.
2015-06-01
Heat kernel expansion and background field formalism represent the combination of two calculational methods within the functional approach to quantum field theory. This approach implies construction of generating functionals for matrix elements and expectation values of physical observables. These are functionals of arbitrary external sources or the mean field of a generic configuration -- the background field. Exact calculation of quantum effects on a generic background is impossible. However, a special integral (proper time) representation for the Green's function of the wave operator -- the propagator of the theory -- and its expansion in the ultraviolet and infrared limits of respectively short and late proper time parameter allow one to construct approximations which are valid on generic background fields. Current progress of quantum field theory, its renormalization properties, model building in unification of fundamental physical interactions and QFT applications in high energy physics, gravitation and cosmology critically rely on efficiency of the heat kernel expansion and background field formalism.
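The proper-time representation referred to above can be sketched as follows (Euclidean conventions assumed; F denotes the wave operator and K its heat kernel):

```latex
% Sketch of the proper-time representation: the propagator is the
% proper-time integral of the heat kernel K(s) = e^{-sF}.
\frac{1}{F} = \int_{0}^{\infty} ds \, e^{-sF}, \qquad
\left( \frac{\partial}{\partial s} + F \right) K(s \,|\, x, y) = 0, \qquad
K(0 \,|\, x, y) = \delta(x, y),
\quad\text{so that}\quad
G(x, y) = \int_{0}^{\infty} ds \, K(s \,|\, x, y).
```

The ultraviolet and infrared expansions mentioned in the abstract correspond to the small-s and large-s behavior of K(s | x, y), respectively.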
SIFT fusion of kernel eigenfaces for face recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Tistarelli, Massimo; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we investigate an application that integrates a holistic appearance-based method and a feature-based method for face recognition. The automatic face recognition system makes use of face images approximated by multiscale Kernel PCA (Principal Component Analysis) and a reduced number of invariant SIFT (Scale Invariant Feature Transform) keypoints extracted from the projected face feature space. To achieve higher variance across inter-class face images, we compute principal components in a higher-dimensional feature space to project a face image onto approximated kernel eigenfaces. As long as the feature spaces retain their distinctive characteristics, a reduced number of SIFT keypoints is detected for a number of principal components; the keypoints are then fused using a user-dependent weighting scheme to form a feature vector. The proposed method is tested on the ORL face database, and the efficacy of the system is demonstrated by the test results computed using the proposed algorithm.
Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework
Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi
2016-01-01
A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the differences between subbands in the contourlet domain are taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with a state-of-the-art edge detection algorithm; (2) the direction statistic represents the difference between subbands and is introduced into threshold-function-based contourlet domain denoising approaches in the form of weights to obtain the novel framework. The proposed framework is utilized to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. The denoising results on conventional testing images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images. PMID:27148597
Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications
Jones, Terry R
2011-01-01
This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more, where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
Cassane diterpenes from the seed kernels of Caesalpinia sappan.
Nguyen, Hai Xuan; Nguyen, Nhan Trung; Dang, Phu Hoang; Thi Ho, Phuoc; Nguyen, Mai Thanh Thi; Van Can, Mao; Dibwe, Dya Fita; Ueda, Jun-Ya; Awale, Suresh
2016-02-01
Eight structurally diverse cassane diterpenes named tomocins A-H were isolated from the seed kernels of Vietnamese Caesalpinia sappan Linn. Their structures were determined by extensive NMR and CD spectroscopic analysis. Among the isolated compounds, tomocin A and phanginins A, F, and H exhibited mild preferential cytotoxicity against PANC-1 human pancreatic cancer cells under nutrient-deprived conditions, without causing toxicity under normal nutrient-rich conditions. PMID:26769396
Realistic dispersion kernels applied to cohabitation reaction dispersion equations
NASA Astrophysics Data System (ADS)
Isern, Neus; Fort, Joaquim; Pérez-Losada, Joaquim
2008-10-01
We develop front spreading models for several jump distance probability distributions (dispersion kernels). We derive expressions for a cohabitation model (cohabitation of parents and children) and a non-cohabitation model, and apply them to the Neolithic using data from real human populations. The speeds that we obtain are consistent with observations of the Neolithic transition. The correction due to the cohabitation effect is up to 38%.
Instantaneous Bethe-Salpeter kernel for the lightest pseudoscalar mesons
NASA Astrophysics Data System (ADS)
Lucha, Wolfgang; Schöberl, Franz F.
2016-05-01
Starting from a phenomenologically successful, numerical solution of the Dyson-Schwinger equation that governs the quark propagator, we reconstruct in detail the interaction kernel that has to enter the instantaneous approximation to the Bethe-Salpeter equation to allow us to describe the lightest pseudoscalar mesons as quark-antiquark bound states exhibiting the (almost) masslessness necessary for them to be interpretable as the (pseudo) Goldstone bosons related to the spontaneous chiral symmetry breaking of quantum chromodynamics.
Deproteinated palm kernel cake-derived oligosaccharides: A preliminary study
NASA Astrophysics Data System (ADS)
Fan, Suet Pin; Chia, Chin Hua; Fang, Zhen; Zakaria, Sarani; Chee, Kah Leong
2014-09-01
A preliminary study on the microwave-assisted hydrolysis of deproteinated palm kernel cake (DPKC) with succinic acid to produce oligosaccharides was performed. Three important factors, i.e., temperature, acid concentration and reaction time, were selected for the hydrolysis processes. Results showed that the highest yield of DPKC-derived oligosaccharides was obtained at 170 °C, 0.2 N succinic acid and 20 min of reaction time.
Benchmarking NWP Kernels on Multi- and Many-core Processors
NASA Astrophysics Data System (ADS)
Michalakes, J.; Vachharajani, M.
2008-12-01
Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
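The characterization in goal (1) can be sketched as a back-of-envelope roofline-style bound, where attainable throughput is limited by either compute peak or arithmetic intensity times memory bandwidth; the kernel and hardware numbers below are placeholders, not measurements from this work.

```python
def attainable_gflops(flops, bytes_moved, peak_gflops, bandwidth_gbps):
    """Roofline-style bound: min(compute peak, intensity * memory bandwidth)."""
    intensity = flops / bytes_moved          # FLOPs per byte moved
    return min(peak_gflops, intensity * bandwidth_gbps)

# hypothetical stencil kernel: 8 FLOPs per 24 bytes moved; placeholder hardware
memory_bound = attainable_gflops(8.0, 24.0, peak_gflops=100.0, bandwidth_gbps=50.0)
compute_bound = attainable_gflops(1000.0, 1.0, peak_gflops=100.0, bandwidth_gbps=50.0)
```

Kernels whose intensity puts them on the bandwidth side of the bound benefit from data-layout optimizations rather than more arithmetic units.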
Equilibrium studies of copper ion adsorption onto palm kernel fibre.
Ofomaja, Augustine E
2010-07-01
The equilibrium sorption of copper ions from aqueous solution using a new adsorbent, palm kernel fibre, has been studied. Palm kernel fibre is obtained in large amounts as a waste product of palm oil production. Batch equilibrium studies were carried out and system variables such as solution pH, sorbent dose, and sorption temperature were varied. The equilibrium sorption data were then analyzed using the Langmuir, Freundlich, Dubinin-Radushkevich (D-R) and Temkin isotherms. The fit of these isotherm models to the equilibrium sorption data was determined using the linear coefficient of determination, r^2, and the non-linear chi-square, χ^2, error analysis. The results revealed that sorption was pH dependent and increased with increasing solution pH above the pH_PZC of the palm kernel fibre, with an optimum dose of 10 g/dm^3. The equilibrium data were found to fit the Langmuir isotherm model best, with a monolayer capacity of 3.17 × 10^-4 mol/g at 339 K. The sorption equilibrium constant, K_a, increased with increasing temperature, indicating that the bond strength between sorbate and sorbent increased with temperature and that sorption was endothermic. This was confirmed by the increase in the values of the Temkin isotherm constant, B_1, with increasing temperature. The Dubinin-Radushkevich (D-R) isotherm parameter, free energy, E, was in the range of 15.7-16.7 kJ/mol, suggesting that the sorption mechanism was ion exchange. Desorption studies showed that a high percentage of the copper was desorbed from the adsorbent using acid solutions (HCl, HNO3 and CH3COOH) and that the desorption percentage increased with acid concentration. The thermodynamics of the copper ion/palm kernel fibre system indicate that the process is spontaneous and endothermic. PMID:20346574
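A minimal sketch of fitting the Langmuir isotherm via its common linearization, C/q = 1/(q_m K) + C/q_m, on synthetic data rather than the paper's copper/palm-fibre measurements:

```python
import numpy as np

def langmuir(C, qm, K):
    """Langmuir isotherm: q_e = qm * K * C / (1 + K * C)."""
    return qm * K * C / (1.0 + K * C)

def fit_langmuir_linear(C, q):
    """Least-squares fit of C/q = C/qm + 1/(qm*K); returns (qm, K)."""
    slope, intercept = np.polyfit(C, C / q, 1)
    return 1.0 / slope, slope / intercept

# synthetic equilibrium data (illustrative units: concentration vs. uptake)
C = np.linspace(0.1, 5.0, 20)
q = langmuir(C, qm=3.2e-4, K=2.5)
qm_hat, K_hat = fit_langmuir_linear(C, q)
```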
Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism
Jones, Terry R
2012-01-01
This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
Initial Kernel Timing Using a Simple PIM Performance Model
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David
2005-01-01
This presentation will describe some initial results of paper-and-pencil studies of 4 or 5 application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: * Linked list traversal * Sum of leaf nodes on a tree * Bitonic sort * Vector sum * Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including what programming styles seem to work best, from the point-of-view of both expressiveness and performance.
Fast metabolite identification with Input Output Kernel Regression
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-01-01
Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. The method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a pre-image problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method decreases the running times of the training and test steps by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
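The two-phase scheme can be sketched on toy data, with a linear input kernel standing in for the spectral kernel and low-dimensional vectors standing in for molecular output features; everything below is an illustrative stand-in, not the authors' implementation.

```python
import numpy as np

def fit_iokr(K_train, Y_feat, lam=1e-2):
    """Phase 1 (regression into the output feature space): solve (K + lam*I) A = Y."""
    n = K_train.shape[0]
    return np.linalg.solve(K_train + lam * np.eye(n), Y_feat)

def predict_candidate(k_test, A, candidate_feats):
    """Phase 2 (pre-image): index of the candidate nearest the predicted features."""
    y_hat = k_test @ A
    return int(np.argmin(((candidate_feats - y_hat) ** 2).sum(axis=1)))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))          # toy "spectra"
Y = X @ rng.normal(size=(4, 2))       # toy output feature vectors ("molecules")
K = X @ X.T                           # linear input kernel
A = fit_iokr(K, Y, lam=1e-6)
best = predict_candidate(K[0], A, Y)  # training spectrum 0 maps back to candidate 0
```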
Noise Level Estimation for Model Selection in Kernel PCA Denoising.
Varon, Carolina; Alzate, Carlos; Suykens, Johan A K
2015-11-01
One of the main challenges in unsupervised learning is to find suitable values for the model parameters. In kernel principal component analysis (kPCA), for example, these are the number of components, the kernel, and its parameters. This paper presents a model selection criterion based on distance distributions (MDDs). This criterion can be used to find the number of components and the σ^2 parameter of radial basis function kernels by means of a spectral comparison between information and noise. The noise content is estimated from the statistical moments of the distribution of distances in the original dataset. This allows for a type of randomization of the dataset, without actually having to permute the data points or generate artificial datasets. After comparing the eigenvalues computed from the estimated noise with the ones from the input dataset, information is retained and maximized by a set of model parameters. In addition to the model selection criterion, this paper proposes a modification to the fixed-size method and uses the incomplete Cholesky factorization, both of which are used to solve kPCA in large-scale applications. These two approaches, together with the model selection MDD, were tested on toy examples and real-life applications, and it is shown that they outperform other known algorithms. PMID:25608316
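A bare-bones sketch of the quantities such a criterion compares: the centered RBF kernel matrix and its eigenvalue spectrum (the data and σ² value below are placeholders):

```python
import numpy as np

def rbf_kernel_matrix(X, sigma2):
    """K_ij = exp(-||x_i - x_j||^2 / (2 * sigma2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma2))

def centered_kpca_eigenvalues(X, sigma2):
    """Eigenvalues of the kernel matrix after centering in feature space."""
    n = X.shape[0]
    K = rbf_kernel_matrix(X, sigma2)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
eigs = centered_kpca_eigenvalues(X, sigma2=2.0)
```

Model selection then amounts to comparing this spectrum against the one obtained from an estimated noise distribution, as described above.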
Hyperspectral-imaging-based techniques applied to wheat kernels characterization
NASA Astrophysics Data System (ADS)
Serranti, Silvia; Cesare, Daniela; Bonifazi, Giuseppe
2012-05-01
Single kernels of durum wheat have been analyzed by hyperspectral imaging (HSI). Such an approach is based on an integrated hardware and software architecture able to digitally capture and handle spectra as an image sequence, as they result along a pre-defined alignment on a properly illuminated sample surface. The study investigated the possibility of applying HSI techniques for the classification of different types of wheat kernels: vitreous, yellow berry and fusarium-damaged. Reflectance spectra of selected wheat kernels of the three typologies were acquired by a laboratory device equipped with an HSI system working in the near-infrared field (1000-1700 nm). The hypercubes were analyzed applying principal component analysis (PCA) to reduce the high dimensionality of the data and to select some effective wavelengths. Partial least squares discriminant analysis (PLS-DA) was applied for classification of the three wheat typologies. The study demonstrated that good classification results were obtained not only when considering the entire investigated wavelength range, but also when selecting only four optimal wavelengths (1104, 1384, 1454 and 1650 nm) out of 121. The developed procedures based on HSI can be utilized for quality control purposes or for the definition of innovative sorting logic for wheat.
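The PCA compression step can be sketched via the SVD of mean-centered spectra; the random matrix below merely stands in for (kernels × bands) reflectance data.

```python
import numpy as np

def pca_scores(X, n_components):
    """Project mean-centered rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(2)
spectra = rng.normal(size=(50, 121))   # synthetic: 50 kernels x 121 bands
scores = pca_scores(spectra, 4)        # 4 components, echoing the 4 wavelengths
```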
KNBD: A Remote Kernel Block Server for Linux
NASA Technical Reports Server (NTRS)
Becker, Jeff
1999-01-01
I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.
Predicting activity approach based on new atoms similarity kernel function.
Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella
2015-07-01
Drug design is a high-cost and long-term process. To reduce the time and costs of drug discovery, new techniques are needed. The chemoinformatics field applies informational techniques and computer science methods, such as machine learning and graph theory, to discover the properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). There is thus an increasing need for algorithms to analyze and classify graph data to predict the activity of molecules. Kernel methods provide a powerful framework which combines machine learning with graph theory techniques, and have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First, we encode each atom based on its neighbors; we then use these codes to find relationships between the atoms. The relations between different atoms are then used to measure the similarity between chemical compounds. The proposed approach was compared with many other classification methods and the results show accuracy competitive with these methods. PMID:26117822
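A loose illustrative sketch of the neighbor-encoding idea, not the authors' exact kernel: label each atom by its element plus the sorted list of neighboring elements, then compare two molecules by the Jaccard similarity of their code sets.

```python
def atom_codes(atoms, bonds):
    """atoms: list of element symbols; bonds: list of (i, j) index pairs."""
    neigh = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        neigh[i].append(atoms[j])
        neigh[j].append(atoms[i])
    # code = element followed by its sorted neighbor elements
    return {atoms[i] + "(" + ",".join(sorted(neigh[i])) + ")"
            for i in range(len(atoms))}

def similarity(mol_a, mol_b):
    """Jaccard similarity of the two molecules' atom-code sets."""
    a, b = atom_codes(*mol_a), atom_codes(*mol_b)
    return len(a & b) / len(a | b)

# hydrogen-suppressed toy molecules
ethanol = (["C", "C", "O"], [(0, 1), (1, 2)])
methanol = (["C", "O"], [(0, 1)])
s = similarity(ethanol, methanol)   # shares only the O(C) code: 1/4
```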
Reproducing Kernels in Harmonic Spaces and Their Numerical Implementation
NASA Astrophysics Data System (ADS)
Nesvadba, Otakar
2010-05-01
In harmonic analysis, such as the modelling of the Earth's gravity field, the importance of a Hilbert space of harmonic functions with a reproducing kernel is often discussed. Moreover, in the case of an unbounded domain given by the exterior of a sphere or an ellipsoid, the reproducing kernel K(x,y) can be expressed analytically by means of closed formulas or infinite series. Nevertheless, the straightforward numerical implementation of these formulas leads to dozens of problems, mostly connected with floating-point arithmetic and number representation. The contribution discusses numerical instabilities in K(x,y) and grad K(x,y) that can be overcome by employing elementary functions, in particular expm1 and log1p. The suggested evaluation scheme for reproducing kernels offers uniform formulas within the whole solution domain as well as superior speed and near-perfect accuracy (10^-16 for IEC 60559 double-precision numbers) when compared with the straightforward formulas. The formulas can be easily implemented on the majority of computer platforms, especially when the C standard library ISO/IEC 9899:1999 is available.
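The numerical point about expm1 and log1p can be demonstrated directly: for small x, the naive exp(x) - 1 loses most of its significant digits to cancellation, while expm1 retains near-full precision (Python's math module exposes the same C99 functions the abstract refers to).

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0      # catastrophic cancellation near x = 0
stable = math.expm1(x)         # accurate to (nearly) full double precision

rel_err_naive = abs(naive - x) / x    # on the order of 1e-4
rel_err_stable = abs(stable - x) / x  # on the order of x/2 itself
# likewise, math.log1p(x) should replace math.log(1.0 + x) for small x
```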
Biodiesel from Siberian apricot (Prunus sibirica L.) seed kernel oil.
Wang, Libing; Yu, Haiyan
2012-05-01
In this paper, Siberian apricot (Prunus sibirica L.) seed kernel oil was investigated for the first time as a promising non-conventional feedstock for the preparation of biodiesel. Siberian apricot seed kernel has a high oil content (50.18 ± 3.92%), and the oil has a low acid value (0.46 mg g^-1) and low water content (0.17%). The fatty acid composition of the Siberian apricot seed kernel oil includes a high percentage of oleic acid (65.23 ± 4.97%) and linoleic acid (28.92 ± 4.62%). The measured fuel properties of the Siberian apricot biodiesel, except cetane number and oxidative stability, conformed to the EN 14214-08, ASTM D6751-10 and GB/T 20828-07 standards; the cold flow properties in particular were excellent (cold filter plugging point -14 °C). The addition of 500 ppm tert-butylhydroquinone (TBHQ) resulted in a higher induction period (7.7 h) compliant with all three biodiesel standards. PMID:22440572
Kernel Averaged Predictors for Spatio-Temporal Regression Models.
Heaton, Matthew J; Gelfand, Alan E
2012-12-01
In applications where covariates and responses are observed across space and time, a common goal is to quantify the effect of a change in the covariates on the response while adequately accounting for the spatio-temporal structure of the observations. The most common approach for building such a model is to confine the relationship between a covariate and response variable to a single spatio-temporal location. However, oftentimes the relationship between the response and predictors may extend across space and time. In other words, the response may be affected by levels of predictors in spatio-temporal proximity to the response location. Here, a flexible modeling framework is proposed to capture such spatial and temporal lagged effects between a predictor and a response. Specifically, kernel functions are used to weight a spatio-temporal covariate surface in a regression model for the response. The kernels are assumed to be parametric and non-stationary with the data informing the parameter values of the kernel. The methodology is illustrated on simulated data as well as a physical data set of ozone concentrations to be explained by temperature. PMID:24010051
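A minimal one-dimensional sketch of a kernel-averaged predictor, using a Gaussian weight kernel over a synthetic covariate surface; the paper's parametric, non-stationary kernels over space-time are richer than this stationary toy.

```python
import numpy as np

def kernel_averaged_covariate(s, sites, x, bandwidth):
    """Gaussian-kernel weighted average of the covariate surface x at location s."""
    w = np.exp(-((sites - s) ** 2) / (2.0 * bandwidth ** 2))
    w /= w.sum()
    return float((w * x).sum())

sites = np.linspace(0.0, 10.0, 101)   # 1-D stand-in for a space-time grid
temp = np.sin(sites)                  # synthetic covariate (e.g. temperature)
x_tilde = kernel_averaged_covariate(5.0, sites, temp, bandwidth=0.5)
```

The regression then uses x_tilde in place of the covariate at the response's own location, letting nearby covariate values contribute with kernel-determined weights.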
Knowledge Driven Image Mining with Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj
2004-01-01
This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
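As a small, hedged illustration of the Mercer property the paper builds on: a Gaussian (RBF) Gram matrix is symmetric positive semidefinite, so linear methods applied in the induced feature space are well defined. This is generic textbook material, not the paper's data-driven kernel-generation method:

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2) of the RBF kernel."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

X = np.random.default_rng(0).normal(size=(20, 3))
K = rbf_gram(X)
# Mercer's condition: the Gram matrix is symmetric positive semidefinite
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-9
```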
NASA Astrophysics Data System (ADS)
Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas
2015-05-01
Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy, based on a 20 ppb threshold, with the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
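The PCA-plus-KNN pipeline described above can be sketched in a few lines of numpy. This is a generic reimplementation under my own assumptions (component count, k, Euclidean distance), not the authors' code; the 20 ppb thresholding happens upstream, on the chemical labels:

```python
import numpy as np

def pca_knn_classify(X_train, y_train, X_test, n_comp=2, k=3):
    """Compress spectra with PCA, then classify with k-nearest neighbours.
    A minimal numpy sketch of the pipeline described in the abstract."""
    mu = X_train.mean(axis=0)
    # principal axes from the SVD of the centred training spectra
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_comp].T
    Z_train, Z_test = (X_train - mu) @ P, (X_test - mu) @ P
    preds = []
    for z in Z_test:
        idx = np.argsort(np.linalg.norm(Z_train - z, axis=1))[:k]
        vals, counts = np.unique(y_train[idx], return_counts=True)
        preds.append(vals[np.argmax(counts)])  # majority vote among neighbours
    return np.array(preds)
```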
Hedstrom, C S; Shearer, P W; Miller, J C; Walton, V M
2014-10-01
Halyomorpha halys Stål, the brown marmorated stink bug (Hemiptera: Pentatomidae), is an invasive pest with established populations in Oregon. The generalist feeding habits of H. halys suggest it has the potential to be a pest of many specialty crops grown in Oregon, including hazelnuts, Corylus avellana L. The objectives of this study were to: 1) characterize the damage to developing hazelnut kernels resulting from feeding by H. halys adults, 2) determine how the timing of feeding during kernel development influences damage to kernels, and 3) determine if hazelnut shell thickness has an effect on feeding frequency on kernels. Adult brown marmorated stink bugs were allowed to feed on developing nuts for 1-wk periods from initial kernel development (spring) until harvest (fall). Developing nuts not exposed to feeding by H. halys served as a control treatment. The degree of damage and the diagnostic symptoms corresponded with the hazelnut kernels' physiological development. Our results demonstrated that when H. halys fed on hazelnuts before kernel expansion, development of the kernels could cease, resulting in empty shells. When stink bugs fed during kernel expansion, kernels appeared malformed. When stink bugs fed on mature nuts, the kernels exhibited corky, necrotic areas. Although significant differences in shell thickness were observed among the cultivars, no significant differences occurred in the proportions of damaged kernels in field tests and laboratory choice tests. The results of these studies demonstrated that commercial hazelnuts are susceptible to damage caused by the feeding of H. halys throughout the entire period of kernel development. PMID:26309276
Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou
2011-06-01
As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. PMID:21441012
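For context, LS-SVM training reduces to a single linear system (Suykens' formulation), which is the problem the SDP-based kernel learning sits on top of. The sketch below solves that system for a given Gram matrix; in the multiple-kernel setting, K would be a weighted combination of candidate Gram matrices with weights chosen by the SDP, which is not reproduced here:

```python
import numpy as np

def lssvm_train(K, y, gamma=1.0):
    """Solve the LS-SVM dual: one linear system instead of a QP.

    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    """
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return b, alpha
```

Predictions are then f(x) = sum_i alpha_i k(x_i, x) + b; gamma is the regularization parameter that the paper optimizes jointly with the kernel.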
Tracking flame base movement and interaction with ignition kernels using topological methods
NASA Astrophysics Data System (ADS)
Mascarenhas, A.; Grout, R. W.; Yoo, C. S.; Chen, J. H.
2009-07-01
We segment the stabilization region in a simulation of a lifted jet flame based on its topology induced by the Y_OH (OH mass fraction) field. Our segmentation method yields regions that correspond to the flame base and to potential auto-ignition kernels. We apply a region-overlap-based tracking method to follow the flame base and the kernels over time, to study the evolution of kernels, and to detect when the kernels merge with the flame. The combination of our segmentation and tracking methods allows us to observe flame stabilization via merging between the flame base and kernels; we also obtain Y_CH2O histories inside the kernels and detect a distinct decrease in radical concentration during the transition to a developed flame.
Kernel-Correlated Lévy Field Driven Forward Rate and Application to Derivative Pricing
Bo Lijun; Wang Yongjin; Yang Xuewei
2013-08-01
We propose a term structure of forward rates driven by a kernel-correlated Lévy random field under the HJM framework. The kernel-correlated Lévy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with a general payoff functional is priced under this pricing measure.
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Finally, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. PMID:25008538
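A minimal kernel PCA routine, the kind of embedding the unified framework would produce for each candidate kernel before scoring it (generic textbook formulation; the paper's selection measures are not shown):

```python
import numpy as np

def kernel_pca(K, n_comp=2):
    """Kernel PCA embedding from a Gram matrix: double-centre K in feature
    space, eigendecompose, and scale the top eigenvectors."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                      # centred Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_comp]
    vals, vecs = vals[order], vecs[:, order]
    # embedding coordinates; clip guards tiny negative eigenvalues
    return vecs * np.sqrt(np.clip(vals, 0, None))
```

Swapping in a different candidate Gram matrix K gives a different embedding; a selection measure would then compare them.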
A Testbed of Parallel Kernels for Computer Science Research
Bailey, David; Demmel, James; Ibrahim, Khaled; Kaiser, Alex; Koniges, Alice; Madduri, Kamesh; Shalf, John; Strohmaier, Erich; Williams, Samuel
2010-04-30
The initial result of the more modern study was the seven dwarfs, which was subsequently extended to 13 motifs. These motifs have already been useful in defining classes of applications for architecture-software studies. However, these broad-brush problem statements often miss the nuance seen in individual kernels. For example, the computational requirements of particle methods vary greatly between the naive (but more accurate) direct calculations and the particle-mesh and particle-tree codes. Thus we commenced our study with an enumeration of problems, but then proceeded by providing not only reference implementations for each problem, but more importantly a mathematical definition that allows one to escape iterative approaches to software/hardware optimization. To ensure long-term value, we have augmented each of our reference implementations with both a scalable problem generator and a verification scheme. In a paper we have prepared that documents our efforts, we describe in detail this process of problem definition, scalable input creation, verification, and implementation of reference codes for the scientific computing domain. Table 1 enumerates and describes the level of support we have developed for each kernel. We group these important kernels using the Berkeley dwarfs/motifs taxonomy, marked with a red box in the appropriate column. As kernels become progressively more complex, they build upon other, simpler computational methods. We note this dependency via orange boxes. After enumeration of the important numerical problems, we created a domain-appropriate high-level definition of each problem. To ensure future endeavors are not tainted by existing implementations, we specified the problem definition to be independent of both computer architecture and existing programming languages, models, and data types.
Then, to provide context as to how such kernels productively map to existing architectures, languages and programming models, we produced reference implementations for most of
Xyloglucans from flaxseed kernel cell wall: Structural and conformational characterisation.
Ding, Huihuang H; Cui, Steve W; Goff, H Douglas; Chen, Jie; Guo, Qingbin; Wang, Qi
2016-10-20
The structure of the ethanol-precipitated fraction of 1 M KOH-extracted flaxseed kernel polysaccharides (KPI-EPF) was studied to better understand the molecular structures of flaxseed kernel cell wall polysaccharides. Based on methylation/GC-MS, NMR spectroscopy, and MALDI-TOF-MS analysis, the dominant sugar residues of the KPI-EPF fraction comprised (1,4,6)-linked-β-d-glucopyranose (24.1 mol%), terminal α-d-xylopyranose (16.2 mol%), (1,2)-α-d-linked-xylopyranose (10.7 mol%), (1,4)-β-d-linked-glucopyranose (10.7 mol%), and terminal β-d-galactopyranose (8.5 mol%). KPI-EPF was proposed to consist of xyloglucans: the substitution rate of the backbone is 69.3%; R1 could be T-α-d-Xylp-(1→, or none; R2 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, or T-α-l-Araf-(1→2)-α-d-Xylp-(1→; R3 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, T-α-l-Fucp-(1→2)-β-d-Galp-(1→2)-α-d-Xylp-(1→, or none. The Mw of KPI-EPF was calculated to be 1506 kDa by static light scattering (SLS). The structure-sensitive parameter (ρ) of KPI-EPF was calculated as 1.44, which confirms the highly branched structure of the extracted xyloglucans. These new findings on flaxseed kernel xyloglucans will be helpful for understanding their fermentation properties and potential applications. PMID:27474598
A class of kernel based real-time elastography algorithms.
Kibria, Md Golam; Hasan, Md Kamrul
2015-08-01
In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and by employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pairs of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed algorithm, the other time- and frequency-domain elastography algorithms (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) proposed by our group are also implemented in real time in Java, where the computations are executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling phantom show that the proposed method significantly improves the strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4%, as compared to other techniques reported in the literature. Strain images obtained for the experimental phantom as well as for in vivo breast data of malignant and benign masses also show the efficacy of our proposed method over the other reported techniques. PMID:25929595
Removing blur kernel noise via a hybrid ℓp norm
NASA Astrophysics Data System (ADS)
Yu, Xin; Zhang, Shunli; Zhao, Xiaolin; Zhang, Li
2015-01-01
When estimating a sharp image from a blurred one, blur kernel noise often leads to inaccurate recovery. We develop an effective method to estimate a blur kernel that is able to remove kernel noise and prevent the production of an overly sparse kernel. Our method is based on an iterative framework which alternately recovers the sharp image and estimates the blur kernel. In the image recovery step, we utilize total variation (TV) regularization to recover latent images. In solving the TV regularization, we propose a new criterion which adaptively terminates the iterations before convergence; this improves efficiency without degrading the quality of the final results. In the kernel estimation step, we develop a metric to measure the usefulness of image edges, by which we can reduce the ambiguity of kernel estimation caused by small-scale edges. We also propose a hybrid ℓp norm, composed of an ℓ2 norm and an ℓp norm with 0.7≤p<1, to construct a sparsity constraint. Using the hybrid ℓp norm, we reduce a wider range of kernel noise and recover a more accurate blur kernel. The experiments show that the proposed method achieves promising results on both synthetic and real images.
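For illustration only: one plausible reading of a hybrid ℓ2/ℓp penalty on the kernel coefficients is a weighted sum of the two terms. The exact combination and weighting used in the paper are not specified here, so treat this as a hypothetical form:

```python
import numpy as np

def hybrid_lp_penalty(k, p=0.8, lam=0.5):
    """Hypothetical hybrid penalty: lam * ||k||_2^2 + (1 - lam) * sum |k_i|^p,
    with 0.7 <= p < 1 as in the paper's stated range for the lp term."""
    k = np.asarray(k, dtype=float)
    return lam * np.sum(k**2) + (1.0 - lam) * np.sum(np.abs(k) ** p)
```

The ℓp term with p < 1 promotes sparsity more aggressively than ℓ1, while the ℓ2 term discourages the overly sparse kernels the paper warns about.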
Influence of argan kernel roasting-time on virgin argan oil composition and oxidative stability.
Harhar, Hicham; Gharby, Saïd; Kartah, Bader; El Monfalouti, Hanae; Guillaume, Dom; Charrouf, Zoubida
2011-06-01
Virgin argan oil, which is harvested from argan fruit kernels, constitutes an alimentary source of substances of nutraceutical value. The chemical composition and oxidative stability of argan oil prepared from argan kernels roasted for different times were evaluated and compared with those of beauty argan oil, which is prepared from unroasted kernels. Prolonged roasting time induced colour development and increased phosphorus content, whereas fatty acid composition and tocopherol levels did not change. Oxidative stability data indicate that kernel roasting for 15 to 30 min at 110 °C is optimal for preserving the nutritive properties of virgin argan oil. PMID:21442181
Random Variables and Positive Definite Kernels Associated with the Schroedinger Algebra
Accardi, Luigi; Boukas, Andreas
2010-06-17
We show that the Feinsilver-Kocik-Schott (FKS) kernel for the Schroedinger algebra is not positive definite. We show how the FKS Schroedinger kernel can be reduced to a positive definite one through a restriction of the defining parameters of the exponential vectors. We define the Fock space associated with the reduced FKS Schroedinger kernel. We compute the characteristic functions of quantum random variables naturally associated with the FKS Schroedinger kernel and expressed in terms of the renormalized higher powers of white noise (or RHPWN) Lie algebra generators.
Felker, F.C. )
1990-05-01
Maize (Zea mays L.) kernels cultured in vitro while still attached to cob pieces have been used as a model system to study the physiology of kernel development. In this study, the role of the cob tissue in the uptake of medium components into kernels was examined. Cob tissue was essential for in vitro kernel growth, and better growth occurred with larger cob/kernel ratios. A symplastically transported fluorescent dye readily permeated the endosperm when supplied in the medium, while an apoplastic dye did not. Slicing the cob tissue to disrupt vascular connections, but not apoplastic continuity, greatly reduced [14C]sucrose uptake into kernels. [14C]Sucrose uptake by cob and kernel tissue was reduced by 31% and 68%, respectively, by 5 mM PCMBS. L-[14C]glucose was absorbed much more slowly than D-[14C]glucose. These and other results indicate that phloem loading of sugars occurs in the cob tissue. Passage of medium components through the symplast of the cob tissue may be a prerequisite for uptake into the kernel. Simple diffusion from the medium to the kernels is unlikely. Therefore, the ability of substances to be transported into cob tissue cells should be considered in formulating culture medium.
Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua
2016-02-01
Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection. PMID:26420507
Kernel regression estimates of time delays between gravitationally lensed fluxes
NASA Astrophysics Data System (ADS)
AL Otaibi, Sultanah; Tiňo, Peter; Cuevas-Tello, Juan C.; Mandel, Ilya; Raychaudhury, Somak
2016-06-01
Strongly lensed variable quasars can serve as precise cosmological probes, provided that time delays between the image fluxes can be accurately measured. A number of methods have been proposed to address this problem. In this paper, we explore in detail a new approach based on kernel regression estimates, which is able to estimate a single time delay given several data sets for the same quasar. We develop realistic artificial data sets in order to carry out controlled experiments to test the performance of this new approach. We also test our method on real data from strongly lensed quasar Q0957+561 and compare our estimates against existing results.
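A toy version of the approach: fit a Nadaraya-Watson kernel regression through one image's flux and grid-search the shift that best aligns the other image. The bandwidth, delay grid, and interior mask are my choices; the paper's estimator handles several data sets for the same quasar jointly:

```python
import numpy as np

def nw_regress(t_query, t_obs, y_obs, bw):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t_query[:, None] - t_obs[None, :]) / bw) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

def estimate_delay(tA, yA, tB, yB, delays, bw=5.0):
    """Pick the delay minimising the misfit between image B and the
    kernel regression through image A, shifted by that delay."""
    errs = []
    for d in delays:
        # compare only where the shifted query stays inside A's time range
        m = (tB - d >= tA.min()) & (tB - d <= tA.max())
        pred = nw_regress(tB[m] - d, tA, yA, bw)
        errs.append(np.mean((pred - yB[m]) ** 2))
    return delays[int(np.argmin(errs))]
```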
Anytime query-tuned kernel machine classifiers via Cholesky factorization
NASA Technical Reports Server (NTRS)
DeCoste, D.
2002-01-01
We recently demonstrated 2- to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste, 2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
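The Cholesky route to training a regularized kernel machine can be sketched as follows (generic linear algebra, not the paper's anytime-bound machinery, which builds further on the factor):

```python
import numpy as np

def fit_kernel_machine(K, y, lam=1e-3):
    """Solve (K + lam*I) alpha = y via Cholesky factorisation.
    K must be a symmetric PSD Gram matrix; lam > 0 makes it PD."""
    n = K.shape[0]
    L = np.linalg.cholesky(K + lam * np.eye(n))
    # two triangular solves: L z = y, then L^T alpha = z
    # (np.linalg.solve shown for brevity; a triangular solver is faster)
    z = np.linalg.solve(L, y)
    alpha = np.linalg.solve(L.T, z)
    return alpha
```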
Partial Kernelization for Rank Aggregation: Theory and Experiments
NASA Astrophysics Data System (ADS)
Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf
Rank aggregation is important in many areas, ranging from web search and databases to bioinformatics. The underlying decision problem, Kemeny Score, is NP-complete even in the case of four input rankings to be aggregated into a "median ranking". We study efficient polynomial-time data reduction rules that allow us to find optimal median rankings. On the theoretical side, we improve a result for a "partial problem kernel" from quadratic to linear size. On the practical side, we provide encouraging experimental results with data based on web search and sport competitions, e.g., computing optimal median rankings for real-world instances with more than 100 candidates within milliseconds.
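To make the underlying problem concrete, here is a brute-force Kemeny computation: score a candidate ranking by its summed Kendall tau distance to the inputs and minimize over all permutations. This is exponential in the number of candidates, which is exactly why polynomial-time data reduction matters:

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def kemeny_median(rankings):
    """Brute-force median ranking: minimise the summed Kendall tau distance
    to all input rankings. Exponential in the number of candidates, so this
    is a toy illustration only."""
    cands = rankings[0]
    return min(permutations(cands),
               key=lambda r: sum(kendall_tau(list(r), rk) for rk in rankings))
```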
Kernel PLS-SVC for Linear and Nonlinear Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan
2003-01-01
A new methodology for discrimination is proposed, based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection between orthonormalized PLS and Fisher's approach to linear discrimination, or equivalently canonical correlation analysis, is described; this motivates the use of orthonormalized PLS over principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram recordings.
On the solution of integral equations with strongly singular kernels
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1986-01-01
Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
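In standard notation (my rendering, not the report's), the Hadamard finite-part interpretation used for the m = 2 case can be written as the derivative of a Cauchy principal value:

```latex
% Hypersingular integral as a Hadamard finite part (f sufficiently smooth)
\operatorname{f.p.}\!\int_a^b \frac{f(t)}{(t-x)^2}\,dt
  \;=\; \frac{d}{dx}\,\mathrm{p.v.}\!\int_a^b \frac{f(t)}{t-x}\,dt,
  \qquad a < x < b,
```

with the m = 1 case being the ordinary Cauchy principal value itself.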
Non-separable pairing interaction kernels applied to superconducting cuprates
NASA Astrophysics Data System (ADS)
Haley, Stephen B.; Fink, Herman J.
2014-05-01
A pairing Hamiltonian H(Γ) with a non-separable interaction kernel Γ produces HTS for relatively weak interactions. The doping and temperature dependence of Γ(x,T) and the chemical potential μ(x) are determined by a probabilistic filling of the electronic states in the cuprate unit cell. A diverse set of HTS and normal-state properties is examined, including the SC phase transition boundary T_C(x), SC gap Δ(x,T), entropy S(x,T), specific heat C(x,T), and spin susceptibility χ_s(x,T). Detailed x,T agreement with cuprate experiment is obtained for all properties.
A rainfall spatial interpolation algorithm based on inhomogeneous kernels
NASA Astrophysics Data System (ADS)
Campo, Lorenzo; Fiori, Elisabetta; Molini, Luca
2015-04-01
Rainfall fields constitute the main input of distributed hydrological models, both for long-period water balance and for short-period flood forecasting and monitoring. The importance of an accurate reconstruction of the spatial pattern of rainfall is thus well recognized in several fields of application: agricultural planning, water balance at the watershed scale, water management, and flood monitoring. The latter case is particularly critical, due to the strong effect of the combination of the soil moisture pattern and the rainfall pattern on the intensity peak of the flood. Despite the importance of the spatial characterization of rainfall height, this variable still presents several difficulties when interpolation is required. Rainfall fields present spatial and temporal alternation of large zero-value areas (no rainfall) and complex patterns of non-zero heights (rainfall events). Furthermore, the spatial patterns strongly depend on the type and origin of the rain event (convective, stratiform, orographic) and on the spatial scale. Different kinds of rainfall measures and estimates (rain gauges, satellite estimates, meteorological radar) are available, as well as a large literature on spatial interpolation: from Thiessen polygons to Inverse Distance Weighting (IDW) to different variants of kriging, neural networks, and other deterministic or geostatistical methods. In this work a kernel-based method for the interpolation of point measures (rain gauges) is proposed, in which spatially inhomogeneous kernels are used. For each gauge a particular kernel is fitted following the correlation structure between the rainfall time series of the given gauge and those of its neighbors. In this way the local features of the field are considered following the observed spatial dependence pattern. The kernels are assumed to be Gaussian, with covariance matrices fitted from the correlations of the time series and the gauge locations. A similar approach is
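The per-gauge kernel-weighting idea can be sketched as follows (a minimal illustration; fitting each gauge's covariance from inter-gauge correlation structures, as the abstract describes, is assumed to have happened already):

```python
import numpy as np

def kernel_interpolate(grid_xy, gauge_xy, gauge_vals, covs):
    """Interpolate rain-gauge values onto grid points, letting each gauge
    carry its own Gaussian kernel covariance (the inhomogeneous-kernel idea).

    grid_xy: (n_grid, 2), gauge_xy: (n_gauges, 2), covs: list of 2x2 matrices
    """
    weights = []
    for g, C in zip(gauge_xy, covs):
        d = grid_xy - g
        Cinv = np.linalg.inv(C)
        # anisotropic Gaussian weight exp(-0.5 * d^T C^-1 d) per grid point
        weights.append(np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, Cinv, d)))
    W = np.array(weights)               # (n_gauges, n_grid)
    W /= W.sum(axis=0, keepdims=True)   # normalise weights at each grid point
    return gauge_vals @ W
```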
Kernel methods for large-scale genomic data analysis
Xing, Eric P.; Schaid, Daniel J.
2015-01-01
Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity of the relationship between genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743
On the solution of integral equations with strongly singular kernels
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1987-01-01
Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.