NASA Astrophysics Data System (ADS)
Hellgren, Maria; Gross, E. K. U.
2013-11-01
We present a detailed study of the exact-exchange (EXX) kernel of time-dependent density-functional theory with an emphasis on its discontinuity at integer particle numbers. It was recently found that this exact property leads to sharp peaks and step features in the kernel that diverge in the dissociation limit of diatomic systems [Hellgren and Gross, Phys. Rev. A 85, 022514 (2012)]. To further analyze the discontinuity of the kernel, we here make use of two different approximations to the EXX kernel: the Petersilka-Gossmann-Gross (PGG) approximation and a common energy denominator approximation (CEDA). It is demonstrated that whereas the PGG approximation neglects the discontinuity, the CEDA includes it explicitly. By studying model molecular systems it is shown that the so-called field-counteracting effect in the density-functional description of molecular chains can be viewed in terms of the discontinuity of the static kernel. The role of the frequency dependence is also investigated, highlighting its importance for long-range charge-transfer excitations as well as inner-shell excitations.
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernels described here outperform tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
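The core construction behind such data-adaptive kernels can be illustrated in a few lines. The sketch below is a heavily simplified stand-in, not the AUTOBAYES-generated implementation: it builds a Mercer kernel from the posterior responsibilities of a fitted Gaussian mixture, K = Z Zᵀ with Z[i, m] = P(m | xᵢ), which is symmetric positive semidefinite by construction. The paper's Bayesian priors over the mixture are omitted here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(X, n_components=3, seed=0):
    # Fit a Gaussian mixture and use posterior responsibilities as features:
    # K[i, j] = sum_m P(m | x_i) P(m | x_j)  =>  K = Z @ Z.T, a Gram matrix,
    # hence symmetric positive semidefinite (a valid Mercer kernel on the data).
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    Z = gmm.predict_proba(X)  # shape (n, n_components)
    return Z @ Z.T

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (30, 2)), rng.normal(2.0, 0.5, (30, 2))])
K = mixture_density_kernel(X)
assert np.allclose(K, K.T)                  # symmetric
assert np.linalg.eigvalsh(K).min() > -1e-8  # PSD up to round-off
```

Points assigned to the same mixture component get kernel values near 1, points in different components near 0, so the mixture's (possibly prior-informed) structure shapes the kernel.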
NASA Astrophysics Data System (ADS)
Panholzer, Martin; Gatti, Matteo; Reining, Lucia
2018-04-01
The charge-density response of extended materials is usually dominated by the collective oscillation of electrons, the plasmons. Beyond this feature, however, intriguing many-body effects are observed. They cannot be described by one of the most widely used approaches for the calculation of dielectric functions, which is time-dependent density functional theory (TDDFT) in the adiabatic local density approximation (ALDA). Here, we propose an approximation to the TDDFT exchange-correlation kernel which is nonadiabatic and nonlocal. It is extracted from correlated calculations in the homogeneous electron gas, where we have tabulated it for a wide range of wave vectors and frequencies. A simple mean density approximation allows one to use it in inhomogeneous materials where the density varies on a scale of 1.6 r_s or faster. This kernel contains effects that are completely absent in the ALDA; in particular, it correctly describes the double plasmon in the dynamic structure factor of sodium, and it shows the characteristic low-energy peak that appears in systems with low electronic density. It also leads to an overall quantitative improvement of spectra.
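For context, the exchange-correlation kernel discussed above enters linear-response TDDFT through the standard Dyson-like screening equation (generic notation, not specific to this record; χ₀ is the Kohn-Sham response and v the Coulomb interaction):

```latex
\chi(\mathbf{r},\mathbf{r}',\omega) = \chi_0(\mathbf{r},\mathbf{r}',\omega)
  + \int d\mathbf{r}_1\, d\mathbf{r}_2\;
    \chi_0(\mathbf{r},\mathbf{r}_1,\omega)
    \left[ v(\mathbf{r}_1-\mathbf{r}_2)
         + f_{\mathrm{xc}}(\mathbf{r}_1,\mathbf{r}_2,\omega) \right]
    \chi(\mathbf{r}_2,\mathbf{r}',\omega)
```

The ALDA amounts to taking f_xc local in space and frequency-independent; the kernel proposed above retains both the nonlocality and the ω-dependence.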
Rafal Podlaski; Francis A. Roesch
2014-01-01
Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick, Christopher E., E-mail: chripa@fysik.dtu.dk; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk
2015-09-14
We present calculations of the correlation energies of crystalline solids and isolated systems within the adiabatic-connection fluctuation-dissipation formulation of density-functional theory. We perform a quantitative comparison of a set of model exchange-correlation kernels originally derived for the homogeneous electron gas (HEG), including the recently introduced renormalized adiabatic local-density approximation (rALDA) and also kernels which (a) satisfy known exact limits of the HEG, (b) carry a frequency dependence, or (c) display a 1/k^2 divergence for small wavevectors. After generalizing the kernels to inhomogeneous systems through a reciprocal-space averaging procedure, we calculate the lattice constants and bulk moduli of a test set of 10 solids consisting of tetrahedrally bonded semiconductors (C, Si, SiC), ionic compounds (MgO, LiCl, LiF), and metals (Al, Na, Cu, Pd). We also consider the atomization energy of the H_2 molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA's tendency to overestimate the magnitude of the correlation energy whilst maintaining a high-accuracy description of structural properties.
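For orientation, the adiabatic-connection fluctuation-dissipation (ACFD) correlation energy evaluated in such calculations takes the standard form (imaginary-frequency integration; χ_λ is the interacting response at coupling strength λ, χ₀ the Kohn-Sham response):

```latex
E_c = -\int_0^1 d\lambda \int_0^{\infty} \frac{d\omega}{2\pi}\,
      \mathrm{Tr}\left\{ v \left[ \chi_\lambda(i\omega) - \chi_0(i\omega) \right] \right\},
\qquad
\chi_\lambda = \chi_0 + \chi_0\left( \lambda v + f_{\mathrm{xc}}^{\lambda} \right)\chi_\lambda
```

Setting f_xc^λ = 0 recovers the RPA; the model kernels compared above enter through the second (Dyson-like) equation.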
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
...approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron
2016-06-08
We show that the van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) generalizes to thermal ensembles. This allows generalization to finite temperatures of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. Finally, this produces a natural method for generating new thermal exchange-correlation approximations.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
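The computational virtue of circulant structure that the record above exploits can be sketched in its one-level form (a hedged illustration, not the paper's multilevel construction): a stationary kernel evaluated on a periodic 1-D grid yields a circulant matrix, whose matrix-vector product reduces to FFTs in O(n log n).

```python
import numpy as np

def circulant_matvec(c, x):
    # Multiply the circulant matrix C (first column c) by x in O(n log n),
    # using the FFT diagonalization: C x equals the circular convolution c * x.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# A stationary (here Gaussian) kernel on a periodic 1-D grid gives a
# circulant kernel matrix: K[i, j] depends only on (i - j) mod n.
n = 64
t = np.arange(n)
d = np.minimum(t, n - t)                 # circular distance to grid point 0
c = np.exp(-(d / 8.0) ** 2)              # first column of K
K = np.array([np.roll(c, j) for j in range(n)]).T  # dense K, only for checking

x = np.random.default_rng(1).standard_normal(n)
assert np.allclose(K @ x, circulant_matvec(c, x))
```

The multilevel version in the paper applies the same idea level by level, so kernel-matrix operations on structured index sets stay quasi-linear in the number of data points.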
Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.
2013-01-01
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). 
For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest the use of kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
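A minimal sketch of the setup, with two loudly labeled simplifications: input-space k-means is used as a heuristic stand-in for kernel k-means, and an RBF kernel with illustrative parameters is assumed. The Nyström estimate itself is the standard K ≈ C W⁺ Cᵀ.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmarks, gamma=0.5):
    # Nystrom approximation K ~= C W^+ C^T built from a landmark set.
    C = rbf(X, landmarks, gamma)            # (n, m) cross-kernel
    W = rbf(landmarks, landmarks, gamma)    # (m, m) landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
K = rbf(X, X)

# k-means centers as landmarks vs. uniformly sampled points
centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
err_centers = np.linalg.norm(K - nystrom(X, centers), 'fro')
err_random = np.linalg.norm(K - nystrom(X, X[rng.choice(200, 20, replace=False)]), 'fro')
```

In line with the paper's bound, landmark sets close to the (kernel) k-means centers typically yield the smaller Frobenius error, though the toy comparison above is not guaranteed to show it on every random draw.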
NASA Astrophysics Data System (ADS)
Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio
2018-05-01
Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, since its computational cost is linear in the system size. However, the OF-DFT accuracy strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of vanishing energy gap (i.e., in the case of metals), KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), KGAP performs much better than SM, with results close to those of state-of-the-art functionals with sophisticated density-dependent kernels.
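Nonlocal KE functionals of this family generally take a Wang-Teter-like form (schematic; the exponents α, β and the kernel K are model-specific choices, so this is an illustration of the family rather than the KGAP definition):

```latex
T_s[n] \;\approx\; T_{\mathrm{TF}}[n] + T_{\mathrm{vW}}[n]
  + \iint n^{\alpha}(\mathbf{r})\, K(|\mathbf{r}-\mathbf{r}'|)\, n^{\beta}(\mathbf{r}')\,
    d\mathbf{r}\, d\mathbf{r}'
```

with the Thomas-Fermi and von Weizsäcker terms supplying the local and gradient pieces, and K fixed by requiring the functional to reproduce the linear-response function of the reference system: the jellium Lindhard function for metals, or the jellium-with-gap response in the KGAP construction.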
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
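For reference, plain (non-approximate) kernel competitive learning can be sketched as follows. This toy version deliberately exhibits the scalability problem the paper addresses, since it stores and repeatedly multiplies the full kernel matrix K; the prototype representation and winner-take-all update are standard choices, not necessarily the paper's.

```python
import numpy as np

def kernel_competitive_learning(K, n_proto=2, epochs=20, eta=0.1, seed=0):
    # Each prototype is an implicit convex combination mu_c = sum_i A[c, i] phi(x_i).
    # Squared feature-space distance from phi(x_j) to prototype c:
    #   K[j, j] - 2 (A K)[c, j] + (A K A^T)[c, c]
    # The winner is pulled toward phi(x_j) by blending its coefficient row.
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    A = rng.dirichlet(np.ones(n), size=n_proto)   # rows sum to 1 (convex combos)
    for _ in range(epochs):
        for j in rng.permutation(n):
            AK = A @ K
            d2 = K[j, j] - 2 * AK[:, j] + np.einsum('ci,ci->c', AK, A)
            w = int(np.argmin(d2))                # winning prototype
            A[w] *= (1 - eta)
            A[w, j] += eta                        # row still sums to 1
    AK = A @ K
    d2 = np.diag(K)[None, :] - 2 * AK + np.einsum('ci,ci->c', AK, A)[:, None]
    return np.argmin(d2, axis=0)                  # cluster label per point

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 0.4, (25, 2)), rng.normal(3, 0.4, (25, 2))])
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
labels = kernel_competitive_learning(K)
```

AKCL replaces K with a sampled subspace so that neither the O(n²) storage nor the O(n²) per-epoch updates above are needed.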
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
NASA Astrophysics Data System (ADS)
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional of the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterization of the neutrality of Tunisian Berber populations.
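In standard plug-in notation, the functional in question is R(f'') = ∫ f''(t)² dt. A minimal Gaussian-reference version of the plug-in idea, Silverman's rule (a cruder relative of the analytical approximation proposed above, shown here only to fix ideas), replaces f by a normal density with the sample's scale, which collapses the AMISE-optimal bandwidth to a closed form:

```python
import numpy as np

def silverman_bandwidth(x):
    # AMISE-optimal h needs R(f'') = int f''(t)^2 dt; substituting a normal
    # reference density with the sample's sigma gives h = (4 / (3 n))^(1/5) * sigma.
    n = x.size
    sigma = x.std(ddof=1)
    return (4.0 / (3.0 * n)) ** 0.2 * sigma

def kde(x, grid, h):
    # Gaussian-kernel density estimate: f(t) = (1 / (n h)) * sum_i phi((t - x_i) / h)
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
grid = np.linspace(-4, 4, 401)
f_hat = kde(x, grid, silverman_bandwidth(x))
mass = f_hat.sum() * (grid[1] - grid[0])   # estimate should integrate to ~1
assert abs(mass - 1.0) < 0.02
```

True plug-in methods instead estimate R(f'') from the data, usually iteratively; the paper's contribution is an analytical shortcut so the pdf itself is estimated only once.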
Resummed memory kernels in generalized system-bath master equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu
2014-08-07
Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
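The competing resummations can be illustrated on a scalar caricature. Below, a truncated two-term series (a stand-in for the second- and fourth-order kernel contributions, with λ playing the role of the coupling) is resummed by a [1/1] Padé approximant, which introduces a pole (compare the divergences found at strong coupling), and by a pole-free exponential form (compare the Landau-Zener resummation). This is hedged toy math only; the paper's kernels are time-dependent operators, not scalars.

```python
import math
from fractions import Fraction

def pade_1_1(c1, c2, lam):
    # [1/1] Pade resummation of f ~ c1*lam + c2*lam**2: expanding
    # c1*lam / (1 - (c2/c1)*lam) reproduces both coefficients, but a
    # spurious pole appears at lam = c1/c2.
    return c1 * lam / (1 - (c2 / c1) * lam)

def exp_resum(c1, c2, lam):
    # Exponential resummation c1*lam*exp((c2/c1)*lam): identical series
    # through second order, yet finite for every real lam.
    return c1 * lam * math.exp((c2 / c1) * lam)

# Sanity check: for the geometric series lam + lam^2 + ... = lam/(1 - lam),
# the [1/1] Pade form is exact (checked here in exact rational arithmetic).
lam = Fraction(9, 10)
assert pade_1_1(Fraction(1), Fraction(1), lam) == lam / (1 - lam)
```

The trade-off mirrors the paper's finding: the Padé form can be exact for a favorable series but blows up near its pole, while the exponential form sacrifices that exactness for guaranteed boundedness.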
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
Hesselmann, Andreas; Görling, Andreas
2011-01-21
A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.
NASA Astrophysics Data System (ADS)
Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn
The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential.
Ranking Support Vector Machine with Kernel Approximation
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
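Of the two approximations explored above, random Fourier features are the easier to sketch. Assuming an RBF kernel k(x, y) = exp(-γ‖x−y‖²) (the feature dimension D and γ below are illustrative choices, not the paper's settings), Bochner's theorem gives the classic construction:

```python
import numpy as np

def rff(X, D=2000, gamma=0.5, seed=0):
    # Random Fourier features for k(x, y) = exp(-gamma * ||x - y||^2).
    # Bochner: sample W ~ N(0, 2*gamma*I) and b ~ U[0, 2*pi); then
    # z(x) = sqrt(2/D) * cos(W^T x + b) satisfies E[z(x).z(y)] = k(x, y).
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
Z = rff(X)
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
K_approx = Z @ Z.T   # linear inner products now approximate the RBF kernel
assert np.abs(K_exact - K_approx).max() < 0.2
```

After this map, a linear ranking model on Z stands in for the nonlinear kernel machine, which is what lets the primal truncated Newton solver skip the kernel matrix entirely.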
[Spatial analysis of road traffic accidents with fatalities in Spain, 2008-2011].
Gómez-Barroso, Diana; López-Cuadrado, Teresa; Llácer, Alicia; Palmera Suárez, Rocío; Fernández-Cuenca, Rafael
2015-09-01
To estimate the areas of greatest density of road traffic accidents with fatalities within 24 hours, per km^2 per year, in Spain from 2008 to 2011, using a geographic information system. Accidents were geocoded using the road and kilometer point where they occurred. The average nearest neighbor was calculated to detect possible clusters and to obtain the bandwidth for kernel density estimation. A total of 4775 accidents were analyzed, of which 73.3% occurred on conventional roads. The estimated average distance between accidents was 1,242 meters, and the average expected distance was 10,738 meters. The nearest neighbor index was 0.11, indicating that accidents were spatially clustered. A kernel density map was obtained with a resolution of 1 km^2, which identified the areas of highest density. This methodology allowed a better approximation to locating accident risks by taking kilometer points into account. The map shows areas with a greater density of accidents, which could aid decision-making by the relevant authorities.
Shotorban, Babak
2010-04-01
The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
Olaerts, Heleen; De Bondt, Yamina; Courtin, Christophe M
2018-02-15
As preharvest sprouting of wheat impairs its use in food applications, postharvest solutions to this problem are required. Due to the high kernel-to-kernel variability in enzyme activity in a batch of sprouted wheat, the potential of eliminating severely sprouted kernels based on density differences in NaCl solutions was evaluated. Compared to higher-density kernels, lower-density kernels displayed higher α-amylase, endoxylanase, and peptidase activities as well as signs of (incipient) protein, β-glucan and arabinoxylan breakdown. By discarding the lower-density kernels of mildly and severely sprouted wheat batches (11% and 16%, respectively), density separation increased the flour falling number (FN) of the batch from 280 to 345 s and from 135 to 170 s, respectively, and increased RVA viscosity. This in turn improved dough handling, bread crumb texture and crust color. These data indicate that density separation is a powerful technique for increasing the quality of a batch of sprouted wheat.
An improved numerical method for the kernel density functional estimation of disperse flow
NASA Astrophysics Data System (ADS)
Smith, Timothy; Ranjan, Reetesh; Pantano, Carlos
2014-11-01
We present an improved numerical method to solve the transport equation for the one-point particle density function (pdf), which can be used to model disperse flows. The transport equation, a hyperbolic partial differential equation (PDE) with a source term, is derived from the Lagrangian equations for a dilute particle system by treating position and velocity as state-space variables. The method approximates the pdf by a discrete mixture of kernel density functions (KDFs) with space- and time-varying parameters and performs a global Rayleigh-Ritz-like least-squares minimization on the state space of velocity. Such an approximation leads to a hyperbolic system of PDEs for the KDF parameters that cannot be written completely in conservation form. This system is solved using a numerical method that is path-consistent, according to the theory of non-conservative hyperbolic equations. The resulting formulation is a Roe-like update that utilizes the local eigensystem information of the linearized system of PDEs. We will present the formulation of the base method, its higher-order extension and further regularization to demonstrate that the method can predict statistics of disperse flows in an accurate, consistent and efficient manner. This project was funded by NSF Project NSF-DMS 1318161.
Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards
2013-01-01
Kernel methods have difficulty scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large kernel matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
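The O(n log n) cost comes from the fact that a circulant matrix is diagonalized by the discrete Fourier transform, so a regularized linear system with a circulant coefficient matrix can be solved with FFTs instead of an O(n^3) factorization. A minimal one-level sketch (the paper uses a multi-level construction; this single-level version only illustrates the FFT trick):

```python
import numpy as np

def circulant_solve(c, lam, y):
    # Solve (C + lam*I) x = y in O(n log n), where C is the circulant
    # matrix whose first column is c, using the FFT diagonalization of C.
    eig = np.fft.fft(c)                                  # eigenvalues of C
    return np.fft.ifft(np.fft.fft(y) / (eig + lam)).real

# Sanity check against a dense solve on a small circulant system.
rng = np.random.default_rng(0)
n = 8
c = rng.random(n)
C = np.array([np.roll(c, k) for k in range(n)]).T        # dense circulant
y = rng.random(n)
x_fast = circulant_solve(c, 1.0, y)
x_dense = np.linalg.solve(C + 1.0 * np.eye(n), y)
print(np.allclose(x_fast, x_dense))
```

In the approximate cross-validation setting, `c` would be the first column of a circulant approximation to the kernel matrix and `lam` the ridge regularization parameter.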
Widmann, Gerlig; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Al-Ekrish, Asma'a A
2017-05-01
Differences in noise and density values in MDCT images obtained using ultra-low doses with FBP, ASIR, and MBIR may affect implant site density analysis. The aim of this study was to compare density and noise measurements recorded from dental implant sites using ultra-low doses combined with FBP, ASIR, and MBIR. Cadavers were scanned using a standard protocol and four low-dose protocols. Scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Density (mean Hounsfield units [HUs]) of alveolar bone and noise levels (mean standard deviation of HUs) were recorded from all datasets, and measurements were compared by paired t tests and two-way ANOVA with repeated measures. Significant differences in density and noise were found between the reference dose/FBP protocol and almost all test combinations. Maximum mean differences in HU were 178.35 (bone kernel) and 273.74 (standard kernel), and in noise were 243.73 (bone kernel) and 153.88 (standard kernel). Decreasing radiation dose increased density and noise regardless of reconstruction technique and kernel. The effect of reconstruction technique on density and noise depends on the reconstruction kernel used. • Ultra-low-dose MDCT protocols allowed more than 90 % reductions in dose. • Decreasing the dose generally increased density and noise. • Effect of IRT on density and noise varies with reconstruction kernel. • Accuracy of low-dose protocols for interpretation of bony anatomy not known. • Effect of low doses on accuracy of computer-aided design models unknown.
Enriched reproducing kernel particle method for fractional advection-diffusion equation
NASA Astrophysics Data System (ADS)
Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam
2018-06-01
The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate this method based on a moving least-squares approach. Via the enrichment of fractional-order power functions to the traditional integer-order basis for RKPM, leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.
NASA Astrophysics Data System (ADS)
Gritsenko, O. V.; van Gisbergen, S. J. A.; Görling, A.; Baerends, E. J.
2000-11-01
Time-dependent density functional theory (TDDFT) is applied for calculation of the excitation energies of the dissociating H2 molecule. The standard TDDFT method of adiabatic local density approximation (ALDA) totally fails to reproduce the potential curve for the lowest excited singlet 1Σu+ state of H2. Analysis of the eigenvalue problem for the excitation energies as well as direct derivation of the exchange-correlation (xc) kernel fxc(r,r',ω) shows that ALDA fails due to breakdown of its simple spatially local approximation for the kernel. The analysis indicates a complex structure of the function fxc(r,r',ω), which is revealed in a different behavior of the various matrix elements K_{1c,1c}^{xc} (between the highest occupied Kohn-Sham molecular orbital ψ1 and virtual MOs ψc) as a function of the bond distance R(H-H). The effect of nonlocality of fxc(r,r') is modeled by using different expressions for the corresponding matrix elements of different orbitals. Asymptotically corrected ALDA (ALDA-AC) expressions for the matrix elements K_{12,12}^{xc}(στ) are proposed, while for other matrix elements the standard ALDA expressions are retained. This approach provides substantial improvement over the standard ALDA. In particular, the ALDA-AC curve for the lowest singlet excitation qualitatively reproduces the shape of the exact curve. It displays a minimum and approaches a relatively large positive energy at large R(H-H). ALDA-AC also produces a substantial improvement for the calculated lowest triplet excitation, which is known to suffer from the triplet instability problem of the restricted KS ground state. Failure of the ALDA for the excitation energies is related to the failure of the local density as well as generalized gradient approximations to reproduce correctly the polarizability of dissociating H2.
The expression for the response function χ is derived to show the origin of the field-counteracting term in the xc potential, which is lacking in the local density and generalized gradient approximations and which is required to obtain a correct polarizability.
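For orientation, the eigenvalue problem for the excitation energies referred to above has the standard (Casida) matrix form; writing it out makes explicit where the coupling matrix elements K^{xc} enter. The equations below are the textbook spin-free form, with ε_{ia} = ε_a − ε_i the KS orbital-energy differences, not reproduced from the paper itself:

```latex
% Textbook Casida formulation of the TDDFT excitation-energy problem,
% showing where the coupling matrix elements K_{ia,jb} enter.
\sum_{jb} \Omega_{ia,jb}\, F_{jb} = \omega^{2} F_{ia}, \qquad
\Omega_{ia,jb} = \delta_{ij}\delta_{ab}\,\varepsilon_{ia}^{2}
  + 2\sqrt{\varepsilon_{ia}\varepsilon_{jb}}\; K_{ia,jb},
\]
\[
K_{ia,jb} = \iint \psi_{i}(\mathbf r)\,\psi_{a}(\mathbf r)
  \left[\frac{1}{|\mathbf r-\mathbf r'|}
        + f_{xc}(\mathbf r,\mathbf r',\omega)\right]
  \psi_{j}(\mathbf r')\,\psi_{b}(\mathbf r')\,
  \mathrm d\mathbf r\,\mathrm d\mathbf r'.
```

The spatially local ALDA replaces f_{xc}(r,r',ω) by a frequency-independent contact term, which is exactly the approximation the abstract identifies as breaking down at large bond distance.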
Zhou, Hong; Liu, Shihang; Liu, Yujiao; Liu, Yaxi; You, Jing; Deng, Mei; Ma, Jian; Chen, Guangdeng; Wei, Yuming; Liu, Chunji; Zheng, Youliang
2016-09-13
Kernel length is an important target trait in barley (Hordeum vulgare L.) breeding programs. However, the number of known quantitative trait loci (QTLs) controlling kernel length is limited. In the present study, we aimed to identify major QTLs for kernel length, as well as putative candidate genes that might influence kernel length in wild barley. A recombinant inbred line (RIL) population derived from the barley cultivar Baudin (H. vulgare ssp. vulgare) and the long-kernel wild barley genotype Awcs276 (H. vulgare ssp. spontaneum) was evaluated at one location over three years. A high-density genetic linkage map was constructed using 1,832 genome-wide diversity array technology (DArT) markers, spanning a total of 927.07 cM with an average interval of approximately 0.49 cM. Two major QTLs for kernel length, LEN-3H and LEN-4H, were detected across environments and further validated in a second RIL population derived from Fleet (H. vulgare ssp. vulgare) and Awcs276. In addition, a systematic search of public databases identified four candidate genes and four categories of proteins related to LEN-3H and LEN-4H. This study establishes a fundamental research platform for genomic studies and marker-assisted selection, since LEN-3H and LEN-4H could be used for accelerating progress in barley breeding programs that aim to improve kernel length.
Ayers, Paul W; Parr, Robert G
2008-08-07
Higher-order global softnesses, local softnesses, and softness kernels are defined along with their hardness inverses. The local hardness equalization principle recently derived by the authors is extended to arbitrary order. The resulting hierarchy of equalization principles indicates that the electronegativity/chemical potential, local hardness, and local hyperhardnesses all are constant when evaluated for the ground-state electron density. The new equalization principles can be used to test whether a trial electron density is an accurate approximation to the true ground-state density and to discover molecules with desired reactive properties, as encapsulated by their chemical reactivity indicators.
NASA Astrophysics Data System (ADS)
Badalyan, S. M.; Kim, C. S.; Vignale, G.; Senatore, G.
2007-03-01
We investigate the effect of exchange and correlation (XC) on the plasmon spectrum and the Coulomb drag between spatially separated low-density two-dimensional electron layers. We adopt a different approach, which employs dynamic XC kernels in the calculation of the bilayer plasmon spectra and of the plasmon-mediated drag, and static many-body local field factors in the calculation of the particle-hole contribution to the drag. The spectrum of bilayer plasmons and the drag resistivity are calculated in a broad range of temperatures taking into account both intra- and interlayer correlation effects. We observe that both plasmon modes are strongly affected by XC corrections. After the inclusion of the complex dynamic XC kernels, a decrease of the electron density induces shifts of the plasmon branches in opposite directions. This is in stark contrast with the tendency observed within random phase approximation that both optical and acoustical plasmons move away from the boundary of the particle-hole continuum with a decrease in the electron density. We find that the introduction of XC corrections results in a significant enhancement of the transresistivity and qualitative changes in its temperature dependence. In particular, the large high-temperature plasmon peak that is present in the random phase approximation is found to disappear when the XC corrections are included. Our numerical results at low temperatures are in good agreement with the results of recent experiments by Kellogg [Solid State Commun. 123, 515 (2002)].
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to a theoretical damping kernel but not to a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in practice, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds that Moody's new data show. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation, concluding that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the curve of recovery rates of loans and bonds. Using the kernel density distribution to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management.
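The core point, that a single Beta density cannot place two interior modes while a Gaussian KDE can, is easy to demonstrate on a synthetic bimodal sample. The sample below is illustrative, not Moody's data, and the Beta fit uses the method of moments:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative bimodal "recovery rate" sample on (0, 1).
x = np.concatenate([rng.normal(0.15, 0.05, 1000),
                    rng.normal(0.85, 0.05, 1000)]).clip(0.01, 0.99)

# Method-of-moments Beta fit: whatever (a, b) come out, a single Beta
# density cannot place two interior modes.
m, v = x.mean(), x.var()
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth.
h = 1.06 * x.std() * len(x) ** (-1 / 5)
def kde(t):
    return np.mean(np.exp(-0.5 * ((t - x) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

print(kde(0.15) > kde(0.5) and kde(0.85) > kde(0.5))   # KDE sees both modes
```

Here the moment-matched Beta parameters come out near a = b ≈ 0.5, a U-shaped density with no interior modes, while the KDE clearly resolves the two humps.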
Bhattacharya, Abhishek; Dunson, David B.
2012-01-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels.
Heßelmann, Andreas
2015-04-14
Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations.
Implementation of Two-Component Time-Dependent Density Functional Theory in TURBOMOLE.
Kühn, Michael; Weigend, Florian
2013-12-10
We report the efficient implementation of a two-component time-dependent density functional theory proposed by Wang et al. (Wang, F.; Ziegler, T.; van Lenthe, E.; van Gisbergen, S.; Baerends, E. J. J. Chem. Phys. 2005, 122, 204103) that accounts for spin-orbit effects on excitations of closed-shell systems by employing a noncollinear exchange-correlation kernel. In contrast to the aforementioned implementation, our method is based on two-component effective core potentials as well as Gaussian-type basis functions. It is implemented in the TURBOMOLE program suite for functionals of the local density approximation and the generalized gradient approximation. Accuracy is assessed by comparison of two-component vertical excitation energies of heavy atoms and ions (Cd, Hg, Au(+)) and small molecules (I2, TlH) to other two- and four-component approaches. Efficiency is demonstrated by calculating the electronic spectrum of Au20.
Irvine, Michael A; Hollingsworth, T Déirdre
2018-05-26
Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers in rapidly fitting models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community.
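The adaptive-tolerance idea can be sketched in a few lines of ABC rejection: instead of fixing the acceptance tolerance a priori, set it to a quantile of the simulated distances. The model, prior, and distance below are illustrative stand-ins, not the lymphatic filariasis simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.poisson(4.0, 200)          # synthetic count data, true rate 4

# Minimal ABC rejection with an adaptive tolerance: draw parameters from
# the prior, simulate, and accept the draws whose summary distance falls
# below a data-driven quantile rather than a hand-tuned threshold.
n_draws = 5000
lam = rng.uniform(0, 10, n_draws)                     # prior draws
sims = rng.poisson(lam[:, None], (n_draws, 200))      # simulated data sets
dist = np.abs(sims.mean(axis=1) - observed.mean())    # summary distance
tol = np.quantile(dist, 0.02)                         # adaptive tolerance
posterior = lam[dist <= tol]

print(posterior.mean())                               # should sit near 4
```

The paper's scheme iterates this idea (shrinking the tolerance over generations) and replaces the scalar summary with a kernel density estimate of the full data distribution.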
Lyman continuum observations of solar flares
NASA Technical Reports Server (NTRS)
Machado, M. E.; Noyes, R. W.
1978-01-01
A study is made of Lyman continuum observations of solar flares, using data obtained by the EUV spectroheliometer on the Apollo Telescope Mount. It is found that there are two main types of flare regions: an overall 'mean' flare coincident with the H-alpha flare region, and transient Lyman continuum kernels which can be identified with the H-alpha and X-ray kernels observed by other authors. It is found that the ground level hydrogen population in flares is closer to LTE than in the quiet sun and active regions, and that the level of Lyman continuum formation is lowered in the atmosphere from a mass column density of 0.000005 g/sq cm in the quiet sun to 0.0003 g/sq cm in the mean flare, and to 0.001 g/sq cm in kernels. From these results the amount of chromospheric material 'evaporated' into the high temperature region is derived, which is found to be approximately 10^15 g, in agreement with observations of X-ray emission measures.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
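The building block being modified here is the standard EM algorithm for Gaussian mixtures. A plain-space (not feature-space) version for a two-component 1-D mixture, shown only to fix ideas:

```python
import numpy as np

# Textbook EM for a two-component 1-D Gaussian mixture: alternate between
# soft assignments (E-step) and weighted parameter re-estimation (M-step).
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(5, 1, 700)])

w = np.array([0.5, 0.5])
mu = np.quantile(x, [0.25, 0.75])     # spread-out initial means
var = np.array([x.var(), x.var()])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances from responsibilities
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.sort(mu))                    # recovered component means
```

The paper's method runs this same loop, but with the Gaussians living in the kernel-induced feature space rather than the original data space.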
NASA Astrophysics Data System (ADS)
Jin, Ye; Yang, Yang; Zhang, Du; Peng, Degao; Yang, Weitao
2017-10-01
The optimized effective potential (OEP) that gives accurate Kohn-Sham (KS) orbitals and orbital energies can be obtained from a given reference electron density. These OEP-KS orbitals and orbital energies are used here for calculating electronic excited states with the particle-particle random phase approximation (pp-RPA). Our calculations allow the examination of pp-RPA excitation energies with the exact KS density functional theory (DFT). Various input densities are investigated. Specifically, the excitation energies using the OEP with the electron densities from the coupled-cluster singles and doubles method display the lowest mean absolute error from the reference data for the low-lying excited states. This study probes into the theoretical limit of the pp-RPA excitation energies with the exact KS-DFT orbitals and orbital energies. We believe that higher-order correlation contributions beyond the pp-RPA bare Coulomb kernel are needed in order to achieve even higher accuracy in excitation energy calculations.
Kernels, Degrees of Freedom, and Power Properties of Quadratic Distance Goodness-of-Fit Tests
Lindsay, Bruce G.; Markatou, Marianthi; Ray, Surajit
2014-01-01
In this article, we study the power properties of quadratic-distance-based goodness-of-fit tests. First, we introduce the concept of a root kernel and discuss the considerations that enter the selection of this kernel. We derive an easy-to-use normal approximation to the power of quadratic distance goodness-of-fit tests and base the construction of a noncentrality index, an analogue of the traditional noncentrality parameter, on it. This leads to a method akin to the Neyman-Pearson lemma for constructing optimal kernels for specific alternatives. We then introduce a midpower analysis as a device for choosing optimal degrees of freedom for a family of alternatives of interest. Finally, we introduce a new diffusion kernel, called the Pearson-normal kernel, and study the extent to which the normal approximation to the power of tests based on this kernel is valid. Supplementary materials for this article are available online.
Performance Assessment of Kernel Density Clustering for Gene Expression Profile Data
Zeng, Beiyan; Chen, Yiping P.; Smith, Oscar H.
2003-01-01
Kernel density smoothing techniques have been used in classification or supervised learning of gene expression profile (GEP) data, but their applications to clustering or unsupervised learning of those data have not been explored and assessed. Here we report a kernel density clustering method for analysing GEP data and compare its performance with the three most widely used clustering methods: hierarchical clustering, K-means clustering, and multivariate mixture model-based clustering. Using several methods to measure agreement, between-cluster isolation, and within-cluster coherence, such as the Adjusted Rand Index, the pseudo-F test, the r² test, and the profile plot, we have assessed the effectiveness of kernel density clustering for recovering clusters, and its robustness against noise on clustering both simulated and real GEP data. Our results show that the kernel density clustering method has excellent performance in recovering clusters from simulated data and in grouping large real expression profile data sets into compact and well-isolated clusters, and that it is the most robust clustering method for analysing noisy expression profile data compared to the other three methods assessed.
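One common way to cluster with a kernel density estimate, shown here as an illustrative stand-in for the paper's (unspecified) algorithm, is mode seeking: shift each point toward the local kernel-weighted mean until it settles on a density mode, then group points that share a mode (the mean-shift idea):

```python
import numpy as np

def mean_shift_1d(x, h, iters=50):
    # Kernel density clustering by mode seeking: iterate each point toward
    # the Gaussian-kernel-weighted mean of the data until it converges on a
    # mode of the kernel density estimate.
    modes = x.astype(float).copy()
    for _ in range(iters):
        w = np.exp(-0.5 * ((modes[:, None] - x) / h) ** 2)
        modes = (w * x).sum(axis=1) / w.sum(axis=1)
    return modes

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 0.3, 100), rng.normal(5, 0.3, 100)])
modes = mean_shift_1d(x, h=0.8)
labels = np.unique(np.round(modes, 1))
print(len(labels))        # well-separated clusters -> distinct modes
```

Unlike K-means, the number of clusters is not fixed in advance; it emerges from the number of modes the density estimate supports at bandwidth h.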
Selection and properties of alternative forming fluids for TRISO fuel kernel production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M. P.; King, J. C.; Gorman, B. P.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
Selection and properties of alternative forming fluids for TRISO fuel kernel production
NASA Astrophysics Data System (ADS)
Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ˜10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
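The settling-velocity ingredient of the column-height approximation is, at its simplest, Stokes' law for a small sphere falling through a viscous fluid. A sketch with purely illustrative property values, not the measured ones from the study:

```python
# Stokes-law terminal velocity of a spherical gel droplet settling in a
# candidate forming fluid. All property values below are hypothetical.

def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal velocity (m/s) of a sphere of diameter d (m) with density
    rho_p (kg/m^3) in a fluid of density rho_f (kg/m^3) and dynamic
    viscosity mu (Pa*s)."""
    return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

# Hypothetical 1 mm droplet, density 1300 kg/m^3, in a fluid of
# density 1000 kg/m^3 and viscosity 2 mPa*s.
v = stokes_velocity(1e-3, 1300.0, 1000.0, 2e-3)
print(v)   # m/s; note v scales with d^2 and with the density difference
```

Dividing the droplet's required residence (gelation) time by this velocity gives a first-order estimate of the column height needed for each candidate fluid.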
Earth Structure, Ice Mass Changes, and the Local Dynamic Geoid
NASA Astrophysics Data System (ADS)
Harig, C.; Simons, F. J.
2014-12-01
Spherical Slepian localization functions are a useful method for studying regional mass changes observed by satellite gravimetry. By projecting data onto a sparse basis set, the local field can be estimated more easily than with the full spherical harmonic basis. We have used this method previously to estimate the ice mass change in Greenland from GRACE data, and it can also be applied to other planetary problems such as global magnetic fields. Earth's static geoid, in contrast to the time-variable field, is in large part related to the internal density and rheological structure of the Earth. Past studies have used dynamic geoid kernels to relate this density structure and the internal deformation it induces to the surface geopotential at large scales. These now classical studies of the eighties and nineties were able to estimate the mantle's radial rheological profile, placing constraints on the ratio between upper and lower mantle viscosity. By combining these two methods, spherical Slepian localization and dynamic geoid kernels, we have created local dynamic geoid kernels which are sensitive only to density variations within an area of interest. With these kernels we can estimate the approximate local radial rheological structure that best explains the locally observed geoid on a regional basis. First-order differences of the regional mantle viscosity structure are accessible to this technique. In this contribution we present our latest, as yet unpublished results on the geographical and temporal pattern of ice mass changes in Antarctica over the past decade, and we introduce a new approach to extract regional information about the internal structure of the Earth from the static global gravity field. Both sets of results are linked in terms of the relevant physics, but also in being developed from the marriage of Slepian functions and geoid kernels. 
We make predictions on the utility of our approach to derive fully three-dimensional rheological Earth models, to be used for corrections for glacio-isostatic adjustment, as necessary for the interpretation of time-variable gravity observations in terms of ice sheet mass-balance studies.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
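The fitting strategy is worth sketching: once the exponents are fixed on a geometric sequence b·r^n, the approximation is linear in the coefficients a_n, so a single linear least-squares solve determines them. The target function and constants below are illustrative, not the paper's kernel or its tabulated parameters:

```python
import numpy as np

# Fit f(u) by a sum of exponentials a_n * exp(-b * r**n * u) whose decay
# rates form a geometric sequence; with the rates fixed, the coefficients
# follow from one linear least-squares solve.
u = np.linspace(0.0, 20.0, 400)
f = 1.0 - u / np.sqrt(1.0 + u ** 2)        # smooth decaying algebraic part

b, r, terms = 0.01, 2.0, 12
rates = b * r ** np.arange(terms)          # geometric exponent spacing
E = np.exp(-np.outer(u, rates))            # design matrix, shape (400, 12)
a, *_ = np.linalg.lstsq(E, f, rcond=None)

max_err = np.abs(E @ a - f).max()
print(max_err)
```

In the paper, the base rate and multiplier are themselves optimized by least squares, which is what pushes the accuracy two orders of magnitude beyond the commonly used fixed approximation; the unoptimized constants here only demonstrate the mechanics.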
Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K
2017-10-17
Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) when compared with images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) when compared with the image reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom.
Characterization of a maximum-likelihood nonparametric density estimator of kernel type
NASA Technical Reports Server (NTRS)
Geman, S.; Mcclure, D. E.
1982-01-01
Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: Let x(1), x(2), ..., x(n) be a random sample from a population with density f(0). Let sigma 0 and consider estimators f of f(0) defined by (1).
Automated skin lesion segmentation with kernel density estimation
NASA Astrophysics Data System (ADS)
Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.
2017-07-01
Skin lesion segmentation is a complex step for dermoscopy pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistic distribution of color intensities in the lesion and non-lesion regions.
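A minimal sketch of the idea in the abstract above: estimate one kernel density per region from seed pixels (lesion vs. non-lesion intensities) and label each pixel by whichever density is larger, i.e. a Bayes rule with equal priors. The 1-D intensities and all parameters are synthetic stand-ins for dermoscopy color statistics, not the paper's method:

```python
import numpy as np

def kde(samples, x, h=5.0):
    """Gaussian kernel density estimate of `samples`, evaluated at points x."""
    diff = (x[:, None] - samples[None, :]) / h
    return np.mean(np.exp(-0.5 * diff ** 2), axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
lesion_seed = rng.normal(60, 10, 200)   # darker seed pixels (hypothetical values)
skin_seed = rng.normal(140, 15, 200)    # brighter background seed pixels
# 50 lesion-like pixels followed by 50 skin-like pixels to classify:
pixels = np.concatenate([rng.normal(60, 10, 50), rng.normal(140, 15, 50)])
labels = kde(lesion_seed, pixels) > kde(skin_seed, pixels)   # True = lesion
print(labels[:50].mean(), labels[50:].mean())
```

In a real segmentation the densities would be estimated over color vectors rather than a single intensity, but the per-pixel decision rule is the same.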
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
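The quadratic (order-2) Renyi entropy mentioned above has a convenient closed form under a Gaussian kernel density estimate: the information potential, the integral of the squared density, becomes a double sum over sample pairs. The sketch below uses Silverman's rule of thumb for the bandwidth, which is an illustrative choice, not the paper's optimal selection:

```python
import numpy as np

def quadratic_renyi_entropy(x, bandwidth=None):
    """H2 = -log integral p_hat(x)^2 dx for a Gaussian-kernel KDE of x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if bandwidth is None:
        bandwidth = 1.06 * x.std() * n ** (-0.2)   # Silverman's rule of thumb
    diff = x[:, None] - x[None, :]
    s2 = 2.0 * bandwidth ** 2                      # kernel self-convolution variance
    info_potential = np.mean(np.exp(-diff ** 2 / (2 * s2))) / np.sqrt(2 * np.pi * s2)
    return -np.log(info_potential)

rng = np.random.default_rng(0)
h2 = quadratic_renyi_entropy(rng.normal(size=2000))
print(h2)  # for N(0,1) the true value is log(2*sqrt(pi)) ~ 1.27, up to smoothing bias
```

The closed form avoids numerical integration entirely, which is part of why quadratic entropy is attractive for applications such as AF detection.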
Left ventricle segmentation via graph cut distribution matching.
Ben Ayed, Ismail; Punithakumar, Kumaradevan; Li, Shuo; Islam, Ali; Chong, Jaron
2009-01-01
We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in nearly real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.
On processed splitting methods and high-order actions in path-integral Monte Carlo simulations.
Casas, Fernando
2010-10-21
Processed splitting methods are particularly well adapted to carry out path-integral Monte Carlo (PIMC) simulations: since one is mainly interested in estimating traces of operators, only the kernel of the method is necessary to approximate the thermal density matrix. Unfortunately, they suffer the same drawback as standard, nonprocessed integrators: kernels of effective order greater than two necessarily involve some negative coefficients. This problem can be circumvented, however, by incorporating modified potentials into the composition, thus rendering schemes of higher effective order. In this work we analyze a family of fourth-order schemes recently proposed in the PIMC setting, paying special attention to their linear stability properties, and justify their observed behavior in practice. We also propose a new fourth-order scheme requiring the same computational cost but with an enlarged stability interval.
Nutrition quality of extraction mannan residue from palm kernel cake on broiler chicken
NASA Astrophysics Data System (ADS)
Tafsin, M.; Hanafi, N. D.; Kejora, E.; Yusraini, E.
2018-02-01
This study aims to determine the nutrient quality of palm kernel cake residue from mannan extraction in broiler chickens by evaluating physical quality (specific gravity, bulk density and compacted bulk density), chemical quality (proximate analysis and Van Soest test) and a biological test (metabolizable energy). Treatments comprised T0: palm kernel cake extracted with aquadest (control), T1: palm kernel cake extracted with acetic acid (CH3COOH) 1%, T2: palm kernel cake extracted with aquadest + mannanase enzyme 100 u/l, and T3: palm kernel cake extracted with acetic acid (CH3COOH) 1% + mannanase enzyme 100 u/l. The results showed that mannan extraction had a significant effect (P<0.05) on improving physical quality and numerically increased the crude protein value and decreased the NDF (Neutral Detergent Fiber) value. Treatments had a highly significant influence (P<0.01) on the metabolizable energy value of palm kernel cake residue in broiler chickens. It can be concluded that extraction with aquadest + mannanase enzyme 100 u/l yields the best nutrient quality of palm kernel cake residue for broiler chickens.
Lu, Zhao; Sun, Jing; Butts, Kenneth
2014-05-01
Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.
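The closed-form orthogonal wavelet kernel of the abstract above is not reproduced here, but the benefit of mixing dyadic scales can be illustrated with a hedged stand-in: a kernel built by summing a base kernel at dyadically shrinking length scales, used in a simple kernel ridge fit of a signal with both slow and fast dynamics. All names and parameters are illustrative:

```python
import numpy as np

def multiscale_kernel(x, z, base_scale=1.0, levels=4):
    """Sum of Gaussian kernels at dyadic length scales base_scale / 2**j."""
    d2 = (x[:, None] - z[None, :]) ** 2
    return sum(np.exp(-d2 / (2 * (base_scale / 2 ** j) ** 2)) for j in range(levels))

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 200)
y = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)  # slow + fast parts

K = multiscale_kernel(t, t)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(t)), y)   # kernel ridge regression
y_hat = K @ alpha
print(np.max(np.abs(y_hat - y)))  # small: both time scales are captured
```

A single-scale kernel must compromise between the two components; the dyadic sum lets the fit draw on whichever scale matches each part of the signal.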
Linear-response time-dependent density-functional theory with pairing fields.
Peng, Degao; van Aggelen, Helen; Yang, Yang; Yang, Weitao
2014-05-14
Recent development in particle-particle random phase approximation (pp-RPA) broadens the perspective on ground state correlation energies [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013), Y. Yang, H. van Aggelen, S. N. Steinmann, D. Peng, and W. Yang, J. Chem. Phys. 139, 174110 (2013); D. Peng, S. N. Steinmann, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 104112 (2013)] and N ± 2 excitation energies [Y. Yang, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 224105 (2013)]. So far Hartree-Fock and approximated density-functional orbitals have been utilized to evaluate the pp-RPA equation. In this paper, to further explore the fundamentals and the potential use of pairing matrix dependent functionals, we present the linear-response time-dependent density-functional theory with pairing fields with both adiabatic and frequency-dependent kernels. This theory is related to the density-functional theory and time-dependent density-functional theory for superconductors, but is applied to normal non-superconducting systems for our purpose. Due to the lack of the proof of the one-to-one mapping between the pairing matrix and the pairing field for time-dependent systems, the linear-response theory is established based on the representability assumption of the pairing matrix. The linear response theory justifies the use of approximated density-functionals in the pp-RPA equation. This work sets the fundamentals for future density-functional development to enhance the description of ground state correlation energies and N ± 2 excitation energies.
Suspended liquid particle disturbance on laser-induced blast wave and low density distribution
NASA Astrophysics Data System (ADS)
Ukai, Takahiro; Zare-Behtash, Hossein; Kontis, Konstantinos
2017-12-01
The impurity effect of suspended liquid particles on laser-induced gas breakdown was experimentally investigated in quiescent gas. The focus of this study is the investigation of the influence of the impurities on the shock wave structure as well as the low density distribution. A 532 nm Nd:YAG laser beam with a pulse energy of 188 mJ was focused inside a chamber filled with suspended liquid particles 0.9 ± 0.63 μm in diameter. Several shock waves are generated by multiple gas breakdowns along the beam path in the breakdown with particles. Four types of shock wave structures can be observed: (1) dual blast waves with a similar shock radius, (2) dual blast waves with a large shock radius at the lower breakdown, (3) dual blast waves with a large shock radius at the upper breakdown, and (4) triple blast waves. The independent blast waves interact with each other and enhance the shock strength behind the shock front in the lateral direction. The triple blast waves lead to the strongest shock wave in all cases. The shock wave fronts that propagate toward the opposite laser focal spot impinge on one another, and thereafter a transmitted shock wave (TSW) appears. The TSW interacts with the low density core, called a kernel; the kernel then expands longitudinally and quickly due to a Richtmyer-Meshkov-like instability. The laser-particle interaction causes an increase in the kernel volume to approximately five times that in the gas breakdown without particles. In addition, the laser-particle interaction can improve the laser energy efficiency.
Lytras, Theodore; Kossyvakis, Athanasios; Mentis, Andreas
2016-02-01
The results of neuraminidase inhibitor (NAI) enzyme inhibition assays are commonly expressed as 50% inhibitory concentration (IC50) fold-change values and presented graphically in box plots (box-and-whisker plots). An alternative and more informative type of graph is the kernel density plot, which we propose should be the preferred one for this purpose. In this paper we discuss the limitations of box plots and the advantages of the kernel density plot, and we present NAIplot, an open-source web application that allows convenient creation of density plots specifically for visualizing the results of NAI enzyme inhibition assays, as well as for general purposes. Copyright © 2015 Elsevier B.V. All rights reserved.
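The box-plot limitation argued above is easy to demonstrate: a bimodal sample (e.g. NAI-sensitive vs. resistant viruses) collapses to a single five-number summary, while a kernel density curve keeps both modes visible. The data below are synthetic stand-ins, not assay results:

```python
import numpy as np

def gaussian_kde_curve(samples, grid, bandwidth):
    """Evaluate a Gaussian-kernel density estimate of `samples` on `grid`."""
    diff = grid[:, None] - samples[None, :]
    weights = np.exp(-0.5 * (diff / bandwidth) ** 2)
    return weights.mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
# hypothetical log10 IC50 fold-changes: most strains near 0, a resistant cluster near 2
log_fold = np.concatenate([rng.normal(0.0, 0.3, 300), rng.normal(2.0, 0.3, 60)])
grid = np.linspace(-1.5, 3.5, 500)
density = gaussian_kde_curve(log_fold, grid, bandwidth=0.2)

box_summary = np.percentile(log_fold, [0, 25, 50, 75, 100])  # all a box plot shows
print(box_summary.round(2))
# the density curve rises again near 2, revealing the resistant cluster:
print(density[np.abs(grid - 2.0).argmin()] > density[np.abs(grid - 1.0).argmin()])
```

Plotting `density` against `grid` (e.g. with matplotlib) gives the kind of kernel density plot the paper advocates.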
Górnaś, Paweł; Mišina, Inga; Grāvīte, Ilze; Soliven, Arianne; Kaufmane, Edīte; Segliņa, Dalija
2015-01-01
Composition of tocochromanols in kernels recovered from 16 different apricot varieties (Prunus armeniaca L.) was studied. Three tocopherol (T) homologues, namely α, γ and δ, were quantified in all tested samples by an RP-HPLC/FLD method. The γ-T was the main tocopherol homologue identified in apricot kernels and constituted approximately 93% of total detected tocopherols. The RP-UPLC-ESI/MS(n) method detected trace amounts of two tocotrienol homologues α and γ in the apricot kernels. The concentration of individual tocopherol homologues in kernels of different apricots varieties, expressed in mg/100 g dwb, was in the following range: 1.38-4.41 (α-T), 42.48-73.27 (γ-T) and 0.77-2.09 (δ-T). Moreover, the ratio between individual tocopherol homologues α:γ:δ was nearly constant in all varieties and amounted to approximately 2:39:1.
NASA Astrophysics Data System (ADS)
Jourde, K.; Gibert, D.; Marteau, J.
2015-04-01
This paper examines how the resolution of small-scale geological density models is improved through the fusion of information provided by gravity measurements and density muon radiographies. Muon radiography aims at determining the density of geological bodies by measuring their screening effect on the natural flux of cosmic muons. Muon radiography essentially works like a medical X-ray scan and integrates density information along elongated narrow conical volumes. Gravity measurements are linked to density by a 3-D integration encompassing the whole studied domain. We establish the mathematical expressions of these integration formulas - called acquisition kernels - and derive the resolving kernels that are spatial filters relating the true unknown density structure to the density distribution actually recovered from the available data. The resolving kernel approach allows one to quantitatively describe the improvement of the resolution of the density models achieved by merging gravity data and muon radiographies. The method developed in this paper may be used to optimally design the geometry of the field measurements to be performed in order to obtain a given spatial resolution pattern of the density model to be constructed. The resolving kernels derived in the joined muon/gravimetry case indicate that gravity data are almost useless for constraining the density structure in regions sampled by more than two muon tomography acquisitions. Interestingly, the resolution in deeper regions not sampled by muon tomography is significantly improved by joining the two techniques. The method is illustrated with examples for the La Soufrière volcano of Guadeloupe.
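A toy 1-D illustration of the resolving-kernel idea: for a linear acquisition d = G m, a damped generalized inverse G_inv recovers m_hat = G_inv d = R m_true, where each row of R = G_inv G is the spatial filter through which the true density is seen. The Gaussian "acquisition kernels" below are illustrative, not the muon or gravimetry kernels of the paper:

```python
import numpy as np

n_cells, n_data = 60, 25
cells = np.arange(n_cells, dtype=float)
centers = np.linspace(0, n_cells - 1, n_data)
# each datum integrates density through a Gaussian acquisition kernel
G = np.exp(-0.5 * ((cells[None, :] - centers[:, None]) / 4.0) ** 2)

damping = 1e-2
G_inv = np.linalg.solve(G.T @ G + damping * np.eye(n_cells), G.T)
R = G_inv @ G                    # resolving kernels: one row per model cell

m_true = np.zeros(n_cells)
m_true[30] = 1.0                 # point density anomaly
m_hat = R @ m_true               # what the data can actually recover: a blurred spike
print(int(np.argmax(m_hat)))
```

Plotting rows of R shows how wide the blur is in each region; adding a second, independent acquisition operator (a second block of rows in G) narrows the resolving kernels, which is the quantitative sense in which merging data sets improves resolution.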
Limitations of shallow nets approximation.
Lin, Shao-Bo
2017-10-01
In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximating by shallow nets can be realized for all functions in balls of reproducing kernel Hilbert space with high probability, which is different with the classical minimax approximation error estimates. This result together with the existing approximation results for deep nets shows the limitations for shallow nets and provides a theoretical explanation on why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
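The KECA ranking step described above (sort kernel eigenpairs by entropy contribution rather than by eigenvalue) can be sketched directly: the contribution of eigenpair (lambda_i, e_i) to the Renyi entropy estimate is proportional to lambda_i (1^T e_i)^2. This is a sketch of plain KECA, not the OKECA extension, and the length-scale is an arbitrary illustrative choice:

```python
import numpy as np

def keca(X, n_components, length_scale):
    """Project X onto kernel eigenpairs ranked by entropy contribution."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * length_scale ** 2))            # Gaussian kernel matrix
    eigval, eigvec = np.linalg.eigh(K)                   # ascending eigenvalues
    entropy = eigval * eigvec.sum(axis=0) ** 2           # lambda_i * (1^T e_i)^2
    top = np.argsort(entropy)[::-1][:n_components]       # most "entropic" pairs first
    return eigvec[:, top] * np.sqrt(np.abs(eigval[top])) # kernel-space projection

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
Z = keca(X, n_components=2, length_scale=1.0)
print(Z.shape)
```

Kernel PCA would instead keep the pairs with the largest eigenvalues; the entropy ranking can pick a different, lower-variance direction when it carries more of the information potential.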
NASA Astrophysics Data System (ADS)
Shiju, S.; Sumitra, S.
2017-12-01
In this paper, the multiple kernel learning (MKL) problem is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, both are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for the function in the global RKHS that can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation helps the knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
Macroscopic and microscopic components of exchange-correlation interactions
NASA Astrophysics Data System (ADS)
Sottile, F.; Karlsson, K.; Reining, L.; Aryasetiawan, F.
2003-11-01
We consider two commonly used approaches for the ab initio calculation of optical-absorption spectra, namely, many-body perturbation theory based on Green’s functions and time-dependent density-functional theory (TDDFT). The former leads to the two-particle Bethe-Salpeter equation that contains a screened electron-hole interaction. We approximate this interaction in various ways, and discuss in particular the results obtained for a local contact potential. This, in fact, allows us to straightforwardly make the link to the TDDFT approach, and to discuss the exchange-correlation kernel fxc that corresponds to the contact exciton. Our main results, illustrated in the examples of bulk silicon, GaAs, argon, and LiF, are the following. (i) The simple contact exciton model, used on top of an ab initio calculated band structure, yields reasonable absorption spectra. (ii) Qualitatively extremely different fxc can be derived approximately from the same Bethe-Salpeter equation. These kernels can, however, yield very similar spectra. (iii) A static fxc, both with and without a long-range component, can create transitions in the quasiparticle gap. To the best of our knowledge, this is the first time that TDDFT has been shown to be able to reproduce bound excitons.
Kernel Wiener filter and its application to pattern recognition.
Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko
2010-11-01
The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the best estimated signal with respect to the squared error averaged over the original and the observed signals among linear operators. The kernel WF (KWF), extended directly from WF, has the problem that an additive noise has to be handled by samples. Since the computational complexity of kernel methods depends on the number of samples, a huge computational cost is necessary in that case. By using a first-order approximation of kernel functions, we realize a KWF that can handle such noise not by samples but as a random variable. We also propose an error estimation method for kernel filters by using the approximations. In order to show the advantages of the proposed methods, we conducted experiments to denoise images and estimate errors. We also apply KWF to classification, since KWF can provide an approximated result of the maximum a posteriori classifier that provides the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise or as a new regularization to suppress changes in the input space, whereas the ordinary regularization for the kernel method suppresses changes in the feature space. In order to show the advantages of the proposed methods, we conducted experiments of binary and multiclass classifications and classification in the presence of noise.
Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming
2014-01-01
To evaluate the improvement of iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS with both medium kernel and sharp kernel applied. Image noise and the stent diameter were investigated. Image noise was measured both in background vessel and in-stent lumen as objective image evaluation. Image noise score and stent score were performed as subjective image evaluation. The CTCA images reconstructed with IRIS were associated with significant noise reduction compared to that of CTCA images reconstructed using FBP technique in both of background vessel and in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% in medium kernel (P
A fast and objective multidimensional kernel density estimation method: fastKDE
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...
2016-03-07
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multiple dimensions is introduced. This multidimensional extension is combined with a recently developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples takes only 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
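The conditional-PDF capability highlighted above can be illustrated with a plain grid-evaluated bivariate KDE: compute the joint density p(x, y) on a grid, integrate out y to get the marginal, and divide column-wise to get p(y | x). This is a generic Gaussian product-kernel sketch, not the Bernacchia-Pigolotti self-consistent estimator that fastKDE implements; the bandwidths are illustrative:

```python
import numpy as np

def kde2d_grid(x, y, gx, gy, hx, hy):
    """Bivariate Gaussian product-kernel KDE evaluated on the grid gx x gy."""
    kx = np.exp(-0.5 * ((gx[:, None] - x[None, :]) / hx) ** 2)   # (nx, n)
    ky = np.exp(-0.5 * ((gy[:, None] - y[None, :]) / hy) ** 2)   # (ny, n)
    return kx @ ky.T / (len(x) * 2 * np.pi * hx * hy)            # p(x, y)

rng = np.random.default_rng(0)
x = rng.normal(size=4000)
y = 0.5 * x + rng.normal(scale=0.5, size=4000)     # y depends on x
gx = np.linspace(-3, 3, 121)
gy = np.linspace(-3, 3, 121)
joint = kde2d_grid(x, y, gx, gy, hx=0.25, hy=0.25)

dy = gy[1] - gy[0]
marginal_x = joint.sum(axis=1) * dy                # integrate over y
conditional = joint / marginal_x[:, None]          # p(y | x), one row per x value
# conditional mean of y given x = 1; close to 0.5 up to smoothing bias
cmean = (conditional[np.argmin(np.abs(gx - 1.0))] * gy).sum() * dy
print(round(cmean, 2))
```

Recovering a conditional mean that tracks the generating relationship is the kind of "non-trivial relationship detection" the abstract refers to; fastKDE does the same thing with an objectively chosen kernel and far less computation.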
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
A Linear Kernel for Co-Path/Cycle Packing
NASA Astrophysics Data System (ADS)
Chen, Zhi-Zhong; Fellows, Michael; Fu, Bin; Jiang, Haitao; Liu, Yang; Wang, Lusheng; Zhu, Binhai
Bounded-Degree Vertex Deletion is a fundamental problem in graph theory that has new applications in computational biology. In this paper, we address a special case of Bounded-Degree Vertex Deletion, the Co-Path/Cycle Packing problem, which asks to delete as few vertices as possible such that the graph of the remaining (residual) vertices is composed of disjoint paths and simple cycles. The problem falls into the well-known class of 'node-deletion problems with hereditary properties', is hence NP-complete, and is unlikely to admit a polynomial-time approximation algorithm with approximation factor smaller than 2. In the framework of parameterized complexity, we present a kernelization algorithm that produces a kernel with at most 37k vertices, improving on the super-linear kernel of Fellows et al.'s general theorem for Bounded-Degree Vertex Deletion. Using this kernel and the method of bounded search trees, we devise an FPT algorithm that runs in time O*(3.24^k). On the negative side, we show that the problem is APX-hard and unlikely to have a kernel smaller than 2k by a reduction from Vertex Cover.
NASA Astrophysics Data System (ADS)
Jourde, K.; Gibert, D.; Marteau, J.
2015-08-01
This paper examines how the resolution of small-scale geological density models is improved through the fusion of information provided by gravity measurements and density muon radiographies. Muon radiography aims at determining the density of geological bodies by measuring their screening effect on the natural flux of cosmic muons. Muon radiography essentially works like a medical X-ray scan and integrates density information along elongated narrow conical volumes. Gravity measurements are linked to density by a 3-D integration encompassing the whole studied domain. We establish the mathematical expressions of these integration formulas - called acquisition kernels - and derive the resolving kernels that are spatial filters relating the true unknown density structure to the density distribution actually recovered from the available data. The resolving kernel approach allows one to quantitatively describe the improvement of the resolution of the density models achieved by merging gravity data and muon radiographies. The method developed in this paper may be used to optimally design the geometry of the field measurements to be performed in order to obtain a given spatial resolution pattern of the density model to be constructed. The resolving kernels derived in the joined muon-gravimetry case indicate that gravity data are almost useless for constraining the density structure in regions sampled by more than two muon tomography acquisitions. Interestingly, the resolution in deeper regions not sampled by muon tomography is significantly improved by joining the two techniques. The method is illustrated with examples for the La Soufrière volcano of Guadeloupe.
Symbol recognition with kernel density matching.
Zhang, Wan; Wenyin, Liu; Zhang, Kun
2006-12-01
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
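The matching step described above can be sketched by turning two point sets (symbols) into 2D kernel densities on a common grid and comparing them with a discretized Kullback-Leibler divergence. The orientation search (gradient-based angle search or ICA) is omitted, and all shapes and parameters are illustrative:

```python
import numpy as np

def density_on_grid(points, grid, bandwidth=0.15):
    """Gaussian-kernel density of 2D points, normalized to a pmf on the grid."""
    d2 = np.sum((grid[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    dens = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    return dens / dens.sum()

def kl_divergence(p, q, eps=1e-12):
    """Discretized KL divergence between two pmfs on the same grid."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

gx, gy = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.column_stack([0.7 * np.cos(t), 0.7 * np.sin(t)])     # symbol A
square = np.column_stack([np.clip(1.4 * np.cos(t), -0.7, 0.7),   # symbol B
                          np.clip(1.4 * np.sin(t), -0.7, 0.7)])  # (rounded square)
p, q = density_on_grid(circle, grid), density_on_grid(square, grid)
print(kl_divergence(p, p), kl_divergence(p, q))  # self-distance 0, cross-distance > 0
```

Ranking candidate symbols by this divergence against a query density yields a similarity-based recognizer; the kernel representation makes it robust to point-sampling differences between drawings.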
Study of multiband disordered systems using the typical medium dynamical cluster approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yi; Terletska, Hanna; Moore, C.
We generalize the typical medium dynamical cluster approximation to multiband disordered systems. Using our extended formalism, we perform a systematic study of the nonlocal correlation effects induced by disorder on the density of states and the mobility edge of the three-dimensional two-band Anderson model. We include interband and intraband hopping and an intraband disorder potential. Our results are consistent with those obtained by the transfer matrix and the kernel polynomial methods. We also apply the method to KxFe2-ySe2 with Fe vacancies. Despite the strong vacancy disorder and anisotropy, we find the material is not an Anderson insulator. Moreover, our results demonstrate the application of the typical medium dynamical cluster approximation method to study Anderson localization in real materials.
Study of multiband disordered systems using the typical medium dynamical cluster approximation
Zhang, Yi; Terletska, Hanna; Moore, C.; ...
2015-11-06
We generalize the typical medium dynamical cluster approximation to multiband disordered systems. Using our extended formalism, we perform a systematic study of the nonlocal correlation effects induced by disorder on the density of states and the mobility edge of the three-dimensional two-band Anderson model. We include interband and intraband hopping and an intraband disorder potential. Our results are consistent with those obtained by the transfer matrix and the kernel polynomial methods. We also apply the method to KxFe2-ySe2 with Fe vacancies. Despite the strong vacancy disorder and anisotropy, we find the material is not an Anderson insulator. Moreover, our results demonstrate the application of the typical medium dynamical cluster approximation method to study Anderson localization in real materials.
Nonlocal kinetic energy functionals by functional integration.
Mi, Wenhui; Genova, Alessandro; Pavanello, Michele
2018-05-14
Since the seminal studies of Thomas and Fermi, researchers in the Density-Functional Theory (DFT) community have been searching for accurate electron density functionals. Arguably, the toughest functional to approximate is the noninteracting kinetic energy, Ts[ρ], the subject of this work. The typical paradigm is to first approximate the energy functional and then take its functional derivative, δTs[ρ]/δρ(r), yielding a potential that can be used in orbital-free DFT or subsystem DFT simulations. Here, this paradigm is challenged by constructing the potential from the second functional derivative via functional integration. A new nonlocal functional for Ts[ρ] is prescribed [which we dub Mi-Genova-Pavanello (MGP)] having a density-independent kernel. MGP is constructed to satisfy three exact conditions: (1) a nonzero "kinetic electron" arising from a nonzero exchange hole; (2) the second functional derivative must reduce to the inverse Lindhard function in the limit of homogeneous densities; (3) the potential is derived from functional integration of the second functional derivative. Pilot calculations show that MGP is capable of reproducing accurate equilibrium volumes, bulk moduli, total energies, and electron densities for metallic (body-centered cubic, face-centered cubic) and semiconducting (crystal diamond) phases of silicon as well as of III-V semiconductors. The MGP functional is found to be numerically stable, typically reaching self-consistency within 12 iterations of a truncated Newton minimization algorithm. MGP's computational cost and memory requirements are low and comparable to the Wang-Teter nonlocal functional or any generalized gradient approximation functional.
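For orientation, the local baseline that nonlocal functionals such as MGP aim to improve on is the Thomas-Fermi kinetic energy, T_TF[rho] = C_TF times the integral of rho^(5/3). The sketch below evaluates it on a radial grid for a normalized Gaussian density in atomic units; this is the textbook local-density kinetic term, not the MGP functional itself:

```python
import numpy as np

C_TF = 0.3 * (3.0 * np.pi ** 2) ** (2.0 / 3.0)   # Thomas-Fermi constant, ~2.871 a.u.

r = np.linspace(1e-6, 10.0, 4000)
dr = r[1] - r[0]
sigma = 1.0
# one-electron Gaussian density, normalized in 3D
rho = np.exp(-r ** 2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** 1.5

norm = np.sum(4 * np.pi * r ** 2 * rho) * dr                     # should be ~1 electron
t_tf = np.sum(C_TF * rho ** (5.0 / 3.0) * 4 * np.pi * r ** 2) * dr
print(round(norm, 3), round(t_tf, 3))
```

Nonlocal functionals add a double-integral kernel term on top of (or in place of) this local piece so that the second functional derivative can reproduce the inverse Lindhard response, which a purely local form cannot.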
Nonlocal kinetic energy functionals by functional integration
NASA Astrophysics Data System (ADS)
Mi, Wenhui; Genova, Alessandro; Pavanello, Michele
2018-05-01
Since the seminal studies of Thomas and Fermi, researchers in the Density-Functional Theory (DFT) community are searching for accurate electron density functionals. Arguably, the toughest functional to approximate is the noninteracting kinetic energy, Ts[ρ], the subject of this work. The typical paradigm is to first approximate the energy functional and then take its functional derivative, δTs[ρ]/δρ(r), yielding a potential that can be used in orbital-free DFT or subsystem DFT simulations. Here, this paradigm is challenged by constructing the potential from the second-functional derivative via functional integration. A new nonlocal functional for Ts[ρ] is prescribed [which we dub Mi-Genova-Pavanello (MGP)] having a density-independent kernel. MGP is constructed to satisfy three exact conditions: (1) a nonzero "Kinetic electron" arising from a nonzero exchange hole; (2) the second functional derivative must reduce to the inverse Lindhard function in the limit of homogeneous densities; (3) the potential is derived from functional integration of the second functional derivative. Pilot calculations show that MGP is capable of reproducing accurate equilibrium volumes, bulk moduli, total energy, and electron densities for metallic (body-centered cubic, face-centered cubic) and semiconducting (crystal diamond) phases of silicon as well as of III-V semiconductors. The MGP functional is found to be numerically stable, typically reaching self-consistency within 12 iterations of a truncated Newton minimization algorithm. MGP's computational cost and memory requirements are low and comparable to the Wang-Teter nonlocal functional or any generalized gradient approximation functional.
Kumar, Ajay; Mantovani, E E; Seetan, R; Soltani, A; Echeverry-Solarte, M; Jain, S; Simsek, S; Doehlert, D; Alamri, M S; Elias, E M; Kianian, S F; Mergoum, M
2016-03-01
Wheat kernel shape and size have been under selection since early domestication. Kernel morphology is a major consideration in wheat breeding, as it impacts grain yield and quality. A population of 160 recombinant inbred lines (RIL), developed using an elite (ND 705) and a nonadapted genotype (PI 414566), was extensively phenotyped in replicated field trials and genotyped using the Infinium iSelect 90K assay to gain insight into the genetic architecture of kernel shape and size. A high density genetic map consisting of 10,172 single nucleotide polymorphism (SNP) markers, with an average marker density of 0.39 cM/marker, identified a total of 29 genomic regions associated with six grain shape and size traits; ∼80% of these regions were associated with multiple traits. The analyses showed that kernel length (KL) and width (KW) are genetically independent, while a large number (∼59%) of the quantitative trait loci (QTL) for kernel shape traits were in common with genomic regions associated with kernel size traits. The most significant QTL was identified on chromosome 4B, and could be an ortholog of major rice grain size and shape gene or . Major and stable loci also were identified on the homeologous regions of Group 5 chromosomes, and in the regions of (6A) and (7A) genes. Both parental genotypes contributed equivalent positive QTL alleles, suggesting that the nonadapted germplasm has a great potential for enhancing the gene pool for grain shape and size. This study provides new knowledge on the genetic dissection of kernel morphology, with a much higher resolution, which may aid further improvement in wheat yield and quality using genomic tools. Copyright © 2016 Crop Science Society of America.
Edgeworth expansions of stochastic trading time
NASA Astrophysics Data System (ADS)
Decamps, Marc; De Schepper, Ann
2010-08-01
Under most local and stochastic volatility models, the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This relates the implied volatility smile nicely to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in the path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.
Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent.
Hu, En-Liang; Kwok, James T
2015-09-01
Nonparametric kernel learning (NPKL) is a flexible approach to learn the kernel matrix directly without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD). Low-rank approximation avoids the expensive positive semidefinite constraint in the SDP by replacing the kernel matrix variable with V(T)V, where V is a low-rank matrix. The resultant nonlinear optimization problem is then solved by BCD, which optimizes each column of V sequentially. It can be shown that the proposed algorithm has nice convergence properties and low computational complexity. Experiments on a number of real-world data sets show that the proposed algorithm outperforms state-of-the-art NPKL solvers.
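The low-rank-plus-BCD structure above can be sketched on a toy problem. This is only an illustration: the objective here (fitting a given target matrix in Frobenius norm) is a stand-in for the actual NPKL objective, which instead encodes pairwise similarity/dissimilarity constraints. The K = VᵀV substitution and the column-wise updates are the point:

```python
# Toy sketch: learn K = V^T V (rank r) to approximate a target PSD matrix K,
# updating one column of V at a time (block coordinate descent). Positive
# semidefiniteness of V^T V holds by construction, so no SDP is needed.

def gram_column(V, j):
    # j-th column of V^T V, i.e. [v_i . v_j for all i]
    r, n = len(V), len(V[0])
    return [sum(V[k][i] * V[k][j] for k in range(r)) for i in range(n)]

def loss(V, K):
    n = len(V[0])
    return sum((gram_column(V, j)[i] - K[i][j]) ** 2
               for j in range(n) for i in range(n))

def bcd_sweep(V, K, lr=0.05):
    r, n = len(V), len(V[0])
    for j in range(n):                 # one block = one column of V
        col = gram_column(V, j)
        for k in range(r):             # gradient of the loss w.r.t. V[k][j]
            g = sum(4.0 * (col[i] - K[i][j]) * V[k][i] for i in range(n))
            V[k][j] -= lr * g

K = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]  # target PSD matrix
V = [[1.0, 0.5, 0.0], [0.0, 0.5, 1.0]]                   # rank-2 factor
before = loss(V, K)
for _ in range(200):
    bcd_sweep(V, K)
after = loss(V, K)
print(before, "->", after)
```

Since K here has rank 3, a rank-2 factor cannot fit it exactly; the loss settles near the best rank-2 approximation error, which illustrates the trade-off the low-rank substitution makes.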
Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K
2010-06-01
In this brief we have proposed multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, being an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low dimensional label space. The classification of patterns is carried out in this low dimensional label subspace. A test pattern is classified depending on its proximity to class centroids. The effectiveness of the proposed method is experimentally verified and compared with multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to that of multiclass SVM, with comparable testing accuracy, principally in large data sets. Experiments in this brief also serve as a comparison of the performance of VVRKFA with stratified random sampling and sub-sampling.
New developments of the Extended Quadrature Method of Moments to solve Population Balance Equations
NASA Astrophysics Data System (ADS)
Pigou, Maxime; Morchain, Jérôme; Fede, Pascal; Penet, Marie-Isabelle; Laronze, Geoffrey
2018-07-01
Population Balance Models have a wide range of applications in many industrial fields, as they account for heterogeneity among properties that is crucial for some system modelling. They describe the evolution of a Number Density Function (NDF) using a Population Balance Equation (PBE). For instance, they are applied to gas-liquid columns or stirred reactors, aerosol technology, crystallisation processes, fine particles, or biological systems. There is significant interest in fast, stable, and accurate numerical methods for solving PBEs; one class of such methods does not solve directly for the NDF but instead resolves its moments. These methods of moments, and in particular quadrature-based methods of moments, have been successfully applied to a variety of systems. Point-wise values of the NDF are sometimes required but are not directly accessible from the moments. To address this issue, the Extended Quadrature Method of Moments (EQMOM) has been developed in the past few years; it approximates the NDF, from its moments, as a convex mixture of Kernel Density Functions (KDFs) of the same parametric family. In the present work EQMOM is further developed in two respects. The main one is a significant improvement of the core iterative procedure of the method; the corresponding reduction of its computational cost is estimated to range from 60% up to 95%. The second is an extension of EQMOM to two new KDFs used for the approximation, the Weibull and the Laplace kernels. All MATLAB source codes used in this work are provided with this article.
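The representation EQMOM works with can be illustrated by the forward map from a KDF mixture to its raw moments. This sketch (with Gaussian KDFs and made-up weights/abscissas) only computes moments of a known mixture by quadrature; the hard part that the abstract's improved iterative procedure targets is the inverse problem, recovering the mixture parameters from a given moment set:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_ndf(x, weights, centers, sigma):
    # Convex mixture of Gaussian kernel density functions (KDFs),
    # the form EQMOM uses to reconstruct an NDF from its moments.
    return sum(w * gaussian(x, mu, sigma) for w, mu in zip(weights, centers))

def raw_moment(k, weights, centers, sigma, lo=-10.0, hi=10.0, n=20000):
    # Trapezoidal quadrature of m_k = integral of x^k n(x) dx.
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        f = (x ** k) * mixture_ndf(x, weights, centers, sigma)
        total += f if 0 < i < n else 0.5 * f
    return total * h

weights, centers, sigma = [0.3, 0.7], [-1.0, 2.0], 0.5
m0 = raw_moment(0, weights, centers, sigma)  # weights sum to 1, so m0 ~ 1
m1 = raw_moment(1, weights, centers, sigma)  # 0.3*(-1) + 0.7*2 = 1.1
print(m0, m1)
```

With the mixture in hand, point-wise NDF values (which raw moment methods cannot provide) come directly from `mixture_ndf`.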
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, Rodney Dale; Johnson, Jared A.; Collins, Jack Lee
A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations such as a 2.5 mol ratio of carbon to uranium in the feed solutions failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90–92% of TD with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. Lastly, the selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.
NASA Astrophysics Data System (ADS)
Hunt, R. D.; Johnson, J. A.; Collins, J. L.; McMurray, J. W.; Reif, T. J.; Brown, D. R.
2018-01-01
A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations such as a 2.5 mol ratio of carbon to uranium in the feed solutions failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90-92% of TD with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. The selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.
Hunt, Rodney Dale; Johnson, Jared A.; Collins, Jack Lee; ...
2017-10-12
A comparison study on carbon blacks and dispersing agents was performed to determine their impacts on the final properties of uranium fuel kernels with carbon. The main target compositions in this internal gelation study were 10 and 20 mol % uranium dicarbide (UC2), which is UC1.86, with the balance uranium dioxide. After heat treatment at 1900 K in flowing carbon monoxide in argon for 12 h, the density of the kernels produced using an X-energy proprietary carbon suspension, which is commercially available, ranged from 96% to 100% of theoretical density (TD), with full conversion of UC to UC2 at both carbon concentrations. However, higher carbon concentrations such as a 2.5 mol ratio of carbon to uranium in the feed solutions failed to produce gel spheres with the proprietary carbon suspension. The kernels using our former baseline of Mogul L carbon black and Tamol SN were 90–92% of TD with full conversion of UC to UC2 at a variety of carbon levels. Raven 5000 carbon black and Tamol SN were used to produce 10 mol % UC2 kernels with 95% of TD. However, an increase in the Raven 5000 concentration led to a kernel density below 90% of TD. Raven 3500 carbon black and Tamol SN were used to make very dense kernels without complete conversion to UC2. Lastly, the selection of the carbon black and dispersing agent is highly dependent on the desired final properties of the target kernels.
Computational investigation of intense short-wavelength laser interaction with rare gas clusters
NASA Astrophysics Data System (ADS)
Bigaouette, Nicolas
Current Very High Temperature Reactor designs incorporate TRi-structural ISOtropic (TRISO) particle fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel by dropping a cold precursor solution into a column of hot trichloroethylene (TCE). The temperature difference drives the liquid precursor solution to precipitate the metal solution into gel spheres before reaching the bottom of a production column. Over time, gelation byproducts inhibit complete gelation and the TCE must be purified or discarded. The resulting mixed-waste stream is expensive to dispose of or recycle, and changing the forming fluid to a non-hazardous alternative could greatly improve the economics of kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacements. The physical properties of the alternatives were measured as a function of temperature between 25 °C and 80 °C. Calculated terminal velocities and heat transfer rates provided an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane were selected for further testing, and surrogate yttria-stabilized zirconia (YSZ) kernels were produced using these selected fluids. The kernels were characterized for density, geometry, composition, and crystallinity and compared to a control group of kernels produced in silicone oil. Production in 1-bromotetradecane showed positive results, producing dense (93.8 %TD) and spherical (1.03 aspect ratio) kernels, but proper gelation did not occur in the other alternative forming fluids. With many of the YSZ kernels not properly gelling within the length of the column, this project further investigated the heat transfer properties of the forming fluids and precursor solution. 
A sensitivity study revealed that the heat transfer properties of the precursor solution have the strongest impact on gelation time. A COMSOL heat transfer model estimated an effective thermal diffusivity range for the YSZ precursor solution of 1.13×10^-8 m^2/s to 3.35×10^-8 m^2/s, which is an order of magnitude smaller than the value used in previous studies. 1-bromotetradecane is recommended for further investigation with the production of uranium-based kernels.
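The column-height approximation mentioned above (terminal velocity times the time needed for gelation) can be sketched with a Stokes-drag estimate. All property values below are placeholders, not measurements from the study, and Stokes drag is only valid at low Reynolds number; the thesis likely used drag correlations appropriate to its actual flow regime:

```python
# Illustrative droplet terminal velocity and required column height in a
# forming-fluid column. Stokes regime assumed; all numbers are placeholders.
g = 9.81            # gravitational acceleration, m/s^2
d = 1.0e-3          # droplet diameter, m
rho_drop = 1300.0   # precursor solution density, kg/m^3
rho_fluid = 1100.0  # forming fluid density, kg/m^3
mu_fluid = 2.0e-3   # forming fluid dynamic viscosity, Pa*s

# Stokes terminal velocity: v_t = g d^2 (rho_p - rho_f) / (18 mu)
v_t = g * d**2 * (rho_drop - rho_fluid) / (18.0 * mu_fluid)

gel_time = 5.0      # s, time needed for full gelation (placeholder)
column_height = v_t * gel_time
print(v_t, column_height)
```

This makes the design trade-off explicit: a denser or less viscous forming fluid raises the terminal velocity, which in turn raises the column height needed for complete gelation.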
Ziegler, Tom; Krykunov, Mykhaylo
2010-08-21
It is well known that time-dependent density functional theory (TD-DFT) based on standard gradient corrected functionals affords a quantitatively and qualitatively incorrect picture of charge transfer transitions between two spatially separated regions. It is shown here that this well-known failure can be traced back to the use of linear response theory. Further, it is demonstrated that the inclusion of higher order terms readily affords a qualitatively correct picture even for simple functionals based on the local density approximation. The inclusion of these terms is done within the framework of a newly developed variational approach to excitation energies called constrained variational density functional theory (CV-DFT). To second order [CV(2)-DFT] this theory is identical to adiabatic TD-DFT within the Tamm-Dancoff approximation. With inclusion of fourth order corrections [CV(4)-DFT] it affords a qualitatively correct description of charge transfer transitions. It is finally demonstrated that the relaxation of the ground state Kohn-Sham orbitals to first order in response to the change in density on excitation, together with CV(4)-DFT, affords charge transfer excitations in good agreement with experiment. The new relaxed theory is termed R-CV(4)-DFT. The relaxed scheme represents an effective way in which to introduce double replacements into the description of single electron excitations, something that would otherwise require a frequency dependent kernel.
Resolvability of regional density structure
NASA Astrophysics Data System (ADS)
Plonka, A.; Fichtner, A.
2016-12-01
Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown, since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess if 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms, with the conclusion that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes, and, while this can produce significant biases in velocity and Q estimates, the seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. Since the density imprint we observe is not exclusively linked to travel times and amplitudes of specific phases, we consider waveform differences between complete seismograms. We test the method using a known smooth model of the crust and seismograms with clear Love and Rayleigh waves, showing that - as expected - the first principal kernel maximizes sensitivity to SH and SV velocity structure, respectively, and that the leakage between S velocity, P velocity and density parameter spaces is minimal in the chosen setup. Next, we apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations.
The objective is to find a principal kernel which would maximize the sensitivity to density, potentially allowing for independent density resolution, and, as the final goal, for direct density inversion.
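A principal-component analysis of discretized sensitivity kernels can be sketched with a Gram matrix and power iteration. The kernel vectors below are made-up stand-ins (real kernels are 3D volumetric fields); the point is that the leading eigenvector of the Gram matrix gives the "principal kernel" combination, and strongly correlated kernels signal poor independent resolvability:

```python
# Toy PCA of discretized sensitivity kernels via power iteration.
# Rows = kernels (e.g. for P velocity, S velocity, density), columns = grid points.

def gram(rows):
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in rows] for r1 in rows]

def power_iteration(M, iters=200):
    # Leading eigenpair of a symmetric positive semidefinite matrix.
    n = len(M)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(M[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

kernels = [
    [1.0, 2.0, 1.0, 0.0],   # illustrative "S velocity" kernel
    [0.9, 2.1, 1.1, 0.1],   # strongly correlated kernel -> trade-off
    [0.0, 0.1, -0.2, 1.0],  # weakly correlated kernel (e.g. density)
]
M = gram(kernels)
lam, v = power_iteration(M)
print(lam, v)
```

In this toy setup the first two kernels dominate the leading component together, while the third stays nearly orthogonal, mimicking the leakage analysis described in the abstract.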
An Experimental Study of Briquetting Process of Torrefied Rubber Seed Kernel and Palm Oil Shell.
Hamid, M Fadzli; Idroas, M Yusof; Ishak, M Zulfikar; Zainal Alauddin, Z Alimuddin; Miskam, M Azman; Abdullah, M Khalil
2016-01-01
The torrefaction process is essential in converting biomass materials into biofuel with improved calorific value and physical strength. However, torrefied biomass is loose, powdery, and nonuniform. One method of upgrading this material to improve its handling and combustion properties is densification into briquettes of higher density than the original bulk density of the material. The effects of critical parameters of the briquetting process, including the type of biomass material used for torrefaction and briquetting, the densification temperature, and the composition of binder for the torrefied biomass, are studied and characterized. Starch is used as a binder in the study. The results showed that briquettes of torrefied rubber seed kernel (RSK) are better than those of torrefied palm oil shell (POS) in both calorific value and compressive strength. The best-quality briquettes are yielded from torrefied RSK at the ambient temperature of the briquetting process with a composition of 60% water and 5% binder. The maximum compressive load for the briquettes of torrefied RSK is 141 N and the calorific value is 16 MJ/kg. Based on the economic evaluation analysis, the return on investment (ROI) for the mass production of both RSK and POS briquettes is estimated at a 2-year period, and the annual profit after payback was approximately 107,428.6 USD.
USDA-ARS?s Scientific Manuscript database
Wheat kernel shape and size has been under selection since early domestication. Kernel morphology is a major consideration in wheat breeding, as it impacts grain yield and quality. A population of 160 recombinant inbred lines (RIL), developed using an elite (ND 705) and a nonadapted genotype (PI 414...
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
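The rule-of-thumb bandwidth selectors compared above are simple enough to sketch directly. Below is a minimal Gaussian-kernel KDE with Silverman's rule of thumb (the data values and the crude index-based quantiles are illustrative; a production implementation would interpolate quantiles and might use the plug-in selectors the study recommends):

```python
import math

def silverman_bandwidth(xs):
    # Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5).
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    s = sorted(xs)
    q1, q3 = s[int(0.25 * (n - 1))], s[int(0.75 * (n - 1))]  # crude quantiles
    iqr = q3 - q1
    spread = min(sd, iqr / 1.34) if iqr > 0 else sd
    return 0.9 * spread * n ** (-0.2)

def kde(x, xs, h):
    # Gaussian-kernel density estimate at a single point x.
    n = len(xs)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs) / (n * h * math.sqrt(2.0 * math.pi))

xs = [-1.2, -0.8, -0.1, 0.0, 0.3, 0.7, 1.1, 1.5]  # illustrative sample
h = silverman_bandwidth(xs)
print(h, kde(0.0, xs, h))
```

Because the estimate is a mixture of unit-mass Gaussians, the estimated density integrates to one regardless of the bandwidth; the bandwidth only controls how much the sample is smoothed, which is exactly what the compared selectors trade off.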
Semiclassical dynamics of spin density waves
NASA Astrophysics Data System (ADS)
Chern, Gia-Wei; Barros, Kipton; Wang, Zhentao; Suwa, Hidemaro; Batista, Cristian D.
2018-01-01
We present a theoretical framework for equilibrium and nonequilibrium dynamical simulation of quantum states with spin-density-wave (SDW) order. Within a semiclassical adiabatic approximation that retains electron degrees of freedom, we demonstrate that the SDW order parameter obeys a generalized Landau-Lifshitz equation. With the aid of an enhanced kernel polynomial method, our linear-scaling quantum Landau-Lifshitz dynamics (QLLD) method enables dynamical SDW simulations with N ≃ 10^5 lattice sites. Our real-space formulation can be used to compute dynamical responses, such as the dynamical structure factor, of complex and even inhomogeneous SDW configurations at zero or finite temperatures. Applying the QLLD to study the relaxation of a noncoplanar topological SDW under the excitation of a short pulse, we further demonstrate the crucial role of spatial correlations and fluctuations in the SDW dynamics.
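The Landau-Lifshitz structure of the order-parameter dynamics can be illustrated with a single spin precessing in a fixed effective field. This is only a generic sketch: in the QLLD method the effective field is computed from the electrons on the fly via the kernel polynomial method, whereas here it is a constant vector:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def ll_step(S, H, dt):
    # One midpoint (RK2) step of dS/dt = H x S; midpoint integration keeps
    # the spin length nearly conserved, unlike explicit Euler.
    k1 = cross(H, S)
    Smid = [s + 0.5 * dt * k for s, k in zip(S, k1)]
    k2 = cross(H, Smid)
    return [s + dt * k for s, k in zip(S, k2)]

S = [1.0, 0.0, 0.0]
H = [0.0, 0.0, 1.0]   # field along z -> S precesses in the x-y plane
dt, steps = 0.01, 1000
for _ in range(steps):
    S = ll_step(S, H, dt)
norm = math.sqrt(sum(s * s for s in S))
print(S, norm)   # expected near (cos 10, sin 10, 0), norm near 1
```

The conserved spin length is the property that makes Landau-Lifshitz-type equations natural for order parameters with fixed magnitude.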
NASA Astrophysics Data System (ADS)
Silva, Chinthaka M.; Lindemer, Terrence B.; Voit, Stewart R.; Hunt, Rodney D.; Besmann, Theodore M.; Terrani, Kurt A.; Snead, Lance L.
2014-11-01
Three sets of experimental conditions were tested to synthesize uranium carbonitride (UC1-xNx) kernels from gel-derived urania-carbon microspheres. Primarily, three sequences of gases were used, N2 to N2-4%H2 to Ar, Ar to N2 to Ar, and Ar-4%H2 to N2-4%H2 to Ar-4%H2. Physical and chemical characteristics such as geometrical density, phase purity, and chemical compositions of the synthesized UC1-xNx were measured. Single-phase kernels were commonly obtained with densities generally ranging from 85% to 93% TD and values of x as high as 0.99. In-depth analysis of the microstructures of UC1-xNx has been carried out and is discussed with the objective of large batch fabrication of high density UC1-xNx kernels.
NASA Astrophysics Data System (ADS)
Du, Peijun; Tan, Kun; Xing, Xiaoshi
2010-12-01
Combining Support Vector Machines (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in time-consuming computation and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to test the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
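The general shape of a translation-invariant wavelet kernel is a product of a mother wavelet evaluated per dimension. A common textbook example uses the Morlet mother wavelet, sketched below; the Coiflet-based kernel favored in the abstract has a different mother wavelet, so this is only an illustration of the kernel construction, not the paper's kernel:

```python
import math

def morlet_wavelet_kernel(x, y, a=1.0):
    # Translation-invariant wavelet kernel K(x, y) = prod_i h((x_i - y_i)/a)
    # with the Morlet mother wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2).
    # The dilation parameter a plays the role the RBF width plays for SVM.
    k = 1.0
    for xi, yi in zip(x, y):
        u = (xi - yi) / a
        k *= math.cos(1.75 * u) * math.exp(-0.5 * u * u)
    return k

x, y = [0.2, 1.0, -0.5], [0.0, 1.3, -0.2]
print(morlet_wavelet_kernel(x, x))                       # self-similarity is 1
print(morlet_wavelet_kernel(x, y), morlet_wavelet_kernel(y, x))  # symmetric
```

Because the mother wavelet is oscillatory, such kernels can represent locally oscillating decision functions that a monotone RBF kernel smooths away, which is the motivation for wavelet kernels in hyperspectral classification.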
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
NASA Astrophysics Data System (ADS)
Ghale, Purnima; Johnson, Harley T.
2018-06-01
We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
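The SpMV-only Chebyshev step can be sketched on a tiny dense example. The sketch below applies a Jackson-damped Chebyshev expansion of the zero-temperature occupation step θ(μ - ε) to a vector using only matrix-vector products, as in the hybrid method described; the 2×2 Hamiltonian, chemical potential, and expansion order are illustrative, and the purification (SP2) refinement stage is not shown:

```python
import math

def matvec(H, v):
    return [sum(Hij * vj for Hij, vj in zip(row, v)) for row in H]

def jackson(k, N):
    # Jackson damping factors suppress Gibbs oscillations of the
    # truncated Chebyshev series.
    q = math.pi / (N + 1)
    return ((N - k + 1) * math.cos(k * q) + math.sin(k * q) / math.tan(q)) / (N + 1)

def density_matrix_times_vector(H, v, mu=0.0, N=200):
    # P v = theta(mu*I - H) v via the Chebyshev three-term recurrence;
    # only matrix-vector products with H are needed, and P is never stored.
    # H must already be scaled so its spectrum lies inside (-1, 1).
    phi = math.acos(mu)
    t_prev, t_cur = v[:], matvec(H, v)          # T_0(H) v and T_1(H) v
    out = [(1.0 - phi / math.pi) * x for x in v]  # c_0 * T_0(H) v
    for k in range(1, N + 1):
        c_k = -2.0 * math.sin(k * phi) / (math.pi * k)  # step-function coeffs
        out = [o + jackson(k, N) * c_k * t for o, t in zip(out, t_cur)]
        t_prev, t_cur = t_cur, [2.0 * h - p for h, p in zip(matvec(H, t_cur), t_prev)]
    return out

H = [[0.0, 0.5], [0.5, 0.0]]   # eigenvalues -0.5 (occupied) and +0.5 (empty)
Pv = density_matrix_times_vector(H, [1.0, 0.0])
print(Pv)                       # exact answer is [0.5, -0.5]
```

The result approaches the exact projector action because both eigenvalues sit well away from the step at μ; near-metallic spectra push eigenvalues toward the step, which is why the abstract pairs the Chebyshev estimate with a purification stage.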
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
Some physical properties of ginkgo nuts and kernels
NASA Astrophysics Data System (ADS)
Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.
2013-12-01
Some data of the physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (± 2.00) (wet basis) are presented in this paper. It consists of the estimation of the mean length, width, thickness, the geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity measures. The coefficient of static friction for nuts and kernels was determined by using plywood, glass, rubber, and galvanized steel sheet. The data are essential in the field of food engineering especially dealing with design and development of machines, and equipment for processing and handling agriculture products.
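The derived quantities listed above follow standard relations, sketched below with illustrative numbers (not the ginkgo measurements from the paper):

```python
# Standard relations used in physical-property studies of nuts and kernels.

def geometric_mean_diameter(L, W, T):
    # Dg = (L * W * T)^(1/3) for length, width, thickness.
    return (L * W * T) ** (1.0 / 3.0)

def sphericity(L, W, T):
    # Ratio of the geometric mean diameter to the longest axis.
    return geometric_mean_diameter(L, W, T) / L

def aspect_ratio(L, W):
    return W / L

def porosity(bulk_density, true_density):
    # Percentage of void space in bulk: (1 - rho_bulk / rho_true) * 100.
    return (1.0 - bulk_density / true_density) * 100.0

# Illustrative numbers (mm for dimensions, g/cm^3 for densities):
L_, W_, T_ = 22.0, 12.0, 10.0
print(geometric_mean_diameter(L_, W_, T_),
      sphericity(L_, W_, T_),
      aspect_ratio(L_, W_),
      porosity(0.55, 1.1))
```

These are the quantities that feed directly into sizing sorting screens, hoppers, and conveyors for agricultural products.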
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolly, Brian C.; Helmreich, Grant; Cooley, Kevin M.
In support of fully ceramic microencapsulated (FCM) fuel development, coating development work is ongoing at Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with both UN kernels and surrogate (uranium-free) kernels. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets with details described elsewhere. The surrogate TRISO particles are necessary for separate effects testing and for utilization in the consolidation process development. This report focuses on the fabrication and characterization of surrogate TRISO particles which use 800 μm diameter ZrO2 microspheres as the kernel.
A shock-capturing SPH scheme based on adaptive kernel estimation
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime
2006-02-01
Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
Triso coating development progress for uranium nitride kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.
2015-08-01
In support of fully ceramic matrix (FCM) fuel development [1-2], coating development work is ongoing at the Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with UN kernels [3]. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets with details described elsewhere [4]. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two phase mixture of UO2 and UCx) kernels [5]. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels (Table 1).
NASA Technical Reports Server (NTRS)
Kahler, S. W.; Petrasso, R. D.; Kane, S. R.
1976-01-01
The physical parameters for the kernels of three solar X-ray flare events have been deduced using photographic data from the S-054 X-ray telescope on Skylab as the primary data source and 1-8 and 8-20 A fluxes from Solrad 9 as the secondary data source. The kernels had diameters of about 5-7 seconds of arc and in two cases electron densities at least as high as 0.3 trillion per cu cm. The lifetimes of the kernels were 5-10 min. The presence of thermal conduction during the decay phases is used to argue: (1) that kernels are entire, not small portions of, coronal loop structures, and (2) that flare heating must continue during the decay phase. We suggest a simple geometric model to explain the role of kernels in flares in which kernels are identified with emerging flux regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.
In support of fully ceramic matrix (FCM) fuel development, coating development work has begun at the Oak Ridge National Laboratory (ORNL) to produce tri-isotropic (TRISO) coated fuel particles with UN kernels. The nitride kernels are used to increase heavy metal density in these SiC-matrix fuel pellets with details described elsewhere. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two phase mixture of UO2 and UCx) kernels. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels.
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from massive data collection and information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed an intermediary party to assist in the computation.
Spectral methods in machine learning and new strategies for very large datasets
Belabbas, Mohamed-Ali; Wolfe, Patrick J.
2009-01-01
Spectral methods are of fundamental importance in statistics and machine learning, because they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. For the growing number of applications dealing with very large or high-dimensional datasets, however, the optimal approximation afforded by an exact spectral decomposition is too costly, because its complexity scales as the cube of either the number of training examples or their dimensionality. Motivated by such applications, we present here two new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature. We approach this problem by seeking to determine, in an efficient manner, the most informative subset of our data relative to the kernel approximation task at hand. This leads to two new strategies based on the Nyström method that are directly applicable to massive datasets. The first of these, based on sampling, leads to a randomized algorithm in which the kernel induces a probability distribution on its set of partitions, whereas the second, based on sorting, provides for the selection of a partition in a deterministic way. We detail their numerical implementation and provide simulation results for a variety of representative problems in statistical data analysis, each of which demonstrates the improved performance of our approach relative to existing methods. PMID: 19129490
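The core Nyström idea described above, approximating a positive-semidefinite kernel matrix from a subset of its columns, can be sketched as follows. The RBF kernel and the evenly spaced landmark indices are illustrative assumptions, not the paper's sampling or sorting strategies.

```python
import numpy as np

def nystrom_approximation(K, idx):
    """Low-rank approximation K ~= C W^+ C^T of a PSD kernel matrix,
    built from the sampled columns idx (the 'landmarks')."""
    C = K[:, idx]                # n x m block of sampled columns
    W = K[np.ix_(idx, idx)]      # m x m intersection block
    return C @ np.linalg.pinv(W) @ C.T

# Illustrative RBF kernel on 50 points in [0, 1]; 10 evenly spaced landmarks.
x = np.linspace(0.0, 1.0, 50)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)
K_hat = nystrom_approximation(K, np.arange(0, 50, 5))
```

Because the RBF kernel's spectrum decays quickly, a small landmark set already reconstructs the full matrix closely, which is the source of the cubic-to-subcubic cost reduction the abstract mentions.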
Mass functions from the excursion set model
NASA Astrophysics Data System (ADS)
Hiotelis, Nicos; Del Popolo, Antonino
2017-11-01
Aims: We aim to study the stochastic evolution of the smoothed overdensity δ at scale S of the form δ(S) = ∫_0^S K(S,u) dW(u), where K is a kernel and dW is the usual Wiener process. Methods: For a Gaussian density field, smoothed by the top-hat filter in real space, we used a simple kernel that gives the correct correlation between scales. A Monte Carlo procedure was used to construct random walks and to calculate first crossing distributions, and consequently mass functions, for a constant barrier. Results: We show that the evolution considered here improves the agreement with the results of N-body simulations relative to analytical approximations that have been proposed for the same problem by other authors. In fact, we show that an evolution which is fully consistent with the ideas of the excursion set model describes accurately the mass function of dark matter haloes for values of ν ≤ 1 and underestimates the number of larger haloes. Finally, we show that a constant collapse threshold, lower than the value usually used, is able to produce a mass function which approximates the results of N-body simulations for a variety of redshifts and for a wide range of masses. Conclusions: A mass function in good agreement with N-body simulations can be obtained analytically using a lower-than-usual constant collapse threshold.
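The Monte Carlo first-crossing procedure can be sketched in its simplest, uncorrelated (sharp k-space) limit: the correlated kernel K(S,u) of the paper is replaced here by independent Wiener increments, and the barrier is set to the standard spherical-collapse threshold. A toy version, not the authors' correlated walks:

```python
import math
import random

BARRIER = 1.686  # spherical-collapse threshold delta_c

def first_crossing(s_max, ds, rng):
    """Random walk delta(S) with independent Wiener increments dW ~ N(0, ds);
    returns the first scale S at which delta crosses the constant barrier,
    or None if it never crosses below s_max."""
    delta, s = 0.0, 0.0
    while s < s_max:
        delta += rng.gauss(0.0, math.sqrt(ds))
        s += ds
        if delta >= BARRIER:
            return s
    return None

rng = random.Random(42)
walks = [first_crossing(5.0, 0.01, rng) for _ in range(500)]
crossed = [s for s in walks if s is not None]
```

The histogram of `crossed` approximates the first crossing distribution, from which the mass function follows by the usual change of variables from S to halo mass.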
Ziegler, Tom; Krykunov, Mykhaylo; Autschbach, Jochen
2014-09-09
The random phase approximation (RPA) equation of adiabatic time-dependent density functional ground state response theory (ATDDFT) has been used extensively in studies of excited states. It extracts information about excited states from frequency-dependent ground state response properties and thus elegantly avoids direct Kohn-Sham calculations on excited states, in accordance with the status of DFT as a ground state theory. Excitation energies can then be found as resonance poles of the frequency-dependent ground state polarizability, from the eigenvalues of the RPA equation. ATDDFT is approximate in that it makes use of a frequency-independent energy kernel derived from the ground state functional. It is shown in this study that one can derive the RPA equation of ATDDFT from a purely variational approach in which stationary states above the ground state are located using our constricted variational DFT (CV-DFT) method and the ground state functional. Thus, locating stationary states above the ground state due to one-electron excitations with a ground state functional is completely equivalent to solving the RPA equation of TDDFT employing the same functional. The present study is an extension of a previous work in which we demonstrated the equivalence between ATDDFT and CV-DFT within the Tamm-Dancoff approximation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multiple dimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples takes only 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
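A classical fixed-bandwidth bivariate KDE (not the self-consistent Bernacchia-Pigolotti estimator that underlies fastKDE) illustrates how a 2D density estimate encodes the joint behavior of two variables:

```python
import numpy as np

def kde2d(points, data, h):
    """Bivariate KDE with a fixed-bandwidth product-Gaussian kernel,
    evaluated at each row of `points` given the sample `data` (n x 2)."""
    d = (points[:, None, :] - data[None, :, :]) / h      # (m, n, 2) scaled offsets
    k = np.exp(-0.5 * (d ** 2).sum(axis=2))              # product Gaussian kernel
    return k.sum(axis=1) / (len(data) * 2.0 * np.pi * h * h)

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 2))
dens = kde2d(np.array([[0.0, 0.0], [5.0, 5.0]]), data, 0.5)
```

A conditional PDF in the sense the abstract mentions is then the joint estimate divided by the marginal estimate at the conditioning value.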
The gravitational potential of axially symmetric bodies from a regularized Green kernel
NASA Astrophysics Data System (ADS)
Trova, A.; Huré, J.-M.; Hersant, F.
2011-12-01
The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical Astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also to generate approximations, in particular for geometrically thin discs and rings.
Handling Density Conversion in TPS.
Isobe, Tomonori; Mori, Yutaro; Takei, Hideyuki; Sato, Eisuke; Tadano, Kiichi; Kobayashi, Daisuke; Tomita, Tetsuya; Sakae, Takeji
2016-01-01
Conversion from CT value to density is essential in a radiation treatment planning system. Generally, the CT value is converted to electron density in photon therapy. In the therapeutic photon energy range, interactions between photons and materials are dominated by Compton scattering, whose cross-section depends on the electron density. The dose distribution is obtained by calculating TERMA and kernel using the electron density, where TERMA is the energy transferred from primary photons and the kernel is a volume accounting for the spread of electrons. Recently, a new method was introduced that uses the physical density; it is expected to be faster and more accurate than the electron-density approach. As for particle therapy, dose can be calculated with a CT-to-stopping-power conversion, since the stopping power depends on the electron density. The CT-to-stopping-power conversion table, also called the CT-to-water-equivalent-range table, is an essential concept for particle therapy.
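In practice, the CT-to-density conversion is a piecewise-linear lookup in a measured calibration table. A minimal sketch, with purely illustrative (non-clinical) calibration points:

```python
def ct_to_density(hu, table):
    """Piecewise-linear interpolation in a (CT value, density) calibration
    table; values outside the table are clamped to the end points."""
    pts = sorted(table)
    if hu <= pts[0][0]:
        return pts[0][1]
    for (h0, d0), (h1, d1) in zip(pts, pts[1:]):
        if hu <= h1:
            return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)
    return pts[-1][1]

# Illustrative calibration points (Hounsfield units -> relative density).
calib = [(-1000, 0.001), (0, 1.0), (1000, 1.6), (3000, 2.8)]
```

The same lookup structure serves for electron density, physical density, or stopping power; only the calibration column changes.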
Efficient calculation of beyond RPA correlation energies in the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Beuerle, Matthias; Graf, Daniel; Schurkus, Henry F.; Ochsenfeld, Christian
2018-05-01
We present efficient methods to calculate beyond random phase approximation (RPA) correlation energies for molecular systems with up to 500 atoms. To reduce the computational cost, we employ the resolution-of-the-identity and a double-Laplace transform of the non-interacting polarization propagator in conjunction with an atomic orbital formalism. Further improvements are achieved using integral screening and the introduction of Cholesky decomposed densities. Our methods are applicable to the dielectric matrix formalism of RPA including second-order screened exchange (RPA-SOSEX), the RPA electron-hole time-dependent Hartree-Fock (RPA-eh-TDHF) approximation, and RPA renormalized perturbation theory using an approximate exchange kernel (RPA-AXK). We give an application of our methodology by presenting RPA-SOSEX benchmark results for the L7 test set of large, dispersion dominated molecules, yielding a mean absolute error below 1 kcal/mol. The present work enables calculating beyond RPA correlation energies for significantly larger molecules than possible to date, thereby extending the applicability of these methods to a wider range of chemical systems.
Novel characterization method of impedance cardiography signals using time-frequency distributions.
Escrivá Muñoz, Jesús; Pan, Y; Ge, S; Jensen, E W; Vallverdú, M
2018-03-16
The purpose of this work is to describe a methodology for selecting the most adequate time-frequency distribution (TFD) kernel for the characterization of impedance cardiography (ICG) signals. The predominant ICG beat was extracted from a patient and synthesized using time-frequency variant Fourier approximations. These synthesized signals were used to optimize several TFD kernels according to a performance maximization. The optimized kernels were tested for noise resistance on a clinical database. The resulting optimized TFD kernels are presented with their performance calculated using newly proposed methods. The procedure explained in this work showcases a new method to select an appropriate kernel for ICG signals and compares the performance of different time-frequency kernels found in the literature for the case of ICG signals. We conclude that, for ICG signals, the performance (P) of the spectrogram with either Hanning or Hamming windows (P = 0.780) and the extended modified beta distribution (P = 0.765) provided similar results, higher than the rest of the analyzed kernels. Graphical abstract: Flowchart for the optimization of time-frequency distribution kernels for impedance cardiography signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo simulations. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
NASA Technical Reports Server (NTRS)
Bland, S. R.
1982-01-01
Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time-dependent terms are omitted from the governing equations. Kernel functions are derived for two-dimensional subsonic flow that provide accurate solutions of the linearized potential equation with the same time-dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low-frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low-frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Baker, Nathan A.; Li, Xiantao
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
Exact Doppler broadening of tabulated cross sections. [SIGMA 1 kernel broadening method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullen, D.E.; Weisbin, C.R.
1976-07-01
The SIGMA1 kernel broadening method is presented to Doppler broaden to any required accuracy a cross section that is described by a table of values and linear-linear interpolation in energy-cross section between tabulated values. The method is demonstrated to have no temperature or energy limitations and to be equally applicable to neutron or charged-particle cross sections. The method is qualitatively and quantitatively compared to contemporary approximate methods of Doppler broadening with particular emphasis on the effect of each approximation introduced.
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...
2015-06-01
Current spectral clustering algorithms suffer from sensitivity to noise and to parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the consequent clustering results cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, it not only provides an advanced noise-resisting and density-aware spectral mapping of the original dataset, but also demonstrates stability while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
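The aggregation idea, summing heat kernels exp(-tL) of the normalized graph Laplacian over several diffusion times, can be sketched as follows. This is only a schematic of the aggregation principle, not the authors' exact AHK construction:

```python
import numpy as np

def aggregated_heat_kernel(W, times):
    """Sum of heat kernels exp(-t * L) over the given diffusion times,
    where L is the symmetric normalized Laplacian of the affinity matrix W."""
    d = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(L)
    return sum(vecs @ np.diag(np.exp(-t * vals)) @ vecs.T for t in times)

# Small illustrative graph affinity matrix (symmetric, nonnegative).
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = aggregated_heat_kernel(W, [0.5, 1.0, 2.0])
```

Summing over times removes the need to pick a single diffusion scale, which is one way to reduce the scaling-parameter sensitivity the abstract discusses.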
Online learning control using adaptive critic designs with sparse kernel machines.
Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo
2013-05-01
In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
Morris, Craig F; Beecher, Brian S
2012-07-01
Kernel vitreosity is an important trait of wheat grain, but its developmental control is not completely known. We developed back-cross seven (BC(7)) near-isogenic lines in the soft white spring wheat cultivar Alpowa that lack the distal portion of chromosome 5D short arm. From the final back-cross, 46 BC(7)F(2) plants were isolated. These plants exhibited a complete and perfect association between kernel vitreosity (i.e. vitreous, non-vitreous or mixed) and Single Kernel Characterization System (SKCS) hardness. Observed segregation of 10:28:7 fit a 1:2:1 Chi-square. From each of the BC(7)F(2) plants classified as heterozygous for both SKCS hardness and kernel vitreosity (n = 29), a single vitreous and a single non-vitreous kernel were selected, grown to maturity, and subjected to SKCS analysis. The resultant phenotypic ratios were, from non-vitreous kernels, 23:6:0, and from vitreous kernels, 0:1:28, soft:heterozygous:hard, respectively. Three of these BC(7)F(2) heterozygous plants were selected and 40 kernels each drawn at random, grown to maturity and subjected to SKCS analysis. Phenotypic segregation ratios were 7:27:6, 11:20:9, and 3:28:9, soft:heterozygous:hard. Chi-square analysis supported a 1:2:1 segregation for one plant but not the other two, in which cases the two homozygous classes were under-represented. Twenty-two paired BC(7)F(2):F(3) full sibs were compared for kernel hardness, weight, size, density and protein content. SKCS hardness index differed markedly: 29.4 for the lines with a complete 5DS, and 88.6 for the lines possessing the deletion. The soft non-vitreous kernels were on average significantly heavier, by nearly 20%, and were slightly larger. Density and protein contents were similar, however. The results provide strong genetic evidence that gene(s) on distal 5DS control not only kernel hardness but also the manner in which the endosperm develops, viz. whether it is vitreous or non-vitreous.
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
NASA Astrophysics Data System (ADS)
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
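A space-time kernel density estimate of the kind being parallelized here can be sketched with product Epanechnikov kernels and separate spatial and temporal bandwidths. This is a serial toy version; the octree decomposition and buffering logic are omitted:

```python
import math

def st_kernel_density(x, y, t, events, hs, ht):
    """Space-time KDE at (x, y, t): 2-D Epanechnikov spatial kernel with
    bandwidth hs times a 1-D Epanechnikov temporal kernel with bandwidth ht.
    Only events within both bandwidths contribute (compact support)."""
    total = 0.0
    for ex, ey, et in events:
        ds = math.hypot(x - ex, y - ey) / hs
        dt = abs(t - et) / ht
        if ds < 1.0 and dt < 1.0:
            total += (1.0 - ds * ds) * (1.0 - dt * dt)
    # normalizing constant: (2/pi) for the spatial disk, (3/4) for time
    return 3.0 * total / (2.0 * math.pi * hs * hs * ht * len(events))

# Illustrative events as (x, y, t) triples.
events = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.05)]
```

The compact support of the kernel is what makes the subdomain buffers in the paper finite: only events within one spatial and one temporal bandwidth of a boundary need to be duplicated.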
Locally-Based Kernel PLS Smoothing for Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting, based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our concern is to apply the methodology to smoothing experimental data where some knowledge about the approximate shape, local inhomogeneities, or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
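As a much simpler baseline for kernel-based curve fitting (deliberately not the kernel PLS method of the paper), a Nadaraya-Watson smoother with a Gaussian kernel looks like this:

```python
import math

def nadaraya_watson(x, xs, ys, h):
    """Gaussian-kernel smoother: weighted average of the noisy responses ys,
    with weights decaying with distance from the query point x."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

The bandwidth h plays the role of the smoothing level; locally-based methods like the one above vary the effective smoothing where prior knowledge indicates curvature changes.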
Generalization of the subsonic kernel function in the s-plane, with applications to flutter analysis
NASA Technical Reports Server (NTRS)
Cunningham, H. J.; Desmarais, R. N.
1984-01-01
A generalized subsonic unsteady aerodynamic kernel function, valid for both growing and decaying oscillatory motions, is developed and applied in a modified flutter analysis computer program to solve the boundaries of constant damping ratio as well as the flutter boundary. Rates of change of damping ratios with respect to dynamic pressure near flutter are substantially lower from the generalized-kernel-function calculations than from the conventional velocity-damping (V-g) calculation. A rational function approximation for aerodynamic forces used in control theory for s-plane analysis gave rather good agreement with kernel-function results, except for strongly damped motion at combinations of high (subsonic) Mach number and reduced frequency.
Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.
Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit
2018-02-13
Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company.
NASA Astrophysics Data System (ADS)
Płonka, Agnieszka; Fichtner, Andreas
2017-04-01
Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown, since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess whether 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms, concluding that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes; while this can produce significant biases in velocity and Q estimates, seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. We apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. The objective is to find a principal kernel that maximizes the sensitivity to density, potentially allowing density to be resolved as independently as possible. We find that surface (mostly Rayleigh) waves have significant sensitivity to density, and that the trade-off with velocity is negligible. We also show the preliminary results of the inversion.
NASA Astrophysics Data System (ADS)
Bekas, C.; Curioni, A.
2010-06-01
Enforcing the orthogonality of approximate wavefunctions becomes one of the dominant computational kernels in planewave-based Density Functional Theory electronic structure calculations that involve thousands of atoms. In this context, algorithms that enjoy both excellent scalability and excellent single-processor performance are much needed. In this paper we present block versions of the Gram-Schmidt method and show that they are excellent candidates for this purpose. We compare the new approach with state-of-the-art practice in planewave-based calculations and find that it has much to offer, especially when applied on massively parallel supercomputers such as the IBM Blue Gene/P. The new method achieves excellent sustained performance that surpasses 73 TFLOPS (67% of peak) on 8 Blue Gene/P racks (32 768 compute cores), while it enables a more than twofold decrease in run time when compared with the best competing methodology.
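The blocked orthogonalization idea can be sketched in a few lines of NumPy (a schematic, not the paper's Blue Gene/P implementation; the block size and the single reorthogonalization pass are illustrative choices):

```python
import numpy as np

def block_gram_schmidt(A, block_size=2):
    """Orthonormalize the columns of A block by block.

    Each block is first orthogonalized against all previously processed
    columns with matrix-matrix products (BLAS-3 friendly), then
    orthonormalized internally via a thin QR factorization.
    """
    A = np.array(A, dtype=float)
    n_cols = A.shape[1]
    Q = np.zeros_like(A)
    done = 0
    for start in range(0, n_cols, block_size):
        block = A[:, start:start + block_size].copy()
        if done > 0:
            # Project out components along the already-orthonormal columns.
            block -= Q[:, :done] @ (Q[:, :done].T @ block)
            # One reorthogonalization pass for numerical robustness.
            block -= Q[:, :done] @ (Q[:, :done].T @ block)
        # Orthonormalize within the block.
        q, _ = np.linalg.qr(block)
        Q[:, start:start + q.shape[1]] = q
        done += q.shape[1]
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
Q = block_gram_schmidt(A, block_size=3)
```

Processing columns in blocks replaces many vector-vector operations with matrix-matrix products, which is what gives such schemes their good single-processor and parallel performance.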
A continued fraction resummation form of bath relaxation effect in the spin-boson model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhihao; Tang, Zhoufei; Wu, Jianlan, E-mail: jianlanwu@zju.edu.cn
2015-02-28
In the spin-boson model, a continued fraction form is proposed to systematically resum high-order quantum kinetic expansion (QKE) rate kernels, accounting for the bath relaxation effect beyond the second-order perturbation. In particular, the analytical expression of the sixth-order QKE rate kernel is derived for resummation. With higher-order correction terms systematically extracted from higher-order rate kernels, the resummed quantum kinetic expansion approach in the continued fraction form extends the Padé approximation and can fully recover the exact quantum dynamics as the expansion order increases.
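Schematically, suppressing the kernels' time arguments and treating them as scalars (a simplification made here for illustration, not the paper's full expressions), the lowest levels of such a resummation of the series $\mathcal{K} = K^{(2)} + K^{(4)} + K^{(6)} + \cdots$ read:

```latex
% Truncating after the first level reproduces the Pade-type resummation
% K^{(2)}/(1 - K^{(4)}/K^{(2)}); the sixth-order kernel fixes the next level.
\mathcal{K} \;\approx\;
\cfrac{K^{(2)}}
      {1 \;-\; \cfrac{K^{(4)}/K^{(2)}}
               {1 \;-\; \cfrac{K^{(6)}/K^{(4)} \;-\; K^{(4)}/K^{(2)}}
                        {1 \;-\; \cdots}}}
```

Expanding the fraction order by order reproduces the QKE series term by term, which is the sense in which higher-order corrections are "systematically extracted" from higher-order rate kernels.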
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed approach.
Unified connected theory of few-body reaction mechanisms in N-body scattering theory
NASA Technical Reports Server (NTRS)
Polyzou, W. N.; Redish, E. F.
1978-01-01
A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators $T_{\pm}^{ab}(A)$ are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating $T_{\pm}^{ab}(A)$ to the full $T_{\pm}^{ab}$ allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and the regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
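As an illustrative sketch of the sum-space idea (plain kernel ridge regression with a sum of two Gaussian kernels; the bandwidths, regularization weight, and test function below are made-up stand-ins, not the authors' exact algorithm):

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma):
    # Gram matrix of the Gaussian (RBF) kernel between two 1-D sample sets.
    return np.exp(-(x1[:, None] - x2[None, :]) ** 2 / (2.0 * sigma ** 2))

def sum_space_fit(x, y, sigmas=(1.0, 0.05), lam=1e-6):
    """Regularized least squares in the sum of two Gaussian RKHSs.

    The large-scale kernel captures the low-frequency component of the
    target and the small-scale kernel the high-frequency component; the
    combined estimator solves one linear system (K + lam*I) alpha = y,
    where K is the sum of the two Gram matrices.
    """
    K = sum(gaussian_kernel(x, x, s) for s in sigmas)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)

    def predict(x_new):
        K_new = sum(gaussian_kernel(x_new, x, s) for s in sigmas)
        return K_new @ alpha

    return predict

# A target with well-separated low- and high-frequency parts.
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + 0.2 * np.sin(20 * np.pi * x)
predict = sum_space_fit(x, y)
train_rmse = np.sqrt(np.mean((predict(x) - y) ** 2))
```

A single Gaussian kernel must compromise between the two scales; the sum kernel fits both components at once, which is the practical content of the "nonflat function approximation" claim.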
Data-driven parameterization of the generalized Langevin equation
Lei, Huan; Baker, Nathan A.; Li, Xiantao
2016-11-29
We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grained variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
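As a minimal illustration (a first-order rational approximation with a single pole, not the arbitrarily high-order construction described above), a one-pole fit of the memory kernel in the Laplace domain corresponds to an exponential kernel in time, and the GLE then embeds in an extended Markovian system through one auxiliary variable:

```latex
\hat{\theta}(s) \;\approx\; \frac{c}{s + \lambda}
\quad\Longleftrightarrow\quad
\theta(t) = c\, e^{-\lambda t},
\qquad
\dot{v}(t) = z(t) + R(t), \quad
\dot{z}(t) = -\lambda\, z(t) - c\, v(t), \quad z(0) = 0.
```

Here $z(t) = -\int_0^t c\,e^{-\lambda(t-s)}\,v(s)\,ds$ carries the memory, and the coefficients $c$, $\lambda$ would be fitted to equilibrium statistics; the white-noise forcing that generates the colored noise $R(t)$ and restores the fluctuation-dissipation theorem is omitted in this sketch.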
Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam
2018-05-21
Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitation, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as the optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both the Green's function and TDDFT formalisms. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to the systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as the MBPT formalism, use unoccupied states (which are defined with respect to the Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach to larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions for treating electron-electron correlation in both ground- and excited-state wave functions. Using this ansatz, it is shown, using both diagrammatic and algebraic techniques, that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of the linked-cluster theorem in real-space representation.
The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems and results were found to be in good agreement with the EOM-CCSD and GW+BSE methods. The numerical results highlight the effectiveness of the developed method for overcoming the computational barrier of accurately determining the electron-hole interaction kernel to applications of large finite systems such as quantum dots and nanorods.
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two-, three-, six-, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
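The state-dependent convex combination can be sketched as follows (the switching radii and the stand-in value functions below are illustrative assumptions, not the paper's weighting or learned approximations):

```python
import numpy as np

def blended_value(x, v_staf, v_rmbrl, r_inner=1.0, r_outer=2.0):
    """State-dependent convex combination of two value-function estimates.

    Near the origin (r <= r_inner) the regional R-MBRL approximation is
    used alone; far away (r >= r_outer) the local StaF approximation is
    used alone; in between, the two are blended linearly in the distance
    to the origin.
    """
    r = np.linalg.norm(x)
    lam = np.clip((r - r_inner) / (r_outer - r_inner), 0.0, 1.0)
    return lam * v_staf(x) + (1.0 - lam) * v_rmbrl(x)

# Stand-in approximations, just to expose the switching behaviour.
v_staf = lambda x: 1.0
v_rmbrl = lambda x: 0.0
v_near = blended_value(np.array([0.5, 0.0]), v_staf, v_rmbrl)  # pure R-MBRL
v_mid = blended_value(np.array([1.5, 0.0]), v_staf, v_rmbrl)   # equal blend
v_far = blended_value(np.array([3.0, 0.0]), v_staf, v_rmbrl)   # pure StaF
```

Because the combination is convex, the blended estimate inherits whichever boundedness properties the two component approximations share, which is what makes it amenable to the Lyapunov-based analysis.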
Pattern formation of microtubules and motors: inelastic interaction of polar rods.
Aranson, Igor S; Tsimring, Lev S
2005-05-01
We derive a model describing spatiotemporal organization of an array of microtubules interacting via molecular motors. Starting from a stochastic model of inelastic polar rods with a generic anisotropic interaction kernel we obtain a set of equations for the local rods concentration and orientation. At large enough mean density of rods and concentration of motors, the model describes orientational instability. We demonstrate that the orientational instability leads to the formation of vortices and (for large density and/or kernel anisotropy) asters seen in recent experiments.
Front propagation and clustering in the stochastic nonlocal Fisher equation
NASA Astrophysics Data System (ADS)
Ganan, Yehuda A.; Kessler, David A.
2018-04-01
In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also to predict the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
NASA Astrophysics Data System (ADS)
Mancinelli, N. J.; Fischer, K. M.
2018-03-01
We characterize the spatial sensitivity of Sp converted waves to improve constraints on lateral variations in uppermost-mantle velocity gradients, such as the lithosphere-asthenosphere boundary (LAB) and the mid-lithospheric discontinuities. We use SPECFEM2D to generate 2-D scattering kernels that relate perturbations from an elastic half-space to Sp waveforms. We then show that these kernels can be well approximated using ray theory, and develop an approach to calculating kernels for layered background models. As proof of concept, we show that lateral variations in uppermost-mantle discontinuity structure are retrieved by implementing these scattering kernels in the first iteration of a conjugate-directions inversion algorithm. We evaluate the performance of this technique on synthetic seismograms computed for 2-D models with undulations on the LAB of varying amplitude, wavelength and depth. The technique reliably images the position of discontinuities with dips <35° and horizontal wavelengths >100-200 km. In cases of mild topography on a shallow LAB, the relative brightness of the LAB and Moho converters approximately agrees with the ratio of velocity contrasts across the discontinuities. Amplitude retrieval degrades at greater depths. For dominant periods of 4 s, the minimum station spacing required to produce unaliased results is 5 km, but the application of a Gaussian filter can improve discontinuity imaging where station spacing is greater.
Increasing the Size of Microwave Popcorn
NASA Astrophysics Data System (ADS)
Smoyer, Justin
2005-03-01
Each year Americans consume approximately 17 billion quarts of popcorn. Since the 1940s, microwaves have been the heating source of choice for most. By treating the popping mechanism as a thermodynamic system, it has been shown mathematically and experimentally that reducing the surrounding pressure of the unpopped kernels results in an increased volume of the popped kernels [Quinn et al, http://xxx.lanl.gov/abs/cond-mat/0409434 v1 2004]. In this project an alternate method of popping with the microwave was used to further test and confirm this hypothesis. Numerous experimental trials were run to test the validity of the theory. The results show a significant increase in the average kernel size as well as a reduction in the number of unpopped kernels.
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large to small scales. In light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated in a case study dealing with geochemical data (Fe₂O₃) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving-average method, the novel LSA using WMD better identified weak geochemical anomalies associated with mineralization in the covered area.
A climatological model of North Indian Ocean tropical cyclone genesis, tracks and landfall
NASA Astrophysics Data System (ADS)
Wahiduzzaman, Mohammad; Oliver, Eric C. J.; Wotherspoon, Simon J.; Holbrook, Neil J.
2017-10-01
Extensive damage and loss of life can be caused by tropical cyclones (TCs) that make landfall. Modelling of TC landfall probability is beneficial to insurance/re-insurance companies, decision makers, government policy and planning, and residents in coastal areas. In this study, we develop a climatological model of tropical cyclone genesis, tracks and landfall for North Indian Ocean (NIO) rim countries based on kernel density estimation, a generalised additive model (GAM) including an Euler integration step, and landfall detection using a country-mask approach. Using a 35-year record (1979-2013) of tropical cyclone track observations from the Joint Typhoon Warning Centre (part of the International Best Track Archive for Climate Stewardship, Version 6), the GAM is fitted to the observed cyclone track velocities as a smooth function of location in each season. The distribution of cyclone genesis points is approximated by kernel density estimation. Model-simulated TC genesis points are randomly drawn from the fitted kernel density, and the cyclone paths, represented by the GAM together with stochastic innovations applied at each step, are simulated to generate a suite of NIO rim landfall statistics. Three hindcast validation methods are applied to evaluate the integrity of the model. First, leave-one-out cross-validation is applied, whereby the country of landfall is determined by a majority vote over the simulated tracks (i.e., the country receiving the highest percentage of simulated landfalls). Second, the probability distribution of simulated landfall is evaluated against the observed landfall. Third, the distances between the points of observed and simulated landfall are compared and quantified.
Overall, the model shows very good cross-validated hindcast skill of modelled landfalling cyclones against observations in each of the NIO tropical cyclone seasons and for most NIO rim countries, with only a relatively small difference in the percentage of predicted landfall locations compared with observations.
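The genesis-sampling and track-integration steps above can be sketched as follows (all genesis points, drift values, and bandwidths are made-up stand-ins, and a constant drift replaces the fitted GAM velocity field):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical genesis points (lon, lat), standing in for observed
# tropical-cyclone genesis locations in the North Indian Ocean.
genesis_obs = np.column_stack([
    rng.uniform(80, 95, size=40),   # longitude
    rng.uniform(5, 18, size=40),    # latitude
])

def kde_sample(points, n, bandwidth=1.0, rng=rng):
    """Draw n samples from a Gaussian KDE fitted to `points`.

    Sampling from a KDE is equivalent to picking an observed point
    uniformly at random and perturbing it with kernel-scaled noise.
    """
    idx = rng.integers(0, len(points), size=n)
    return points[idx] + rng.normal(scale=bandwidth, size=(n, points.shape[1]))

def simulate_track(start, drift, n_steps=20, noise=0.3, rng=rng):
    """Euler integration of a (constant, stand-in) velocity field plus
    stochastic innovations at each step, mimicking the GAM track model."""
    track = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        track.append(track[-1] + drift + rng.normal(scale=noise, size=2))
    return np.array(track)

starts = kde_sample(genesis_obs, n=5)
tracks = [simulate_track(s, drift=np.array([-0.5, 0.4])) for s in starts]
```

Landfall detection would then amount to testing each simulated track position against a country mask and recording the first country hit.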
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-12-01
Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10⁶-10⁹, the number of reactive molecules even in diluted systems might be on the order of a fraction of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in a loss of accuracy, but may also lead to an improper reproduction of the mixing process, which is limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows obtaining the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of the molecules represented by that particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the deviation observed in the reaction-versus-time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process.
Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled with alternative mechanistic models rather than with a limited number of particles.
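The kernel-combination step can be sketched in one dimension (a schematic under simplifying assumptions: Gaussian kernels, a hypothetical forward rate constant k_f, and diffusive broadening 4*D*dt added to the combined variance):

```python
import numpy as np

def pair_reaction_probability(x1, x2, h1, h2, D, dt, k_f):
    """Reaction probability for two particles carrying Gaussian kernels.

    Each tracked particle represents a cloud of molecules smoothed by a
    Gaussian kernel of bandwidth h. The co-location density of the two
    clouds is the convolution of the kernels: a Gaussian in the particle
    separation with variance h1^2 + h2^2 + 4*D*dt (1-D case shown). The
    probability of reacting within dt then follows a first-order rate law.
    """
    var = h1 ** 2 + h2 ** 2 + 4.0 * D * dt
    colocation = np.exp(-(x1 - x2) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    # -expm1(-x) = 1 - exp(-x), accurate even for very small rates.
    return -np.expm1(-k_f * colocation * dt)

# Closer particle pairs are more likely to react within the time step.
p_near = pair_reaction_probability(0.0, 0.1, h1=0.05, h2=0.05, D=1e-3, dt=0.1, k_f=1.0)
p_far = pair_reaction_probability(0.0, 1.0, h1=0.05, h2=0.05, D=1e-3, dt=0.1, k_f=1.0)
```

Growing the bandwidths h1, h2 with time, as the abstract suggests, links the kernel support to the diffusive spreading of the molecular clouds each particle represents.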
Using Adjoint Methods to Improve 3-D Velocity Models of Southern California
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.
2006-12-01
We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field, saved by a previous forward simulation, backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed from just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel.
One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a non-linear conjugate gradient algorithm to iteratively improve velocity models of southern California.
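In the notation of this abstract, the hierarchy of kernels can be summarized as follows (receiver index r, event index e, and traveltime anomaly ΔT as the weights):

```latex
K_{\mathrm{event}}^{e}(\mathbf{x}) \;=\; \sum_{r} \Delta T_{r}^{e}\, K_{r}^{e}(\mathbf{x}),
\qquad
K_{\mathrm{misfit}}(\mathbf{x}) \;=\; \sum_{e} K_{\mathrm{event}}^{e}(\mathbf{x}),
```

where $K_{r}^{e}$ is the banana-doughnut (Fréchet) kernel of a single traveltime measurement at receiver r for event e. The misfit kernel is then the gradient used in the conjugate gradient iteration.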
New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.
Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto
2017-06-21
We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed previously, our results for the τ-Fukui kernel and the τ-dual kernel can also be derived in the zero-temperature formulation of chemical reactivity theory using, among other things, the widely used parabolic interpolation model.
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
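The two continuization choices compared in Study II can be sketched as follows (a toy uniform score distribution and an illustrative bandwidth; this is not the full KE estimator, which also rescales the kernel to preserve the first two moments of the score distribution):

```python
import numpy as np

def continuize(scores, probs, x, bandwidth, kernel="gaussian"):
    """Continuize a discrete score distribution by kernel smoothing.

    Places a kernel of width `bandwidth` at each discrete score, weighted
    by that score's probability. The Epanechnikov kernel has bounded
    support, which is the motivation for it as a boundary-bias-reducing
    alternative to the Gaussian kernel.
    """
    u = (x[:, None] - scores[None, :]) / bandwidth
    if kernel == "gaussian":
        k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    elif kernel == "epanechnikov":
        k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    else:
        raise ValueError(kernel)
    return (k * probs[None, :]).sum(axis=1) / bandwidth

scores = np.arange(0, 11)        # an 11-point score scale, 0..10
probs = np.ones(11) / 11.0       # toy uniform score distribution
x = np.linspace(-2.0, 12.0, 1401)
f_gauss = continuize(scores, probs, x, bandwidth=0.8)
f_epan = continuize(scores, probs, x, bandwidth=0.8, kernel="epanechnikov")
```

Because the Epanechnikov density is exactly zero beyond one bandwidth from each score, it leaks no probability mass past the minimum and maximum scores, which is where the Gaussian continuization exhibits boundary bias.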
Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn
2017-06-21
The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function (the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile) is here studied for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears favourable that the magnetic flux density dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise either directly to secondary electrons or to scattered photons further downstream producing secondary electrons which contribute to the detector's signal, and their lateral shift due to the Lorentz force, is elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.
Reproductive sink of sweet corn in response to plant density and hybrid
USDA-ARS?s Scientific Manuscript database
Improvements in plant density tolerance have played an essential role in grain corn yield gains for ~80 years; however, plant density effects on sweet corn biomass allocation to the ear (the reproductive ‘sink’) is poorly quantified. Moreover, optimal plant densities for modern white-kernel shrunke...
Reservoir area of influence and implications for fisheries management
Martin, Dustin R.; Chizinski, Christopher J.; Pope, Kevin L.
2015-01-01
Understanding the spatial area that a reservoir draws anglers from, defined as the reservoir's area of influence, and the potential overlap of that area of influence between reservoirs is important for fishery managers. Our objective was to define the area of influence for reservoirs of the Salt Valley regional fishery in southeastern Nebraska using kernel density estimation. We used angler survey data obtained from in-person interviews at 17 reservoirs during 2009–2012. The area of influence, defined by the 95% kernel density, for reservoirs within the Salt Valley regional fishery varied, indicating that anglers use reservoirs differently across the regional fishery. Areas of influence reveal angler preferences in a regional context, indicating preferred reservoirs with a greater area of influence. Further, differences in areas of influences across time and among reservoirs can be used as an assessment following management changes on an individual reservoir or within a regional fishery. Kernel density estimation provided a clear method for creating spatial maps of areas of influence and provided a two-dimensional view of angler travel, as opposed to the traditional mean travel distance assessment.
Arcisauskaite, Vaida; Melo, Juan I; Hemmingsen, Lars; Sauer, Stephan P A
2011-07-28
We investigate the importance of relativistic effects on NMR shielding constants and chemical shifts of linear HgL(2) (L = Cl, Br, I, CH(3)) compounds using three different relativistic methods: the fully relativistic four-component approach and the two-component approximations, linear response elimination of small component (LR-ESC) and zeroth-order regular approximation (ZORA). LR-ESC reproduces successfully the four-component results for the C shielding constant in Hg(CH(3))(2) within 6 ppm, but fails to reproduce the Hg shielding constants and chemical shifts. The latter is mainly due to an underestimation of the change in spin-orbit contribution. Even though ZORA underestimates the absolute Hg NMR shielding constants by ∼2100 ppm, the differences between Hg chemical shift values obtained using ZORA and the four-component approach without spin-density contribution to the exchange-correlation (XC) kernel are less than 60 ppm for all compounds using three different functionals, BP86, B3LYP, and PBE0. However, larger deviations (up to 366 ppm) occur for Hg chemical shifts in HgBr(2) and HgI(2) when ZORA results are compared with four-component calculations with non-collinear spin-density contribution to the XC kernel. For the ZORA calculations it is necessary to use large basis sets (QZ4P) and the TZ2P basis set may give errors of ∼500 ppm for the Hg chemical shifts, despite deceivingly good agreement with experimental data. A Gaussian nucleus model for the Coulomb potential reduces the Hg shielding constants by ∼100-500 ppm and the Hg chemical shifts by 1-143 ppm compared to the point nucleus model depending on the atomic number Z of the coordinating atom and the level of theory. The effect on the shielding constants of the lighter nuclei (C, Cl, Br, I) is, however, negligible. © 2011 American Institute of Physics
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Geng, Yu; Innanen, Kristopher A.
2018-05-01
The problem of inverting for multiple physical parameters in the subsurface using seismic full-waveform inversion (FWI) is complicated by interparameter trade-off arising from inherent ambiguities between different physical parameters. Parameter resolution is often characterized using scattering radiation patterns, but these neglect some important aspects of interparameter trade-off. More general analysis and mitigation of interparameter trade-off in isotropic-elastic FWI is possible through judiciously chosen multiparameter Hessian matrix-vector products. We show that products of multiparameter Hessian off-diagonal blocks with model perturbation vectors, referred to as interparameter contamination kernels, are central to the approach. We apply the multiparameter Hessian to various vectors designed to provide information regarding the strengths and characteristics of interparameter contamination, both locally and within the whole volume. With numerical experiments, we observe that S-wave velocity perturbations introduce strong contaminations into density and phase-reversed contaminations into P-wave velocity, but themselves experience only limited contaminations from other parameters. Based on these findings, we introduce a novel strategy to mitigate the influence of interparameter trade-off with approximate contamination kernels. Furthermore, we recommend that the local spatial and interparameter trade-offs of the inverted models be quantified using extended multiparameter point spread functions (EMPSFs) obtained with a preconditioned conjugate-gradient algorithm. Compared to traditional point spread functions, the EMPSFs appear to provide more accurate measurements for resolution analysis, by de-blurring the estimations, scaling magnitudes and mitigating interparameter contamination. Approximate eigenvalue volumes constructed with a stochastic probing approach are proposed to evaluate the resolution of the inverted models within the whole model space.
With a synthetic Marmousi model example and a land seismic field data set from Hussar, Alberta, Canada, we confirm that the new inversion strategy suppresses the interparameter contamination effectively and provides more reliable density estimations in isotropic-elastic FWI compared to the standard simultaneous inversion approach.
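The Hessian-vector products central to this approach can be formed without ever building the Hessian explicitly, for instance by differencing gradients. A minimal sketch on a toy quadratic misfit (an illustrative stand-in, not the authors' adjoint-state FWI operators):

```python
import numpy as np

def hessian_vector_product(grad, m, v, eps=1e-6):
    """Approximate H @ v by central differences of the gradient:
    H v ~ (grad(m + eps*v) - grad(m - eps*v)) / (2*eps)."""
    return (grad(m + eps * v) - grad(m - eps * v)) / (2.0 * eps)

# Toy quadratic misfit f(m) = 0.5 * m^T A m, whose Hessian is A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda m: A @ m

v = np.array([1.0, -2.0])
hv = hessian_vector_product(grad, np.zeros(2), v)
print(hv)  # matches A @ v for this linear gradient
```

Applying the same operator to perturbation vectors restricted to one parameter class is what isolates the off-diagonal "contamination" contributions described in the abstract.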
A heat kernel proof of the index theorem for deformation quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2017-11-01
We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.
Diversity of maize kernels from a breeding program for protein quality III: Ionome profiling
USDA-ARS's Scientific Manuscript database
Densities of single and multiple macro- and micronutrients have been estimated in mature kernels of 1,348 accessions in 13 maize genotypes. The germplasm belonged to stiff stalk (SS) and non-stiff stalk (NS) heterotic groups (HG) with one (S1) to four (S4) years of inbreeding (IB), or open pollinati...
Computational applications of the many-interacting-worlds interpretation of quantum mechanics.
Sturniolo, Simone
2018-05-01
While historically many quantum-mechanical simulations of molecular dynamics have relied on the Born-Oppenheimer approximation to separate electronic and nuclear behavior, recently a great deal of interest has arisen in quantum effects in nuclear dynamics as well. Due to the computational difficulty of solving the Schrödinger equation in full, these effects are often treated with approximate methods. In this paper, we present an algorithm to tackle these problems using an extension to the many-interacting-worlds approach to quantum mechanics. This technique uses a kernel function to rebuild the probability density, and therefore, in contrast with the approximation presented in the original paper, it can be naturally extended to n-dimensional systems. This opens up the possibility of performing quantum ground-state searches with steepest-descent methods, and it could potentially lead to real-time quantum molecular-dynamics simulations. The behavior of the algorithm is studied in different potentials and numbers of dimensions and compared both to the original approach and to exact Schrödinger equation solutions whenever possible.
Brownian motion of a nano-colloidal particle: the role of the solvent.
Torres-Carbajal, Alexis; Herrera-Velarde, Salvador; Castañeda-Priego, Ramón
2015-07-15
Brownian motion is a feature of colloidal particles immersed in a liquid-like environment. Usually, it can be described by means of the generalised Langevin equation (GLE) within the framework of the Mori theory. In principle, all quantities that appear in the GLE can be calculated from the molecular information of the whole system, i.e., colloids and solvent molecules. In this work, by means of extensive Molecular Dynamics simulations, we study the effects of the microscopic details and the thermodynamic state of the solvent on the movement of a single nano-colloid. In particular, we consider a two-dimensional model system in which the mass and size of the colloid are two and one orders of magnitude, respectively, larger than the ones associated with the solvent molecules. The latter ones interact via a Lennard-Jones-type potential to tune the nature of the solvent, i.e., it can be either repulsive or attractive. We choose the linear momentum of the Brownian particle as the observable of interest in order to fully describe the Brownian motion within the Mori framework. We particularly focus on the colloid diffusion at different solvent densities and two temperature regimes: high and low (near the critical point) temperatures. To reach our goal, we have rewritten the GLE as a Volterra integral equation of the second kind in order to compute the memory kernel in real space. With this kernel, we evaluate the momentum-fluctuating force correlation function, which is of particular relevance since it allows us to establish when the stationarity condition has been reached. Our findings show that even at high temperatures, the details of the attractive interaction potential among solvent molecules induce important changes in the colloid dynamics. Additionally, near the critical point, the dynamical scenario becomes more complex; all the correlation functions decay slowly in an extended time window, however, the memory kernel seems to be only a function of the solvent density.
Thus, the explicit inclusion of the solvent in the description of Brownian motion allows us to better understand the behaviour of the memory kernel at those thermodynamic states near the critical region without any further approximation. This information is useful to elaborate more realistic descriptions of Brownian motion that take into account the particular details of the host medium.
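The kernel-extraction step can be illustrated schematically. The sketch below discretizes the memory equation dC/dt = -∫₀ᵗ K(t-s) C(s) ds with a trapezoidal rule, generates a correlation function C(t) from an assumed exponential kernel, and then recovers the kernel by unrolling the same quadrature. It is a simplified stand-in for the authors' second-kind Volterra formulation; the time step, kernel parameters and the known value K(0) are assumptions:

```python
import numpy as np

dt, N = 0.01, 400
t = np.arange(N) * dt
K_true = 5.0 * np.exp(-2.0 * t)  # assumed exponential memory kernel

def conv_trap(K, C, n, dt):
    """Trapezoidal quadrature of the convolution integral at time t_n."""
    if n == 0:
        return 0.0
    v = K[:n + 1] * C[n::-1]
    return dt * (v.sum() - 0.5 * v[0] - 0.5 * v[-1])

# Forward: integrate dC/dt = -conv(K, C) for the correlation function C(t).
C = np.zeros(N)
C[0] = 1.0
for n in range(N - 1):
    C[n + 1] = C[n] - dt * conv_trap(K_true, C, n, dt)

# Inverse: recover the kernel from C by unrolling the same quadrature.
K_rec = np.zeros(N - 1)
K_rec[0] = K_true[0]  # K(0) assumed known here
for n in range(1, N - 1):
    I_n = C[n] - C[n + 1]  # equals dt * (quadrature at step n)
    partial = 0.5 * K_rec[0] * C[n] + (K_rec[1:n] * C[n - 1:0:-1]).sum()
    K_rec[n] = (I_n / dt**2 - partial) / (0.5 * C[0])
```

Because the inversion unrolls exactly the quadrature used in the forward pass, the recovered kernel matches the assumed one to rounding error; with simulated correlation data the same recursion yields a numerical memory kernel.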
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which can pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, in contrast to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernels, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. The LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
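The idea can be sketched in a few lines: run one LZW pass to collect each sequence's dictionary phrases (code words), then score their overlap. This is an illustrative reconstruction; the published LZW-Kernel's exact construction and normalization may differ:

```python
def lzw_codewords(seq):
    """Dictionary phrases accumulated by one LZW compression pass over seq."""
    dictionary = set(seq)  # start with the single symbols
    w = ""
    for c in seq:
        wc = w + c
        if wc in dictionary:
            w = wc
        else:
            dictionary.add(wc)
            w = c
    return dictionary

def lzw_kernel(a, b):
    """Set overlap of LZW code words, cosine-normalized so that
    lzw_kernel(s, s) == 1.0 and the function is symmetric."""
    A, B = lzw_codewords(a), lzw_codewords(b)
    return len(A & B) / (len(A) * len(B)) ** 0.5

print(lzw_kernel("MKVLLAGLLAT", "MKVLLSGLVAT"))
```

A single pass over each sequence suffices, which is what makes this family of kernels attractive for large-scale comparisons.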
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
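The Tikhonov step of such a signal-processing pipeline can be sketched in isolation: a ridge-regularized least-squares solve stabilizes an otherwise ill-conditioned inversion. The blur matrix, noise level and regularization parameter below are assumptions unrelated to the J-PET signal model:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-conditioned problem: recover a signal from Gaussian-blurred,
# noisy samples; a plain inverse would amplify the noise.
rng = np.random.default_rng(1)
n = 50
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)  # blur matrix
x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)

x_reg = tikhonov_solve(A, b, lam=1e-2)
```

The penalty lam trades a small bias in the well-determined components for strong suppression of noise in the poorly determined ones.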
ERIC Educational Resources Information Center
von Davier, Alina A.; Holland, Paul W.; Livingston, Samuel A.; Casabianca, Jodi; Grant, Mary C.; Martin, Kathleen
2006-01-01
This study examines how closely the kernel equating (KE) method (von Davier, Holland, & Thayer, 2004a) approximates the results of other observed-score equating methods--equipercentile and linear equatings. The study used pseudotests constructed of item responses from a real test to simulate three equating designs: an equivalent groups (EG)…
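The continuization idea at the heart of KE can be sketched as follows: smooth each discrete score distribution with a Gaussian kernel, then equate equipercentile-wise through the two continuized CDFs. This is a simplified illustration (the moment-preserving linear adjustment of the full KE method is omitted, and the score distributions are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def continuize(scores, probs, h=0.6):
    """Gaussian-kernel continuization of a discrete score distribution
    (simplified: the moment-preserving linear adjustment of KE is omitted)."""
    return lambda x: float(np.sum(probs * norm.cdf((x - scores) / h)))

def equate(x, F_from, F_to, grid):
    """Equipercentile equating through the two continuized CDFs."""
    p = F_from(x)
    vals = np.array([F_to(g) for g in grid])
    return float(np.interp(p, vals, grid))

# Hypothetical 0-10 point test forms: X is harder (mean 4), Y easier (mean 6).
s = np.arange(11)
px = np.exp(-0.5 * ((s - 4.0) / 2.0) ** 2); px /= px.sum()
py = np.exp(-0.5 * ((s - 6.0) / 2.0) ** 2); py /= py.sum()

FX, FY = continuize(s, px), continuize(s, py)
grid = np.linspace(-2.0, 12.0, 1401)
print(equate(4.0, FX, FY, grid))  # a score of 4 on form X lands near 6 on form Y
```

As the bandwidth h grows, the continuized CDFs approach linear equating; as h shrinks, the mapping approaches a step-wise equipercentile equating, which is the sense in which KE interpolates between the two methods compared in this study.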
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that address the main difficulties of existing approaches. One is an improved convergence and (near-)optimality guarantee, obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is automatic feature selection using ALD-based kernel sparsification. The KLSPI algorithm therefore provides a general RL method with good generalization performance and a convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems.
It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
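The ALD sparsification test admits a compact sketch: a candidate state joins the kernel dictionary only if its feature-space image cannot be approximated, within a tolerance, by the span of the current dictionary. The RBF kernel, threshold and data below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

def ald_dictionary(samples, nu=1e-3):
    """Greedy dictionary via the approximate-linear-dependency (ALD) test:
    keep a sample only if its feature-space image cannot be approximated
    by the span of the current dictionary within squared error nu."""
    D = []
    for x in samples:
        if not D:
            D.append(x)
            continue
        K = np.array([[rbf(a, b) for b in D] for a in D])
        k = np.array([rbf(a, x) for a in D])
        c = np.linalg.solve(K + 1e-10 * np.eye(len(D)), k)
        delta = rbf(x, x) - float(k @ c)  # squared projection residual
        if delta > nu:
            D.append(x)
    return D

rng = np.random.default_rng(2)
samples = rng.uniform(-1.0, 1.0, size=(200, 1))
D = ald_dictionary(samples)
print(len(D), "dictionary atoms retained from", len(samples), "samples")
```

Keeping only these atoms is what bounds the size of the kernel representation and, in KLSPI's terms, performs the automatic feature selection.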
NASA Astrophysics Data System (ADS)
Yu, Y.; Shen, Y.; Chen, Y. J.
2015-12-01
By using ray theory in conjunction with the Born approximation, Dahlen et al. [2000] computed 3-D sensitivity kernels for finite-frequency seismic traveltimes. A series of studies have been conducted based on this theory to model the mantle velocity structure [e.g., Hung et al., 2004; Montelli et al., 2004; Ren and Shen, 2008; Yang et al., 2009; Liang et al., 2011; Tang et al., 2014]. One of the simplifications in the calculation of the kernels is the paraxial assumption, which may not be strictly valid near the receiver, the region of interest in regional teleseismic tomography. In this study, we improve the accuracy of traveltime sensitivity kernels of the first P arrival by eliminating the paraxial approximation. For calculation efficiency, the traveltime table built by the Fast Marching Method (FMM) is used to calculate both the wave vector and the geometrical spreading at every grid point in the whole volume. The improved kernels maintain the sign, but with different amplitudes at different locations. We also find that when the directivity of the scattered wave is taken into consideration, the differential sensitivity kernel of traveltimes measured at the vertical and radial components of the same receiver concentrates beneath the receiver, which can be used to invert for the structure inside the Earth. Compared with conventional teleseismic tomography, which uses the differential traveltimes between two stations in an array, this method is not affected by instrument response and timing errors, and reduces the uncertainty caused by the finite dimension of the model in regional tomography. In addition, the cross-dependence of P traveltimes on S-wave velocity anomalies is significant and sensitive to the structure beneath the receiver. Thus, with the component-differential finite-frequency sensitivity kernel, anomalies in both P-wave and S-wave velocity, and the Vp/Vs ratio, can be inverted for simultaneously.
Plasmon dispersion and Coulomb drag in low-density electron bi-layers
NASA Astrophysics Data System (ADS)
Badalyan, S. M.; Kim, C. S.; Vignale, G.; Senatore, G.
2007-03-01
We investigate the effect of exchange and correlation (xc) on the plasmon spectrum and the Coulomb drag between spatially separated low-density two-dimensional electron layers. We adopt a new approach, which employs dynamic xc kernels in the calculation of the bi-layer plasmon spectra and of the plasmon-mediated drag, and static many-body local field factors in the calculation of the particle-hole contribution to the drag. We observe that both optical and acoustical plasmon modes are strongly affected by xc corrections and shift in opposite directions with decreasing density. This is in stark contrast with the tendency observed within the random phase approximation (RPA). We find that the introduction of xc corrections results in a significant enhancement of the transresistivity and qualitative changes in its temperature dependence. In particular, the large high-temperature plasmon peak that is present in the RPA is found to disappear when the xc corrections are included. Our numerical results are in good agreement with the results of recent experiments by M. Kellogg et al., Solid State Commun. 123, 515 (2002).
SOME ENGINEERING PROPERTIES OF SHELLED AND KERNEL TEA (Camellia sinensis) SEEDS.
Altuntas, Ebubekir; Yildiz, Merve
2017-01-01
Camellia sinensis is the source of tea leaves and is now an economic crop grown around the world. Tea seed oil has been used for cooking in China and other Asian countries for more than a thousand years. Tea is the most widely consumed beverage after water in the world. It is mainly produced in Asia and central Africa, and exported throughout the world. Some engineering properties (size dimensions, sphericity, volume, bulk and true densities, friction coefficient, colour characteristics and mechanical behaviour as rupture force) of shelled and kernel tea (Camellia sinensis) seeds were determined in this study. This research was carried out for shelled and kernel tea seeds. The shelled tea seeds used in this study were obtained from the East-Black Sea Tea Cooperative Institution in Rize city, Turkey. Shelled and kernel tea seeds were characterized as large and small sizes. The average geometric mean diameters of the shelled tea seeds were 15.8 mm (large size) and 10.7 mm (small size), with seed masses of 1.47 g and 0.49 g, respectively; the average geometric mean diameters of the kernel tea seeds were 11.8 mm (large size) and 8 mm (small size), with seed masses of 0.97 g and 0.31 g, respectively. The sphericity, surface area and volume values were found to be higher for the large size than the small size for both the shelled and kernel tea samples. The shelled tea seeds' colour intensity (chroma) was found to be between 59.31 and 64.22 for the large size, while the kernel tea seeds' chroma values were between 56.04 and 68.34 for the large size. The rupture force values of kernel tea seeds were higher than those of shelled tea seeds for the large size along the X axis, whereas for large shelled tea seeds the rupture force values along the X axis were higher than along the Y axis. The static coefficients of friction of shelled and kernel tea seeds, for both the large and small sizes, showed higher values for rubber than for the other friction surfaces.
Some engineering properties, such as geometric mean diameter, sphericity, volume, bulk and true densities, the coefficient of friction, L*, a*, b* colour characteristics and rupture force of shelled and kernel tea (Camellia sinensis) seeds, will inform the design of the equipment used in postharvest treatments.
Wilson loops and QCD/string scattering amplitudes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makeenko, Yuri; Olesen, Poul; Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen O
2009-07-15
We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills theory case.
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2016-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
Screening of the aerodynamic and biophysical properties of barley malt
NASA Astrophysics Data System (ADS)
Ghodsvali, Alireza; Farzaneh, Vahid; Bakhshabadi, Hamid; Zare, Zahra; Karami, Zahra; Mokhtarian, Mohsen; Carvalho, Isabel. S.
2016-10-01
An understanding of the aerodynamic and biophysical properties of barley malt is necessary for the appropriate design of equipment for the handling, shipping, dehydration, grading, sorting and warehousing of this strategic crop. Malting is a complex biotechnological process that includes steeping, germination and finally, the dehydration of cereal grains under controlled temperature and humidity conditions. In this investigation, the biophysical properties of barley malt were predicted using two models of artificial neural networks as well as response surface methodology. Steeping time and germination time were selected as the independent variables, and 1000-kernel weight, kernel density and terminal velocity were selected as the dependent variables (responses). The obtained outcomes showed that the artificial neural network model, with a logarithmic sigmoid activation function, presents more precise results than the response surface model in the prediction of the aerodynamic and biophysical properties of produced barley malt. This model presented the best result with 8 nodes in the hidden layer, and significant correlation coefficient values of 0.783, 0.767 and 0.991 were obtained for the responses 1000-kernel weight, kernel density and terminal velocity, respectively. The outcomes indicated that this novel technique could be successfully applied in quantitative and qualitative monitoring within the malting process.
Locally adaptive methods for KDE-based random walk models of reactive transport in porous media
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2017-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
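The split-and-merge bookkeeping described above can be sketched in one dimension; conserving total mass (and the centre of mass) is the invariant to maintain. The offsets and thresholds below are illustrative assumptions, not the authors' adaptive criteria:

```python
import numpy as np

def split(positions, masses, h, m_max):
    """Split particles heavier than m_max into two of half mass, offset by
    the kernel bandwidth h; mass and centre of mass are preserved."""
    pos, mas = [], []
    for x, m in zip(positions, masses):
        if m > m_max:
            pos += [x - 0.5 * h, x + 0.5 * h]
            mas += [0.5 * m, 0.5 * m]
        else:
            pos.append(x)
            mas.append(m)
    return np.array(pos), np.array(mas)

def merge(positions, masses, r_min):
    """Merge neighbouring particles closer than r_min into one particle at
    their centre of mass."""
    order = np.argsort(positions)
    pos, mas = list(positions[order]), list(masses[order])
    out_p, out_m = [], []
    i = 0
    while i < len(pos):
        if i + 1 < len(pos) and pos[i + 1] - pos[i] < r_min:
            m = mas[i] + mas[i + 1]
            out_p.append((mas[i] * pos[i] + mas[i + 1] * pos[i + 1]) / m)
            out_m.append(m)
            i += 2
        else:
            out_p.append(pos[i])
            out_m.append(mas[i])
            i += 1
    return np.array(out_p), np.array(out_m)
```

Splitting keeps the particle density (and hence the relative KDE error) from collapsing in diluted regions, while merging caps the particle count where the plume is dense.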
Kandianis, Catherine B.; Michenfelder, Abigail S.; Simmons, Susan J.; Grusak, Michael A.; Stapleton, Ann E.
2013-01-01
The improvement of grain nutrient profiles for essential minerals and vitamins through breeding strategies is a target important for agricultural regions where nutrient poor crops like maize contribute a large proportion of the daily caloric intake. Kernel iron concentration in maize exhibits a broad range. However, the magnitude of genotype by environment (GxE) effects on this trait reduces the efficacy and predictability of selection programs, particularly when challenged with abiotic stress such as water and nitrogen limitations. Selection has also been limited by an inverse correlation between kernel iron concentration and the yield component of kernel size in target environments. Using 25 maize inbred lines for which extensive genome sequence data is publicly available, we evaluated the response of kernel iron density and kernel mass to water and nitrogen limitation in a managed field stress experiment using a factorial design. To further understand GxE interactions we used partition analysis to characterize response of kernel iron and weight to abiotic stressors among all genotypes, and observed two patterns: one characterized by higher kernel iron concentrations in control over stress conditions, and another with higher kernel iron concentration under drought and combined stress conditions. Breeding efforts for this nutritional trait could exploit these complementary responses through combinations of favorable allelic variation from these already well-characterized genetic stocks. PMID:24363659
Kernel and divergence techniques in high energy physics separations
NASA Astrophysics Data System (ADS)
Bouř, Petr; Kůs, Václav; Franc, Jiří
2017-10-01
Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the Tevatron particle accelerator's DØ experiment at Fermilab and provide final top-antitop signal separation results. We achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
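The Fourier-transform route to kernel estimates can be sketched with a binned estimator: histogram the sample onto a regular grid and convolve with the kernel via FFT, which agrees with the direct sum up to binning error. The grid, bandwidth and data below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def kde_direct(data, grid, h):
    """Direct O(n*m) Gaussian kernel density estimate."""
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

def kde_fft(data, grid, h):
    """Binned estimate: histogram onto the grid, convolve with the Gaussian
    kernel by FFT (circular convolution; the density must vanish at the edges)."""
    n, dx = len(grid), grid[1] - grid[0]
    counts, _ = np.histogram(data, bins=n, range=(grid[0] - dx / 2, grid[-1] + dx / 2))
    idx = np.arange(n)
    dist = np.minimum(idx, n - idx) * dx  # wrapped grid distances
    kern = np.exp(-0.5 * (dist / h) ** 2)
    kern /= kern.sum() * dx               # discrete kernel normalization
    dens = np.fft.irfft(np.fft.rfft(counts) * np.fft.rfft(kern), n=n)
    return dens / len(data)

rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, 2000)
grid = np.linspace(-6.0, 6.0, 512)
est_fft = kde_fft(data, grid, h=0.3)
est_dir = kde_direct(data, grid, h=0.3)
err = float(np.max(np.abs(est_fft - est_dir)))  # binning error only
```

The FFT version costs O(m log m) regardless of sample size once the data are binned, which is what makes the Fourier approach attractive for the large Monte Carlo samples mentioned in the abstract.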
Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua
2016-02-01
Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.
On supervised graph Laplacian embedding CA model & kernel construction and its application
NASA Astrophysics Data System (ADS)
Zeng, Junwei; Qian, Yongsheng; Wang, Min; Yang, Yongzhong
2017-01-01
There are many methods to construct a kernel from given data attribute information. The Gaussian radial basis function (RBF) kernel is one of the most popular ways to construct a kernel. The key observation is that real-world data carry, besides attribute information, label information indicating the data class. In order to make use of both data attribute information and data label information, in this work we propose a supervised kernel construction method. Supervised information from training data is integrated into the standard kernel construction process to improve the discriminative property of the resulting kernel. As a further key application, a supervised Laplacian embedding cellular automaton model is developed for two-lane heterogeneous traffic flow with safe distance and large-scale trucks. Based on the properties of traffic flow in China, we re-calibrate the cell length, velocity, random slowing mechanism and lane-change conditions, and use simulation tests to study the relationships among the speed, density and flux. The numerical results show that large-scale trucks have great effects on the traffic flow, which are relevant to the proportion of large-scale trucks, the random slowing rate and the frequency of lane changes.
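A minimal version of supervised kernel construction can be sketched by boosting an RBF kernel where training labels agree; since both factors are positive semi-definite, their product remains a valid kernel. The boost form and parameters are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def rbf(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def supervised_kernel(X, labels, sigma=1.0, beta=0.5):
    """RBF kernel boosted where training labels agree.  The RBF factor and
    the label-agreement factor (1 + beta * [same label]) are each positive
    semi-definite, so their elementwise product is a valid kernel."""
    same = (labels[:, None] == labels[None, :]).astype(float)
    return rbf(X, X, sigma) * (1.0 + beta * same)

X = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
y = np.array([0, 0, 1])
K = supervised_kernel(X, y)
```

Entries between same-class points are scaled up by (1 + beta), sharpening class separation in the induced feature space while leaving between-class similarities untouched.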
Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012
Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade
2017-01-01
ABSTRACT OBJECTIVE To analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. METHODS Exploratory study of cases of classical dengue and severe dengue with laboratory confirmation of the infection in the city of Rio de Janeiro during 2011 and 2012. Cases notified in the Notifiable Diseases Information System (SINAN) in 2011 and 2012 were georeferenced using the “street” and “number” fields, with geocoding performed automatically by the ArcGIS 10 Geocoding tool. The spatial analysis was done through the kernel density estimator. RESULTS Kernel density analysis pointed out hotspots for classical dengue that did not coincide geographically with those for severe dengue and were located in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process lost 41% of classical dengue records and 17% of severe dengue records owing to incomplete addresses on the SINAN form. CONCLUSIONS The hotspots near favelas suggest that the social vulnerability of these localities, where the supply of and access to essential goods and services are deficient, can influence the occurrence of the disease. To reduce this vulnerability, interventions must be linked to macroeconomic policies. PMID:28832752
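The kernel density step used to map hotspots can be sketched as follows; the coordinates, bandwidth, and cluster placement are synthetic stand-ins, not the study's data:

```python
import numpy as np

def kde2d(points, grid, bandwidth=0.3):
    """Gaussian kernel density estimate in the (x, y) plane, the same
    estimator family used to locate case hotspots."""
    diff = grid[:, None, :] - points[None, :, :]       # (m, n, 2)
    sq = np.sum(diff**2, axis=2) / bandwidth**2
    return np.exp(-0.5 * sq).mean(axis=1) / (2.0 * np.pi * bandwidth**2)

rng = np.random.default_rng(0)
# Synthetic coordinates: a tight cluster standing in for a hotspot,
# plus uniform background cases (purely illustrative data).
cluster = rng.normal([0.0, 0.0], 0.1, size=(200, 2))
background = rng.uniform(-3.0, 3.0, size=(100, 2))
cases = np.vstack([cluster, background])

density = kde2d(cases, np.array([[0.0, 0.0], [2.5, 2.5]]))
```

Evaluated on a fine grid instead of two points, the same estimate yields the continuous density surface from which hotspots are read off.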
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Rowe, W. S.
1984-01-01
The design of active controls to stabilize flight vehicles requires unsteady aerodynamics that are valid for arbitrary complex frequencies. To support such designs, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite-limit integral, which is evaluated using Bessel and Struve functions, and a finite-limit integral, which is expanded in series and integrated termwise in closed form. The developed series expansions give reliable answers for all complex reduced frequencies and execute faster than exponential approximations for many pressure stations.
Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel
NASA Astrophysics Data System (ADS)
Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads
2015-03-01
Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields must be evaluated at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm given by a scalar inner product of the interpolating kernels that parameterize the velocity fields. Minimizing this term with the standard spline interpolation kernels (linear or cubic) is only approximate because they lack a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative data showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in the amygdala) and B-spline free-form deformation (p<0.05 in the amygdala and cortical gray matter).
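A sketch of the Wendland C2 kernel and its use to evaluate a field at a non-integer location; the normalized Shepard-style weighting and the unit support radius are illustrative simplifications, not the paper's SVF parameterization:

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 kernel: compactly supported on [0, 1] and positive
    definite in up to three dimensions, so the induced norm is exactly
    computable, unlike with linear or cubic B-spline interpolants."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

# Interpolate a 1-D velocity sample at a non-integer location:
grid = np.arange(5.0)                       # integer grid nodes
v = np.array([0.0, 1.0, 0.0, -1.0, 0.0])    # velocity values at the nodes
x = 1.4
weights = wendland_c2(np.abs(grid - x))     # support radius 1 (assumed)
v_x = np.sum(weights * v) / np.sum(weights)
```

Because the support is compact, only the two nodes within unit distance of x contribute, keeping evaluation as cheap as B-spline interpolation.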
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm introduced to estimate value functions in reinforcement learning. The algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
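A stripped-down kernel TD(0) value estimator conveys the core update; the full KTD(λ) algorithm adds eligibility traces and policy improvement, and every parameter below is an illustrative assumption:

```python
import numpy as np

class KernelTD:
    """Minimal kernel TD(0) value estimator: a sketch, not the full KTD(lambda).
    The value function is a kernel expansion over visited states; each TD
    error adds a new center with a small coefficient."""
    def __init__(self, gamma=0.9, eta=0.5, kwidth=1.0):
        self.gamma, self.eta, self.kwidth = gamma, eta, kwidth
        self.centers, self.coefs = [], []

    def kernel(self, a, b):
        # Gaussian kernel: strictly positive definite, as the convergence
        # guarantee mentioned in the abstract requires.
        return np.exp(-np.sum((a - b)**2) / (2.0 * self.kwidth**2))

    def value(self, x):
        return sum(c * self.kernel(m, x) for m, c in zip(self.centers, self.coefs))

    def update(self, x, reward, x_next):
        td_error = reward + self.gamma * self.value(x_next) - self.value(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.coefs.append(self.eta * td_error)
        return td_error

agent = KernelTD()
x = np.array([0.0])
for _ in range(60):
    agent.update(x, 1.0, x)   # self-loop with reward 1: V should approach 1/(1-gamma) = 10
```

Repeated updates drive the TD error toward zero, so the estimated value converges to the fixed point of the Bellman equation for this toy chain.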
Efficient Multiple Kernel Learning Algorithms Using Low-Rank Representation.
Niu, Wenjia; Xia, Kewen; Zu, Baokai; Bai, Jianchuan
2017-01-01
Unlike the Support Vector Machine (SVM), Multiple Kernel Learning (MKL) lets a dataset choose the useful kernels according to its distribution characteristics rather than fixing a single kernel in advance. It has been shown in the literature that MKL achieves superior recognition accuracy compared with SVM, but at the expense of time-consuming computations, which creates analytical and computational difficulties in solving MKL algorithms. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm using the Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce dimension while retaining the data features under a global low-rank constraint. Furthermore, we redesign the binary-class MKL as a multiclass MKL based on a pairwise strategy. Finally, the recognition accuracy and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm is an efficient kernel-weight allocation method for MKL and substantially boosts its performance.
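Two building blocks of such an approach, the convex combination of base kernels that MKL optimizes and a low-rank approximation of a kernel matrix, can be sketched as follows; the truncated eigendecomposition is a generic stand-in for the paper's LRR step:

```python
import numpy as np

def combine_kernels(kernel_list, weights):
    """Convex combination of base kernel matrices: the object MKL learns."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # normalize to a convex combination
    return sum(wi * K for wi, K in zip(w, kernel_list))

def low_rank_approx(K, rank):
    """Best rank-r approximation of a PSD kernel matrix via truncated
    eigendecomposition (a generic stand-in for the paper's LRR step)."""
    vals, vecs = np.linalg.eigh(K)          # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:rank]     # keep the largest eigenvalues
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

rng = np.random.default_rng(1)
v = rng.normal(size=5)
K1 = np.outer(v, v)                         # rank-1 PSD base kernel
K2 = np.eye(5)                              # identity base kernel
K = combine_kernels([K1, K2], [3.0, 1.0])
K_lr = low_rank_approx(K1, 1)
```

Working with the low-rank factors instead of the full matrices is what reduces the cost of the kernel-weight optimization.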
DOE Office of Scientific and Technical Information (OSTI.GOV)
Säkkinen, Niko; Peng, Yang; Fritz-Haber-Institut der Max-Planck-Gesellschaft, Faradayweg 4-6, 14195 Berlin-Dahlem
2015-12-21
We present a Kadanoff-Baym formalism to study time-dependent phenomena for systems of interacting electrons and phonons in the framework of many-body perturbation theory. The formalism takes correctly into account effects of the initial preparation of an equilibrium state and allows for an explicit time-dependence of both the electronic and phononic degrees of freedom. The method is applied to investigate the charge neutral and non-neutral excitation spectra of a homogeneous, two-site, two-electron Holstein model. This is an extension of a previous study of the ground state properties in the Hartree (H), partially self-consistent Born (Gd) and fully self-consistent Born (GD) approximations published in Säkkinen et al. [J. Chem. Phys. 143, 234101 (2015)]. Here, the homogeneous ground state solution is shown to become unstable for a sufficiently strong interaction while a symmetry-broken ground state solution is shown to be stable in the Hartree approximation. Signatures of this instability are observed for the partially self-consistent Born approximation but are not found for the fully self-consistent Born approximation. By understanding the stability properties, we are able to study the linear response regime by calculating the density-density response function by time-propagation. This amounts to a solution of the Bethe-Salpeter equation with a sophisticated kernel. The results indicate that none of the approximations is able to describe the response function during or beyond the bipolaronic crossover for the parameters investigated. Overall, we provide an extensive discussion on when the approximations are valid and how they fail to describe the studied exact properties of the chosen model system.
NASA Astrophysics Data System (ADS)
Lindemer, T. B.; Voit, S. L.; Silva, C. M.; Besmann, T. M.; Hunt, R. D.
2014-05-01
The US Department of Energy is developing a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with uranium nitride (UN) kernels with diameters near 825 μm. This effort explores factors involved in the conversion of uranium oxide-carbon microspheres into UN kernels. An analysis of previous studies with sufficient experimental details is provided. Thermodynamic calculations were made to predict pressures of carbon monoxide and other relevant gases for several reactions that can be involved in the conversion of uranium oxides and carbides into UN. Uranium oxide-carbon microspheres were heated in a microbalance with an attached mass spectrometer to determine details of calcining and carbothermic conversion in argon, nitrogen, and vacuum. A model was derived from experiments on the vacuum conversion to uranium oxide-carbide kernels. UN-containing kernels were fabricated using this vacuum conversion as part of the overall process. Carbonitride kernels of ∼89% of theoretical density were produced along with several observations concerning the different stages of the process.
Surface-hopping dynamics and decoherence with quantum equilibrium structure.
Grunwald, Robbie; Kim, Hyojoon; Kapral, Raymond
2008-04-28
In open quantum systems, decoherence occurs through interaction of a quantum subsystem with its environment. The computation of expectation values requires a knowledge of the quantum dynamics of operators and sampling from initial states of the density matrix describing the subsystem and bath. We consider situations where the quantum evolution can be approximated by quantum-classical Liouville dynamics and examine the circumstances under which the evolution can be reduced to surface-hopping dynamics, where the evolution consists of trajectory segments exclusively evolving on single adiabatic surfaces, with probabilistic hops between these surfaces. The justification for the reduction depends on the validity of a Markovian approximation on a bath averaged memory kernel that accounts for quantum coherence in the system. We show that such a reduction is often possible when initial sampling is from either the quantum or classical bath initial distributions. If the average is taken only over the quantum dispersion that broadens the classical distribution, then such a reduction is not always possible.
NASA Astrophysics Data System (ADS)
Toufik, Mekkaoui; Atangana, Abdon
2017-10-01
Recently, a new concept of fractional differentiation with a non-local and non-singular kernel was introduced to overcome the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme is developed for the newly established fractional differentiation, and we present its error analysis in general terms. The new numerical scheme is applied to solve linear and non-linear fractional differential equations. With this method, no predictor-corrector is needed to obtain an efficient algorithm. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges rapidly toward the exact solution.
A new discrete dipole kernel for quantitative susceptibility mapping.
Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian
2018-09-01
Most approaches for quantitative susceptibility mapping (QSM) are based on a forward-model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward-model calculation and susceptibility inversion was evaluated against the continuous formulation with both synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the proposed discrete kernel reduced errors and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high-field MRI, a topic for future investigations. The proposed dipole kernel has a straightforward implementation in existing QSM routines.
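The contrast between the continuous dipole kernel and a discrete one can be sketched in k-space. The discrete-Laplacian frequencies 2-2cos(2πn/N) used below are one plausible way to discretize the operator, not necessarily the paper's exact formulation:

```python
import numpy as np

def dipole_kernel(shape, discrete=False):
    """Unit-less dipole kernel in k-space: D = 1/3 - kz^2 / |k|^2.
    With discrete=True, the squared frequencies are replaced by the
    discrete-Laplacian eigenvalues 2 - 2*cos(2*pi*n/N) (an illustrative
    discretization; the published kernel may differ in detail)."""
    axes = [np.fft.fftfreq(n) for n in shape]
    kz, ky, kx = np.meshgrid(*axes, indexing="ij")
    if discrete:
        kz2, ky2, kx2 = (2.0 - 2.0 * np.cos(2.0 * np.pi * k) for k in (kz, ky, kx))
    else:
        kz2, ky2, kx2 = kz**2, ky**2, kx**2
    k2 = kx2 + ky2 + kz2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz2 / k2
    D[k2 == 0] = 0.0            # conventional choice at the k-space origin
    return D

D_cont = dipole_kernel((8, 8, 8))
D_disc = dipole_kernel((8, 8, 8), discrete=True)
```

At low frequencies the two kernels agree (on the kz axis both give 1/3 - 1 = -2/3); they differ near the Nyquist frequency, which is where the aliasing behavior discussed in the abstract originates.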
Production of near-full density uranium nitride microspheres with a hot isostatic press
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMurray, Jacob W.; Kiggans, Jr., Jim O.; Helmreich, Grant W.
Depleted uranium nitride (UN) kernels with diameters ranging from 420 to 858 microns and densities between 87% and 91% of theoretical density (TD) were postprocessed using a hot isostatic press (HIP) in an argon gas medium. This treatment was shown to increase the TD to above 97%. Uranium nitride is highly reactive with oxygen; therefore, a novel crucible design was implemented to remove impurities in the argon gas via in situ gettering and avoid oxidation of the UN kernels. The density before and after each HIP procedure was calculated from the average weight, volume, and ellipticity determined with established particle characterization techniques. Furthermore, micrographs confirmed the nearly full densification of the particles using the gettering approach and the HIP processing parameters investigated in this work.
A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4
NASA Technical Reports Server (NTRS)
Park, Young-Keun; Fahrenthold, Eric P.
2004-01-01
An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.
Modeling utilization distributions in space and time
Keating, K.A.; Cherry, S.
2009-01-01
W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed.
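The wrapped Cauchy density and its use as a smoothing kernel for circular covariates can be sketched as follows; the concentration parameter and sample values are illustrative:

```python
import numpy as np

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle; rho in [0, 1) controls
    concentration (rho -> 1 concentrates all mass at mu)."""
    return (1.0 - rho**2) / (
        2.0 * np.pi * (1.0 + rho**2 - 2.0 * rho * np.cos(theta - mu)))

def circular_kde(theta_eval, samples, rho=0.9):
    """KDE for circular data (e.g. day of year mapped to [0, 2*pi)),
    using the wrapped Cauchy as the smoothing kernel."""
    theta_eval = np.atleast_1d(theta_eval)
    return np.array([wrapped_cauchy(t, samples, rho).mean() for t in theta_eval])

grid = np.linspace(0.0, 2.0 * np.pi, 2001)
pdf = wrapped_cauchy(grid, np.pi, 0.5)

# Observations clustered near theta = pi (e.g. midsummer days of year):
samples = np.array([np.pi - 0.1, np.pi, np.pi + 0.1])
dens = circular_kde([np.pi, 0.0], samples, rho=0.8)
```

Because the kernel is periodic by construction, density estimated near day 365 correctly spills over to day 1, which a linear kernel on day-of-year would get wrong.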
Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...
2015-02-25
We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation in a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited-state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculate the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.
NASA Astrophysics Data System (ADS)
Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.
2014-12-01
Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given relatively few particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. Percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower-error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration relies on an accurate representation of the BTC, especially when data are scarce.
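A Gaussian KDE over particle arrival times illustrates how a smooth BTC can be recovered from relatively few particles; Silverman's rule of thumb stands in for the optimized bandwidth h of the cited work, and the lognormal arrival times are synthetic:

```python
import numpy as np

def kde_btc(arrival_times, t_grid, bandwidth=None):
    """Breakthrough curve as a Gaussian KDE over particle arrival times.
    Uses Silverman's rule when no bandwidth is given (an illustrative
    default; the cited work optimizes the bandwidth h)."""
    t = np.asarray(arrival_times, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * t.std() * len(t) ** (-0.2)   # Silverman's rule
    u = (t_grid[:, None] - t[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (
        len(t) * bandwidth * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
arrivals = rng.lognormal(mean=0.0, sigma=0.5, size=100)   # np = 100 particles
t_grid = np.linspace(0.0, 10.0, 1000)
btc = kde_btc(arrivals, t_grid)
```

A histogram of the same 100 arrivals would be jagged and binning-dependent; the KDE curve is smooth and integrates to unit mass, which is what makes peak and averaged concentrations reliable at small np.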
Lu, Deyu
2016-08-05
A systematic route to go beyond the exact exchange plus random phase approximation (RPA) is to include a physical exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation theorem. Previously, [D. Lu, J. Chem. Phys. 140, 18A520 (2014)], we found that non-local kernels with a screening length depending on the local Wigner-Seitz radius, r_s(r), suffer an error associated with a spurious long-range repulsion in van der Waals bounded systems, which deteriorates the binding energy curve as compared to RPA. Here, we analyze the source of the error and propose to replace r_s(r) by a global, average r_s in the kernel. Exemplary studies with the Corradini, del Sole, Onida, and Palummo kernel show that while this change does not affect the already outstanding performance in crystalline solids, using an average r_s significantly reduces the spurious long-range tail in the exchange-correlation kernel in van der Waals bounded systems. Finally, when this method is combined with further corrections using local dielectric response theory, the binding energy of the Kr dimer is improved three times as compared to RPA.
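The Wigner-Seitz radius entering the kernel follows from the density via (4/3)πr_s³ = 1/n in atomic units; how the global average is taken is an assumption in this sketch (a density-weighted mean is just one plausible choice):

```python
import numpy as np

def wigner_seitz_radius(n):
    """Wigner-Seitz radius from the electron density n (atomic units),
    defined by (4/3) * pi * r_s**3 = 1 / n."""
    return (3.0 / (4.0 * np.pi * np.asarray(n, dtype=float))) ** (1.0 / 3.0)

# Replacing the local r_s(r) with one global value; the density-weighted
# average below is an illustrative averaging scheme, not the paper's.
density = np.array([0.02, 0.05, 0.10])
rs_local = wigner_seitz_radius(density)
rs_avg = np.average(rs_local, weights=density)
```

Using one global r_s removes the spatial variation of the screening length, which is what suppresses the spurious long-range tail described in the abstract.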
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant.
In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel on the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed the ASM kernel to be effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.
On- and off-axis spectral emission features from laser-produced gas breakdown plasmas
NASA Astrophysics Data System (ADS)
Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; Brumfield, B. E.; Phillips, M. C.; Miloshevsky, G.
2017-06-01
Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during their early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with a pulse duration of 6 ns are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density, and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space and time resolved spectroscopy is used for evaluating the emission features and for inferring plasma physical conditions at on- and off-axis positions. The structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using the computational fluid dynamics code. 
The emission from the kernel showed that spectral features from ions, atoms, and molecules are separated in time, with early-time temperatures and densities in excess of 35 000 K and 4 × 10^18/cm^3 and the existence of thermal equilibrium. However, the emission from the off-kernel positions of the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-local thermodynamic equilibrium (non-LTE) conditions. Our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.
On- and off-axis spectral emission features from laser-produced gas breakdown plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.
Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during its early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of surrounding ambient: viz. photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early timesmore » of its creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission features of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with 6 ns pulse duration are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times while space and time resolved spectroscopy is used for evaluating the emission features as well as for inferring plasma fundaments at on- and off-axis. Structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using computational fluid dynamics code. 
The emission from the kernel showed that spectral features from ions, atoms and molecules are separated in time with an early time temperatures and densities in excess of 35000 K and 4×10 18 /cm 3 with an existence of thermal equilibrium. However, the emission from the off-kernel positions from the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N 2 bands and represented by non-LTE conditions. Finally, our results also highlight that the ultraviolet radiation emitted during early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.« less
On- and off-axis spectral emission features from laser-produced gas breakdown plasmas
Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; ...
2017-06-01
Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during its early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of surrounding ambient: viz. photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early timesmore » of its creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission features of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with 6 ns pulse duration are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times while space and time resolved spectroscopy is used for evaluating the emission features as well as for inferring plasma fundaments at on- and off-axis. Structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using computational fluid dynamics code. 
Emission from the kernel showed that spectral features from ions, atoms, and molecules are separated in time, with early-time temperatures and densities in excess of 35000 K and 4×10^18 cm^-3 and evidence of thermal equilibrium. In contrast, emission from off-kernel positions showed enhanced ultraviolet radiation with the presence of N2 bands and is characterized by non-LTE conditions. Our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of photoexcitation of the surrounding medium.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal
Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29 W to 42 W.
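For readers unfamiliar with the kernel being benchmarked above, the sketch below shows the generic form of a single-precision vector-add OpenCL kernel together with a pure-Python reference check. The kernel source is illustrative only; the report's actual kernel, its Intel FPGA SDK optimization pragmas, and the host code are not reproduced here.

```python
# Generic single-precision vector-add OpenCL kernel (illustrative; the report's
# actual kernel and FPGA-specific attributes are not shown).
VECTOR_ADD_KERNEL = """
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *c) {
    int i = get_global_id(0);   /* one work-item per vector element */
    c[i] = a[i] + b[i];
}
"""

def vector_add_reference(a, b):
    """Pure-Python reference used to validate device results element-wise."""
    return [x + y for x, y in zip(a, b)]

a = [0.5 * i for i in range(8)]
b = [2.0] * 8
c = vector_add_reference(a, b)
```

Compute-unit duplication and kernel vectorization, the two optimizations evaluated in the report, respectively replicate this pipeline or widen it to process several elements per work-item.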
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yanai, Takeshi; Fann, George I.; Beylkin, Gregory
We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) in the Tamm–Dancoff (TD) approximation, based on a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited-state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
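A minimal sketch of the spheroidal-kernel idea described above: a standard 3D cubic-spline kernel evaluated with independent smoothing lengths along the deformation axis and in the perpendicular plane. The cubic-spline form and its 1/π normalization are common SPH conventions assumed here, not necessarily the paper's exact kernel.

```python
import math

def cubic_spline(q):
    """Standard 3D cubic-spline shape function with support q < 2."""
    if q < 1.0:
        return 1.0 - 1.5 * q**2 + 0.75 * q**3
    if q < 2.0:
        return 0.25 * (2.0 - q)**3
    return 0.0

def spheroidal_kernel(dx, dy, dz, h_perp, h_axis):
    """Cubic-spline kernel with independent smoothing lengths along the
    preferential (z) axis and in the perpendicular (x-y) plane, so the
    support is a spheroid rather than a sphere.  The 1/pi normalization
    assumes the standard 3D cubic spline."""
    q = math.sqrt((dx**2 + dy**2) / h_perp**2 + dz**2 / h_axis**2)
    return cubic_spline(q) / (math.pi * h_perp**2 * h_axis)
```

When `h_perp == h_axis` the kernel reduces to the ordinary spherical cubic spline; shrinking `h_axis` concentrates resolution along the deformation direction, which is the gain the paper demonstrates.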
An Ensemble Approach to Building Mercer Kernels with Prior Information
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear, symmetric, positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, such as different versions of EM, and numeric optimization methods, such as conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
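One common way to build a Mercer kernel from a mixture density can be sketched as follows, under toy assumptions: a single hand-set two-component 1-D Gaussian mixture stands in for the ensemble of bagged, EM-fitted mixture models, and the means, widths, and weights below are hypothetical. The kernel is the inner product of component-posterior vectors, so it is symmetric and positive semi-definite by construction.

```python
import math

# Hypothetical toy mixture: two 1-D Gaussian components with illustrative
# parameters (not fitted by AUTOBAYES-generated EM code).
MEANS, SIGMAS, WEIGHTS = [-1.0, 1.0], [0.5, 0.5], [0.5, 0.5]

def posteriors(x):
    """Component membership probabilities P(k | x) under the toy mixture."""
    dens = [w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
            for m, s, w in zip(MEANS, SIGMAS, WEIGHTS)]
    total = sum(dens)
    return [d / total for d in dens]

def mixture_density_kernel(x, y):
    """K(x, y) = sum_k P(k|x) P(k|y): an inner product of posterior vectors,
    hence symmetric and positive semi-definite (a Mercer kernel)."""
    return sum(px * py for px, py in zip(posteriors(x), posteriors(y)))
```

Points that the mixture assigns to the same component receive kernel values near 1, while points in different components receive values near 0, which is how the clustering prior is encoded in the kernel.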
Transient and asymptotic behaviour of the binary breakage problem
NASA Astrophysics Data System (ADS)
Mantzaris, Nikos V.
2005-06-01
The general binary breakage problem with power-law breakage functions and two families of symmetric and asymmetric breakage kernels is studied in this work. A useful transformation leads to an equation that predicts self-similar solutions in its asymptotic limit and offers explicit knowledge of the mean size and particle density at each point in dimensionless time. A novel moving boundary algorithm in the transformed coordinate system is developed, allowing the accurate prediction of the full transient behaviour of the system from the initial condition up to the point where self-similarity is achieved, and beyond if necessary. The numerical algorithm is very rapid and its results are in excellent agreement with known analytical solutions. In the case of the symmetric breakage kernels only unimodal, self-similar number density functions are obtained asymptotically for all parameter values and independent of the initial conditions, while in the case of asymmetric breakage kernels, bimodality appears for high degrees of asymmetry and sharp breakage functions. For symmetric and discrete breakage kernels, self-similarity is not achieved. The solution exhibits sustained oscillations with amplitude that depends on the initial condition and the sharpness of the breakage mechanism, while the period is always fixed and equal to ln 2 with respect to dimensionless time.
Quantification of process variables for carbothermic synthesis of UC1-xNx fuel microspheres
Lindemer, Terrance B.; Silva, Chinthaka M.; Henry, Jr, John James; ...
2016-11-05
This report details the continued investigation of process variables involved in converting sol-gel-derived, urania-carbon microspheres to ~820-μm-dia. UC1-xNx fuel kernels in flow-through, vertical Mo and W crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO3-H2O-C microspheres in Ar and H2-containing gases, conversion of the resulting UO2-C kernels to dense UO2:2UC in the same gases and vacuum, and its conversion in N2 to UC1-xNx (x = ~0.85). The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing the precursor UO2:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC1-xNx at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Increasing the UC1-xNx kernel nitride component to ~0.98 in flowing N2-H2 mixtures to evolve HCN was shown to be quantitatively consistent with present and past experiments and the only useful application of H2 in the entire process.
Using kernel density estimation to understand the influence of neighbourhood destinations on BMI
King, Tania L; Bentley, Rebecca J; Thornton, Lukar E; Kavanagh, Anne M
2016-01-01
Objectives: Little is known about how the distribution of destinations in the local neighbourhood is related to body mass index (BMI). Kernel density estimation (KDE) is a spatial analysis technique that accounts for the location of features relative to each other. Using KDE, this study investigated whether individuals living near destinations (shops and service facilities) that are more intensely distributed rather than dispersed have lower BMIs. Study design and setting: A cross-sectional study of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Methods: Destinations were geocoded, and kernel density estimates of destination intensity were created using kernels of 400, 800 and 1200 m. Using multilevel linear regression, the association between destination intensity (classified in quintiles Q1 (least) to Q5 (most)) and BMI was estimated in models that adjusted for the following confounders: age, sex, country of birth, education, dominant household occupation, household type, disability/injury and area disadvantage. Separate models included a physical activity variable. Results: For kernels of 800 and 1200 m, there was an inverse relationship between BMI and more intensely distributed destinations (compared to areas with least destination intensity). Effects were significant at 1200 m: Q4, β −0.86, 95% CI −1.58 to −0.13, p=0.022; Q5, β −1.03, 95% CI −1.65 to −0.41, p=0.001. Inclusion of physical activity in the models attenuated the effects, although they remained marginally significant for Q5 at 1200 m: β −0.77, 95% CI −1.52 to −0.02, p=0.045. Conclusions: This study, conducted within urban Melbourne, Australia, found that participants living in areas of greater destination intensity within 1200 m of home had lower BMIs. Effects were partly explained by physical activity. The results suggest that increasing the intensity of destination distribution could reduce BMI levels by encouraging higher levels of physical activity. PMID:26883235
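The destination-intensity surface described above can be sketched as a planar KDE. The quartic (biweight) kernel below is a common default in GIS KDE tools and is an assumption here; only the 400/800/1200 m bandwidths come from the study.

```python
import math

def destination_intensity(point, destinations, bandwidth=1200.0):
    """Kernel density estimate of destination intensity at `point` (coordinates
    in metres), using a 2D quartic (biweight) kernel: destinations closer than
    the bandwidth contribute, with weight falling off smoothly to zero."""
    x0, y0 = point
    total = 0.0
    for x, y in destinations:
        d = math.hypot(x - x0, y - y0)
        if d < bandwidth:
            u = d / bandwidth
            # 3 / (pi h^2) normalizes the 2D quartic kernel to integrate to 1.
            total += (3.0 / (math.pi * bandwidth ** 2)) * (1.0 - u * u) ** 2
    return total
```

Clustered destinations within the bandwidth yield a higher intensity at the same destination count than dispersed ones, which is the contrast the study's intensity quintiles capture.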
Uosyte, Raimonda; Shaw, Darren J; Gunn-Moore, Danielle A; Fraga-Manteiga, Eduardo; Schwarz, Tobias
2015-01-01
Turbinate destruction is an important diagnostic criterion in canine and feline nasal computed tomography (CT). However, decreased turbinate visibility may also be caused by technical CT settings and nasal fluid. The purpose of this experimental, crossover study was to determine whether fluid reduces conspicuity of canine and feline nasal turbinates in CT and, if so, whether CT settings can maximize conspicuity. Three canine and three feline cadaver heads were used. Nasal slabs were CT-scanned before and after submerging them in a water bath; using sequential, helical, and ultrahigh-resolution modes; with images in low, medium, and high frequency image reconstruction kernels; and with application of additional posterior fossa optimization and high-contrast-enhancing filters. Visible turbinate length was measured by a single observer using manual tracing. Nasal density heterogeneity was measured using the standard deviation (SD) of mean nasal density from a region of interest in each nasal cavity. Linear mixed-effect models using the R package 'nlme', multivariable models, and standard post hoc Tukey pairwise comparisons were used to investigate the effect of several variables (nasal content, scanning mode, image reconstruction kernel, application of post-reconstruction filters) on measured visible total turbinate length and the SD of mean nasal density. All canine and feline water-filled nasal slabs showed significantly decreased visibility of nasal turbinates (P < 0.001). High frequency kernels provided the best turbinate visibility and highest SD for aerated nasal slabs, whereas medium frequency kernels were optimal for water-filled nasal slabs. Scanning mode and filter application had no effect on turbinate visibility. PMID:25867935
SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.
Cao, Yuan; He, Haibo; Man, Hong
2012-08-01
In this paper, we propose SOMKE, a novel method for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, traditional KDE methods are infeasible because of their high computational cost, processing time, and memory requirements. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters with which to estimate the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations, namely creating and merging the SOM sequences. The creation phase produces the SOM sequence entries for windows of the data, which captures the clustering information of the incoming data streams. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream, and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
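The core SOMKE step, estimating a density from SOM prototypes rather than raw stream samples, can be sketched as a hit-count-weighted KDE. The 1-D Gaussian kernel and the single fixed bandwidth below are illustrative simplifications, not the paper's exact settings.

```python
import math

def som_kde(x, prototypes, hits, bandwidth=1.0):
    """Density estimate at x from SOM summary data: one Gaussian kernel is
    placed at each prototype (node weight vector), weighted by that node's hit
    count, so the full stream never needs to be stored."""
    n = sum(hits)
    dens = 0.0
    for p, h in zip(prototypes, hits):
        u = (x - p) / bandwidth
        dens += h * math.exp(-0.5 * u * u) / (bandwidth * math.sqrt(2.0 * math.pi))
    return dens / n
```

Merging consecutive sequence entries, the second SOMKE operation, would simply pool the prototype/hit-count lists of adjacent windows before evaluating this estimator.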
Convergence behavior of the random phase approximation renormalized correlation energy
NASA Astrophysics Data System (ADS)
Bates, Jefferson E.; Sensenig, Jonathon; Ruzsinszky, Adrienn
2017-05-01
Based on the random phase approximation (RPA), RPA renormalization [J. E. Bates and F. Furche, J. Chem. Phys. 139, 171103 (2013), 10.1063/1.4827254] is a robust many-body perturbation theory that works for molecules and materials because it does not diverge as the Kohn-Sham gap approaches zero. Additionally, RPA renormalization enables the simultaneous calculation of RPA and beyond-RPA correlation energies since the total correlation energy is the sum of a series of independent contributions. The first-order approximation (RPAr1) yields the dominant beyond-RPA contribution to the correlation energy for a given exchange-correlation kernel, but systematically underestimates the total beyond-RPA correction. For both the homogeneous electron gas model and real systems, we demonstrate numerically that RPA renormalization beyond first order converges monotonically to the infinite-order beyond-RPA correlation energy for several model exchange-correlation kernels and that the rate of convergence is principally determined by the choice of the kernel and spin polarization of the ground state. The monotonic convergence is rationalized from an analysis of the RPA renormalized correlation energy corrections, assuming the exchange-correlation kernel and response functions satisfy some reasonable conditions. For spin-unpolarized atoms, molecules, and bulk solids, we find that RPA renormalization is typically converged to 1 meV error or less by fourth order regardless of the band gap or dimensionality. Most spin-polarized systems converge at a slightly slower rate, with errors on the order of 10 meV at fourth order and typically requiring up to sixth order to reach 1 meV error or less. Slowest to converge, however, open-shell atoms present the most challenging case and require many higher orders to converge.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described.
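For concreteness, a Gaussian kernel network in the sense above has one shared, fixed width, with only centers and outer coefficients varying (a Gaussian RBF network would additionally let each unit's width vary). A minimal sketch with illustrative parameters:

```python
import math

def gaussian_kernel_net(x, centers, coeffs, width):
    """Input-output function of a Gaussian kernel network: all units share the
    same fixed width; only the centers and linear coefficients vary."""
    return sum(c * math.exp(-((x - t) / width) ** 2)
               for c, t in zip(coeffs, centers))
```

Two such networks with identical centers and coefficients but different widths compute different functions, consistent with the disjointness result quoted in the abstract.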
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Deyu
A systematic route to go beyond the exact exchange plus random phase approximation (RPA) is to include a physical exchange-correlation kernel in the adiabatic-connection fluctuation-dissipation theorem. Previously, [D. Lu, J. Chem. Phys. 140, 18A520 (2014)], we found that non-local kernels with a screening length depending on the local Wigner-Seitz radius, r s(r), suffer an error associated with a spurious long-range repulsion in van der Waals bounded systems, which deteriorates the binding energy curve as compared to RPA. Here, we analyze the source of the error and propose to replace r s(r) by a global, average r s in the kernel. Exemplary studies with the Corradini, del Sole, Onida, and Palummo kernel show that while this change does not affect the already outstanding performance in crystalline solids, using an average r s significantly reduces the spurious long-range tail in the exchange-correlation kernel in van der Waals bounded systems. Finally, when this method is combined with further corrections using local dielectric response theory, the binding energy of the Kr dimer is improved three times as compared to RPA.
Wu, Jianlan; Cao, Jianshu
2013-07-28
We apply a new formalism to derive the higher-order quantum kinetic expansion (QKE) for studying dissipative dynamics in a general quantum network coupled with an arbitrary thermal bath. The dynamics of the system population is described by a time-convoluted kinetic equation, where the time-nonlocal rate kernel is systematically expanded in powers of the off-diagonal elements of the system Hamiltonian. At second order, the rate kernel recovers the expression of the noninteracting-blip approximation method. The higher-order corrections in the rate kernel account for the effects of multi-site quantum coherence and bath relaxation. In a quantum harmonic bath, the rate kernels of different orders are derived analytically. As demonstrated by four examples, the higher-order QKE can reliably predict quantum dissipative dynamics, comparing well with the hierarchical equation approach. More importantly, the higher-order rate kernels can distinguish and quantify distinct nontrivial quantum coherent effects, such as long-range energy transfer from quantum tunneling and quantum interference arising from the phase accumulation of interactions.
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
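A data-independent random projection of the kind CKRL relies on can be sketched as a Gaussian random matrix with entry variance 1/d_low, which preserves squared distances in expectation; this is a generic stand-in for the spherically random rotation and coordinate sampling described above, not the paper's exact construction.

```python
import math
import random

def random_projection_matrix(d_high, d_low, seed=0):
    """d_low x d_high matrix with i.i.d. N(0, 1/d_low) entries, so that
    E[||R x||^2] = ||x||^2 for any fixed x (Johnson-Lindenstrauss style)."""
    rng = random.Random(seed)
    scale = 1.0 / math.sqrt(d_low)
    return [[rng.gauss(0.0, scale) for _ in range(d_high)]
            for _ in range(d_low)]

def project(R, x):
    """Map a high-dimensional feature vector x into the random subspace."""
    return [sum(r_ij * x_j for r_ij, x_j in zip(row, x)) for row in R]

R = random_projection_matrix(100, 10)
y = project(R, [1.0] * 100)
```

Policy iteration then operates on the 10-dimensional `y` instead of the original 100-dimensional features, which is where the computational savings reported above come from.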
Boundary conditions for gas flow problems from anisotropic scattering kernels
NASA Astrophysics Data System (ADS)
To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline
2015-10-01
The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity, temperature, and discontinuities including velocity slip and temperature jump at the wall are obtained. Two scattering kernels, the Dadzie and Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.
P- and S-wave Receiver Function Imaging with Scattering Kernels
NASA Astrophysics Data System (ADS)
Hansen, S. M.; Schmandt, B.
2017-12-01
Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numeric simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources resulting in a global scale forward problem. Various approximation strategies have been proposed to reduce the computational burden such as hybrid methods that embed a heterogeneous regional scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels which are typically used for full waveform inversion. The forward problem is approximated using ray theory yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. 
This is demonstrated using a receiver function dataset from the Central California Seismic Experiment, which shows several dipping interfaces related to the tectonic assembly of this region. [Figure 1: Scattering kernel examples for three receiver function phases: (A) direct P-to-s (Ps), (B) direct S-to-p, and (C) free-surface PP-to-s (PPs).]
Three-Dimensional Sensitivity Kernels of Z/H Amplitude Ratios of Surface and Body Waves
NASA Astrophysics Data System (ADS)
Bao, X.; Shen, Y.
2017-12-01
The ellipticity of Rayleigh wave particle motion, or Z/H amplitude ratio, has received increasing attention in inversion for shallow Earth structures. Previous studies of the Z/H ratio assumed one-dimensional (1D) velocity structures beneath the receiver, ignoring the effects of three-dimensional (3D) heterogeneities on wave amplitudes. This simplification may introduce bias in the resulting models. Here we present 3D sensitivity kernels of the Z/H ratio to Vs, Vp, and density perturbations, based on finite-difference modeling of wave propagation in 3D structures and the scattering-integral method. Our full-wave approach overcomes two main issues in previous studies of Rayleigh wave ellipticity: (1) the finite-frequency effects of wave propagation in 3D Earth structures, and (2) isolation of the fundamental mode Rayleigh waves from Rayleigh wave overtones and converted Love waves. In contrast to the 1D depth sensitivity kernels in previous studies, our 3D sensitivity kernels exhibit patterns that vary with azimuths and distances to the receiver. The laterally-summed 3D sensitivity kernels and 1D depth sensitivity kernels, based on the same homogeneous reference model, are nearly identical with small differences that are attributable to the single period of the 1D kernels and a finite period range of the 3D kernels. We further verify the 3D sensitivity kernels by comparing the predictions from the kernels with the measurements from numerical simulations of wave propagation for models with various small-scale perturbations. We also calculate and verify the amplitude kernels for P waves. This study shows that both Rayleigh and body wave Z/H ratios provide vertical and lateral constraints on the structure near the receiver. With seismic arrays, the 3D kernels afford a powerful tool to use the Z/H ratios to obtain accurate and high-resolution Earth models.
Raihan, Mohammad Sharif; Liu, Jie; Huang, Juan; Guo, Huan; Pan, Qingchun; Yan, Jianbing
2016-08-01
Sixteen major QTLs regulating maize kernel traits were mapped in multiple environments, and one of them, qKW-9.2, was restricted to 630 kb, harboring 28 putative gene models. To elucidate the genetic basis of kernel traits, a quantitative trait locus (QTL) analysis was conducted in a maize recombinant inbred line population derived from a cross between two diverse parents, Zheng58 and SK, evaluated across eight environments. Construction of a high-density linkage map was based on 13,703 single-nucleotide polymorphism markers, covering 1860.9 cM of the whole genome. In total, 18, 26, 23, and 19 QTLs for kernel length, width, thickness, and 100-kernel weight, respectively, were detected on the basis of a single-environment analysis, and each QTL explained 3.2-23.7 % of the phenotypic variance. Sixteen major QTLs, which could explain greater than 10 % of the phenotypic variation, were mapped in multiple environments, implying that kernel traits might be controlled by many minor and multiple major QTLs. The major QTL qKW-9.2, with a physical confidence interval of 1.68 Mbp and affecting kernel width, was then selected for fine mapping using heterogeneous inbred families. Finally, the location of the underlying gene was narrowed down to 630 kb, harboring 28 putative candidate-gene models. This information will enhance molecular breeding for kernel traits and simultaneously assist the cloning of the gene underlying this QTL, helping to reveal the genetic basis of kernel development in maize.
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter, and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.
2018-05-01
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
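The Balsara switch mentioned above can be sketched as a scalar limiter computed per particle; the exact signature and the floor term below are illustrative, not StarSmasher's actual code.

```python
def balsara_switch(div_v, curl_v, h, c_s, eps=1.0e-4):
    """Balsara limiter f in [0, 1]: keeps artificial viscosity active in
    compressive shocks (|div v| dominant) but suppresses it in pure shear
    flows (|curl v| dominant), reducing spurious shear viscosity.
    The eps * c_s / h term is a small floor preventing division by zero."""
    return abs(div_v) / (abs(div_v) + abs(curl_v) + eps * c_s / h)
```

The factor f multiplies the viscous term in the momentum equation, so f is near 1 in a shock and near 0 in a shear layer, which is why the switch prevents unphysical damping in differentially rotating flows.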
Analysis of Drude model using fractional derivatives without singular kernels
NASA Astrophysics Data System (ADS)
Jiménez, Leonardo Martínez; García, J. Juan Rosales; Contreras, Abraham Ortega; Baleanu, Dumitru
2017-11-01
We report a study exploring the fractional Drude model in the time domain, using fractional derivatives without singular kernels: the Caputo-Fabrizio (CF) derivative and a fractional derivative with a stretched Mittag-Leffler kernel. It is shown that the velocity and current density of electrons moving through a metal depend on both the time and the fractional order 0 < γ ≤ 1. Due to the non-singular fractional kernels, it is possible to consider complete memory effects in the model, which appear neither in the ordinary model nor in the fractional Drude model with the Caputo fractional derivative. A comparison is also made between these two representations of the fractional derivatives, resulting in a considerable difference when γ < 0.8.
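For reference, the two non-singular operators named above are commonly defined as follows (standard forms from the literature; the paper's exact normalization functions $M(\gamma)$ and $B(\gamma)$ are not reproduced here):

$$
{}^{\mathrm{CF}}D_t^{\gamma} f(t) \;=\; \frac{M(\gamma)}{1-\gamma}\int_0^t f'(\tau)\,
\exp\!\left(-\frac{\gamma\,(t-\tau)}{1-\gamma}\right) d\tau, \qquad 0<\gamma<1,
$$

$$
{}^{\mathrm{ABC}}D_t^{\gamma} f(t) \;=\; \frac{B(\gamma)}{1-\gamma}\int_0^t f'(\tau)\,
E_{\gamma}\!\left(-\frac{\gamma\,(t-\tau)^{\gamma}}{1-\gamma}\right) d\tau,
$$

where $E_{\gamma}$ is the one-parameter Mittag-Leffler function. Both kernels are bounded at $\tau = t$, unlike the power-law kernel of the Caputo derivative, which is what permits the "complete memory" behavior discussed in the abstract.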
Travel-time sensitivity kernels in long-range propagation.
Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A
2009-11-01
Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.
Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F
2012-01-01
Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
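The kernel interpolation of feature activity onto a depth grid described above can be sketched as Nadaraya-Watson kernel-weighted smoothing; the Gaussian kernel and width below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def kernel_interpolate(depths, activity, query_depths, width=0.3):
    """Kernel-weighted (Nadaraya-Watson) interpolation of feature
    activity onto query depths.  Wider kernels smooth more at the
    cost of depth resolution -- the trade-off noted in the abstract.
    """
    d = query_depths[:, None] - depths[None, :]   # pairwise depth offsets
    w = np.exp(-0.5 * (d / width) ** 2)           # Gaussian kernel weights
    return (w * activity[None, :]).sum(axis=1) / w.sum(axis=1)
```

Because the weights are normalized, a constant feature activity is reproduced exactly at any query depth.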
Factorization and the synthesis of optimal feedback kernels for differential-delay systems
NASA Technical Reports Server (NTRS)
Milman, Mark M.; Scheid, Robert E.
1987-01-01
A combination of ideas from the theories of operator Riccati equations and Volterra factorizations leads to the derivation of a novel, relatively simple set of hyperbolic equations which characterize the optimal feedback kernel for the finite-time regulator problem for autonomous differential-delay systems. Analysis of these equations elucidates the underlying structure of the feedback kernel and leads to the development of fast and accurate numerical methods for its computation. Unlike traditional formulations based on the operator Riccati equation, the gain is characterized by means of classical solutions of the derived set of equations. This leads to the development of approximation schemes which are analogous to what has been accomplished for systems of ordinary differential equations with given initial conditions.
A simple method for computing the relativistic Compton scattering kernel for radiative transfer
NASA Technical Reports Server (NTRS)
Prasad, M. K.; Kershaw, D. S.; Beason, J. D.
1986-01-01
Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen
2014-09-01
For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially on large datasets; thus the existing algorithms cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure from different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
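The kernel target alignment criterion that the method optimizes can be illustrated with a minimal sketch: the Frobenius-inner-product alignment between a Gaussian kernel matrix and the ideal target yyᵀ (function names and the uncentered variant are our own simplifications):

```python
import numpy as np

def gaussian_kernel_matrix(X, gamma):
    """Gaussian (RBF) kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_target_alignment(K, y):
    """Alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F), y in {-1, +1}^n.
    Larger values mean the kernel better matches the label structure."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))
```

Maximizing this quantity over the Gaussian width parameter is the (generally non-convex) problem the paper reformulates for global optimization.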
Guo, Qi; Shen, Shu-Ting
2016-04-29
There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with no-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities. Therefore, studying the application of reproducing kernels is advantageous. We apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other current methods. A two-dimensional reproducing kernel function in space is constructed and applied in computing the solution of the two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages, such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.
Droplet Image Super Resolution Based on Sparse Representation and Kernel Regression
NASA Astrophysics Data System (ADS)
Zou, Zhenzhen; Luo, Xinghong; Yu, Qiang
2018-02-01
Microgravity and containerless conditions, which are produced via electrostatic levitation combined with a drop tube, are important when studying the intrinsic properties of new metastable materials. Generally, temperature and image sensors can be used to measure the changes of sample temperature, morphology and volume. Then, the specific heat, surface tension, viscosity changes and sample density can be obtained. Considering that the falling speed of the material sample droplet is approximately 31.3 m/s when it reaches the bottom of a 50-meter-high drop tube, a high-speed camera with a collection rate of up to 10^6 frames/s is required to image the falling droplet. However, in the high-speed mode, very few pixels, approximately 48-120, are obtained in each exposure time, which results in low image quality. Super-resolution image reconstruction is an algorithm that provides finer details than the sampling grid of a given imaging device by increasing the number of pixels per unit area in the image. In this work, we demonstrate the application of single-image super-resolution reconstruction to microgravity and electrostatic levitation experiments for the first time. Using an image super-resolution method based on sparse representation, a low-resolution droplet image can be reconstructed. Employing Yang's related-dictionary model, high- and low-resolution image patches were combined in dictionary training, and related high- and low-resolution dictionaries were obtained. An online double-sparse dictionary training algorithm was used to learn the related dictionaries, overcoming the shortcomings of the traditional training algorithm with small image patches. During the image reconstruction stage, a kernel regression algorithm is added, which effectively overcomes the edge blurring of Yang's method.
Méndez, Nelson; Oviedo-Pastrana, Misael; Mattar, Salim; Caicedo-Castro, Isaac; Arrieta, German
2017-01-01
The Zika virus disease (ZVD) has had a huge impact on public health in Colombia, both for the number of people affected and for the Guillain-Barre syndrome (GBS) and microcephaly cases associated with ZVD. A retrospective descriptive study was carried out in which we analyzed the epidemiological situation of ZVD and its association with microcephaly and GBS during a 21-month period, from October 2015 to June 2017. The variables studied were: (i) ZVD cases, (ii) ZVD cases in pregnant women, (iii) laboratory-confirmed ZVD in pregnant women, (iv) ZVD cases associated with microcephaly, (v) laboratory-confirmed ZVD associated with microcephaly, and (vi) ZVD cases associated with GBS. Average numbers of cases, attack rates (AR) and proportions were also calculated. The studied variables were plotted by epidemiological weeks and months. The distribution of ZVD cases in Colombia was mapped over time using a kernel density estimator and QGIS software; we adopted kernel ridge regression (KRR) with the Gaussian kernel to estimate the number of Guillain-Barre cases given the number of ZVD cases. One hundred eight thousand eighty-seven ZVD cases had been reported in Colombia, including 19,963 (18.5%) in pregnant women, 710 (0.66%) associated with microcephaly (AR, 4.87 cases per 10,000 live births) and 453 (0.42%) associated with GBS (AR, 41.9 GBS cases per 10,000 ZVD cases). The cases of GBS appear to have increased in parallel with the cases of ZVD, while cases of microcephaly appeared 5 months after recognition of the outbreak. The kernel density map shows that throughout the study period, the states most affected by the Zika outbreak in Colombia were mainly the San Andrés and Providencia islands, Casanare, Norte de Santander, Arauca and Huila. The KRR shows that there is no proportional relationship between the number of GBS and ZVD cases.
During cross-validation, the RMSE achieved for the second-order polynomial kernel, the linear kernel, the sigmoid kernel, and the Gaussian kernel was 9.15, 9.2, 10.7, and 7.2, respectively. This study updates the epidemiological analysis of the ZVD situation in Colombia, describes the geographical distribution of ZVD, and shows the functional relationship between ZVD cases and GBS.
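The KRR step with a Gaussian kernel can be sketched in a few lines; the regularization strength and bandwidth below are illustrative values, not those fitted in the study:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian kernel between 1-D sample vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def krr_fit_predict(x_train, y_train, x_test, lam=1e-3, sigma=1.0):
    """Kernel ridge regression: solve (K + lam*I) alpha = y on the
    training data, then predict with K_test @ alpha."""
    K = gaussian_kernel(x_train, x_train, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    return gaussian_kernel(x_test, x_train, sigma) @ alpha
```

Swapping `gaussian_kernel` for a polynomial, linear, or sigmoid kernel reproduces the model comparison reported above.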
Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris
2015-12-22
Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity and kernel approaches based on fixed bandwidths, cross-validated bandwidths including isotropic and anisotropic kernel functions, and adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on moderate-to-vigorous physical activity (MVPA) of 400 2- to 9.9-year-old children who participated in the IDEFICS study. Models were stratified by sex and age groups, i.e. pre-school children (2 to <6 years) and school children (6-9.9 years), and were adjusted for age, body mass index (BMI), education and safety concerns of parents, season and valid weartime of accelerometers. Associations between intensity measures and MVPA differed strongly by network-distance, with stronger effects found for larger network-distances. Simple intensity revealed smaller effect estimates and poorer goodness-of-fit compared to kernel approaches.
The smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found a strong variation in the association between the built environment and PA of children depending on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial scales and kernel intensity methods might reduce methodological limitations in assessing opportunities for PA in the built environment.
Tarjan, Lily M; Tinker, M. Tim
2016-01-01
Parametric and nonparametric kernel methods dominate studies of animal home ranges and space use. Most existing methods are unable to incorporate information about the underlying physical environment, leading to poor performance in excluding areas that are not used. Using radio-telemetry data from sea otters, we developed and evaluated a new algorithm for estimating home ranges (hereafter Permissible Home Range Estimation, or “PHRE”) that reflects habitat suitability. We began by transforming sighting locations into relevant landscape features (for sea otters, coastal position and distance from shore). Then, we generated a bivariate kernel probability density function in landscape space and back-transformed this to geographic space in order to define a permissible home range. Compared to two commonly used home range estimation methods, kernel densities and local convex hulls, PHRE better excluded unused areas and required a smaller sample size. Our PHRE method is applicable to species whose ranges are restricted by complex physical boundaries or environmental gradients and will improve understanding of habitat-use requirements and, ultimately, aid in conservation efforts.
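The bivariate kernel density step of PHRE (before back-transforming to geographic space) can be sketched as a plain Gaussian KDE on a grid; the grid, bandwidth, and function names are our own illustrative assumptions, and the landscape-space transform is omitted:

```python
import numpy as np

def bivariate_kde(points, grid_x, grid_y, bandwidth=1.0):
    """Bivariate Gaussian kernel density evaluated on a regular grid.
    Each sighting contributes an isotropic Gaussian bump; the result
    integrates to ~1 when the grid comfortably covers the data."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    dens = np.zeros_like(gx)
    for px, py in points:
        dens += np.exp(-((gx - px) ** 2 + (gy - py) ** 2)
                       / (2.0 * bandwidth**2))
    dens /= 2.0 * np.pi * bandwidth**2 * len(points)
    return dens
```

Thresholding this density at the level enclosing, say, 95% of the probability mass yields a home-range contour of the kind the estimators above produce.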
Improved response functions for gamma-ray skyshine analyses
NASA Astrophysics Data System (ADS)
Shultis, J. K.; Faw, R. E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
Omnibus Risk Assessment via Accelerated Failure Time Kernel Machine Modeling
Sinnott, Jennifer A.; Cai, Tianxi
2013-01-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai et al., 2011). In this paper, we derive testing and prediction methods for KM regression under the accelerated failure time model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. PMID:24328713
NASA Astrophysics Data System (ADS)
Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord
2017-04-01
This article establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T - namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin ,Tmax ]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [ 300 K , 3000 K ] with only 9 reference temperatures.
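The L2-optimal interpolation coefficients described above are the solution of a small linear (Gram) system: minimizing ||k_T − Σⱼ aⱼ k_Tⱼ|| in L2 gives the normal equations G a = b with G_ij = ⟨k_Ti, k_Tj⟩ and b_i = ⟨k_Ti, k_T⟩. A sketch with discretized kernels (the grid and kernel shapes are illustrative, not actual Doppler broadening kernels):

```python
import numpy as np

def l2_interp_coeffs(ref_kernels, target_kernel):
    """L2-optimal coefficients a for min ||k_T - sum_j a_j k_Tj||.

    ref_kernels:   (n_ref, n_grid) reference kernels sampled on a grid.
    target_kernel: (n_grid,) kernel at the target temperature.
    Solves the Gram system G a = b from the normal equations.
    """
    G = ref_kernels @ ref_kernels.T      # Gram matrix <k_i, k_j>
    b = ref_kernels @ target_kernel      # right-hand side <k_i, k_T>
    return np.linalg.solve(G, b)
```

When the target coincides with one of the references, the solution recovers the corresponding unit coefficient vector, as expected.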
NASA Astrophysics Data System (ADS)
Chen, Y.; Ho, C.; Chang, L.
2011-12-01
In recent decades, climate change driven by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely used tools, showing possible weather conditions under pre-defined CO2 emission scenarios announced by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To bridge the gap, a statistical downscaling technique can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators and weather typing. The first two categories describe the relationships between weather factors and precipitation using, respectively, deterministic algorithms, such as linear or nonlinear regression and ANNs, and stochastic approaches, such as Markov chain theory and statistical distributions. In weather typing, the method clusters weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, and a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step. Three sub-steps were used in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types.
Third, the Markov chain transition matrices and the conditional probability density function (PDF) of precipitation, approximated by kernel density estimation, are calculated for each weather type. In the synthesis step, 100 patterns of synthetic data are generated. First, the weather type of the n-th day is determined from the results of the K-means clustering; the associated transition matrix and PDF of that weather type are then used in the following sub-steps. Second, the precipitation condition, dry or wet, is synthesized based on the transition matrix. If the synthesized condition is dry, the quantity of precipitation is zero; otherwise, the quantity is determined in the third sub-step. Third, the quantity of the synthesized precipitation is drawn as a random variable from the PDF defined above. Synthesis performance is evaluated by comparing the monthly mean curves and monthly standard deviation curves of the historical precipitation data with those of the 100 patterns of synthetic data.
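The occurrence-plus-amount synthesis loop can be sketched as a two-state (dry/wet) Markov chain with a kernel-density draw for wet-day amounts; the transition probabilities, bandwidth, and the resample-and-perturb KDE sampling shortcut are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def synthesize_precip(n_days, p_wet_given_dry, p_wet_given_wet,
                      wet_amounts, bandwidth=0.5):
    """Synthesize a daily precipitation series.

    Occurrence: two-state Markov chain (dry/wet).
    Amount: draw from a Gaussian KDE of historical wet-day amounts
    by resampling one historical value and adding kernel noise.
    """
    out = np.zeros(n_days)
    wet = False
    for t in range(n_days):
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        if wet:
            amt = rng.choice(wet_amounts) + bandwidth * rng.normal()
            out[t] = max(amt, 0.0)   # precipitation cannot be negative
    return out
```

Over a long run, the fraction of wet days converges to the chain's stationary wet probability p_wd / (1 − p_ww + p_wd).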
Ha, S; Matej, S; Ispiryan, M; Mueller, K
2013-02-01
We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators has slightly faster or approximately comparable time performance than FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.
Microwave sensing of moisture content and bulk density in flowing grain
USDA-ARS?s Scientific Manuscript database
Moisture content and bulk density were determined from measurement of the dielectric properties of flowing wheat kernels at a single microwave frequency (5.8 GHz). The measuring system consisted of two high-gain microwave patch antennas mounted on opposite sides of a rectangular chute and connected to...
Ledbetter, C A
2008-09-01
Researchers are currently developing new value-added uses for almond shells, an abundant agricultural by-product. Almond varieties are distinguished by processors as being either hard or soft shelled, but these two broad classes of almond also exhibit varietal diversity in shell morphology and physical characters. By defining more precisely the physical and chemical characteristics of almond shells from different varieties, researchers will better understand which specific shell types are best suited for specific industrial processes. Eight diverse almond accessions were evaluated in two consecutive harvest seasons for nut and kernel weight, kernel percentage and shell cracking strength. Shell bulk density was evaluated in a separate year. Harvest year by almond accession interactions were highly significant (p < 0.01) for each of the analyzed variables. Significant (p < 0.01) correlations were noted for average nut weight with kernel weight, kernel percentage and shell cracking strength. A significant (p < 0.01) negative correlation for shell cracking strength with kernel percentage was noted. In some cases shell cracking strength was independent of the kernel percentage, which suggests that either variety compositional differences or shell morphology affect the shell cracking strength. The varietal characterization of almond shell materials will assist in determining the best value-added uses for this abundant agricultural by-product.
NASA Technical Reports Server (NTRS)
Goldberg, Mitchell D.; Fleming, Henry E.
1994-01-01
An algorithm for generating deep-layer mean temperatures from satellite microwave observations is presented. Unlike traditional temperature retrieval methods, this algorithm does not require a first-guess temperature of the ambient atmosphere. By eliminating the first guess, a potential source of systematic error is removed. The algorithm is expected to yield long-term records that are suitable for detecting small changes in climate. The atmospheric contribution to the deep-layer mean temperature is given by the averaging kernel. The algorithm computes the coefficients that best approximate a desired averaging kernel from a linear combination of the satellite radiometer's weighting functions. The coefficients are then applied to the measurements to yield the deep-layer mean temperature. Three constraints were used in deriving the algorithm: (1) the sum of the coefficients must be one, (2) the noise of the product is minimized, and (3) the shape of the approximated averaging kernel is well behaved. Note that a trade-off between constraints 2 and 3 is unavoidable. The algorithm can also be used to combine measurements from a future sensor (i.e., the 20-channel Advanced Microwave Sounding Unit (AMSU)) to yield the same averaging kernel as that based on an earlier sensor (i.e., the 4-channel Microwave Sounding Unit (MSU)). This will allow a time series of deep-layer mean temperatures based on MSU measurements to be continued with AMSU measurements. The AMSU is expected to replace the MSU in 1996.
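The constrained combination of weighting functions described above can be sketched as a small linear-algebra problem. The Gaussian weighting functions, the white-noise model, and the trade-off parameter below are illustrative assumptions, not the operational MSU/AMSU values.

```python
import numpy as np

levels, channels = 40, 4
p = np.linspace(0.0, 1.0, levels)                 # normalized vertical coordinate (assumption)
centers = np.array([0.2, 0.4, 0.6, 0.8])
W = np.exp(-0.5 * ((p[:, None] - centers) / 0.15) ** 2)   # toy channel weighting functions
a_target = np.exp(-0.5 * ((p - 0.5) / 0.2) ** 2)          # desired averaging kernel

# lam trades shape fidelity (constraint 3) against noise amplification (constraint 2);
# white, uncorrelated channel noise is assumed here.
lam = 0.1
A = W.T @ W + lam * np.eye(channels)
b = W.T @ a_target

# Constraint 1 (coefficients sum to one) enforced via a Lagrange multiplier:
# solve [A 1; 1^T 0] [c; nu] = [b; 1].
ones = np.ones(channels)
K = np.block([[A, ones[:, None]], [ones[None, :], np.zeros((1, 1))]])
c = np.linalg.solve(K, np.concatenate([b, [1.0]]))[:channels]

a_approx = W @ c   # approximated averaging kernel; apply c to radiances for the product
```

Applying the same machinery with AMSU weighting functions and the MSU-based averaging kernel as `a_target` is the record-continuation idea described in the abstract.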
Efficient nonparametric n-body force fields from machine learning
NASA Astrophysics Data System (ADS)
Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro
2018-05-01
We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.
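As a flavor of the construction, a naive 2-body kernel can be written as a sum of Gaussian similarities over pairs of neighbor distances. This sketch omits the symmetrization and normalization of the paper's actual n-body kernels, and the environments and length scale are made up.

```python
import numpy as np

def two_body_kernel(env_a, env_b, sigma=0.3):
    """Toy 2-body comparison of two atomic environments.

    Each environment is an (m, 3) array of neighbor positions relative to the
    central atom; similarity is summed over all pairs of neighbor distances.
    """
    ra = np.linalg.norm(env_a, axis=1)
    rb = np.linalg.norm(env_b, axis=1)
    diff = ra[:, None] - rb[None, :]
    return np.exp(-(diff ** 2) / (2 * sigma ** 2)).sum()

rng = np.random.default_rng(7)
env_a = rng.normal(size=(8, 3))
env_b = rng.normal(size=(8, 3))
k_aa = two_body_kernel(env_a, env_a)   # self-similarity
k_ab = two_body_kernel(env_a, env_b)   # cross-similarity
```

Because a kernel of this form only sees pairwise distances, it can represent at most 2-body interactions; the paper's higher-order kernels generalize the comparison to triplets and beyond.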
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.
General relativistic screening in cosmological simulations
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Paranjape, Aseem
2016-10-01
We revisit the issue of interpreting the results of large-volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation, δ_N(k) ≈ δ_sim(k) × (k² + ℓ⁻²)/k², to convert the density field δ_sim of standard collisionless N-body simulations (initialized in the comoving synchronous gauge) into the Newtonian gauge density δ_N at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ, in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.
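The quoted gauge transformation is a simple multiplicative correction in Fourier space. A minimal sketch on a toy density grid might look like the following; the grid size, box length, and screening scale ℓ are illustrative assumptions, not values from the paper.

```python
import numpy as np

n, box, ell = 32, 1000.0, 3000.0   # grid cells, box side, screening scale (Mpc/h assumed)

rng = np.random.default_rng(1)
delta_sim = rng.normal(size=(n, n, n))   # stand-in for a simulation density field

# Wavenumbers on the FFT grid.
k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

# delta_N(k) = delta_sim(k) * (k^2 + 1/ell^2) / k^2, leaving the k = 0 mode alone.
delta_k = np.fft.fftn(delta_sim)
factor = np.ones_like(k2)
nonzero = k2 > 0
factor[nonzero] = (k2[nonzero] + ell**-2) / k2[nonzero]
delta_N = np.real(np.fft.ifftn(delta_k * factor))
```

The correction factor tends to 1 for k ≫ 1/ℓ, so only modes near and above the screening scale are modified, as the abstract's Jeans-stability interpretation suggests.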
Development of FullWave : Hot Plasma RF Simulation Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei
2017-10-01
A full-wave simulation tool modeling RF fields in hot inhomogeneous magnetized plasma is being developed. The wave equations with a linearized hot-plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot-plasma dielectric response is formulated in configuration space without limiting approximations by calculating the plasma conductivity kernel from the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of finite differences for the approximation of derivatives on an adaptive cloud of computational points; a model and results of nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full-wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of hot-plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot-plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.
NASA Astrophysics Data System (ADS)
Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.
2018-03-01
The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.
NASA Astrophysics Data System (ADS)
Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu
2018-03-01
During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which frequently change the structure and produce many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body and fine modeling of the surface, all in combination with the collision detection of virtual reality technology. Point kernels are then generated by sampling within the approximate model, and once the material and radiometric attributes are supplied, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The effectiveness and accuracy of the proposed method were verified by simulations of different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code and experimental measurements.
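The point-kernel sum itself is compact: each sampled kernel contributes an attenuated inverse-square term times a buildup factor. The source strength, attenuation coefficient, and linear buildup stand-in below are illustrative assumptions; a real implementation uses nuclide-specific data and the Geometric-Progression buildup fit mentioned above.

```python
import numpy as np

def dose_rate(kernels, strengths, detector, mu, buildup):
    """Point-kernel sum: D = sum_i S_i * B(mu*r_i) * exp(-mu*r_i) / (4*pi*r_i^2).

    kernels   : (n, 3) sampled point-kernel positions [m]
    strengths : (n,)   per-kernel source strength (arbitrary units here)
    detector  : (3,)   detector position [m]
    mu        : linear attenuation coefficient [1/m] (illustrative value)
    buildup   : callable mapping mean free paths mu*r to a buildup factor
    """
    r = np.linalg.norm(kernels - detector, axis=1)
    return np.sum(strengths * buildup(mu * r) * np.exp(-mu * r) / (4 * np.pi * r**2))

rng = np.random.default_rng(2)
kernels = rng.uniform(-0.5, 0.5, size=(1000, 3))   # samples inside a 1 m cube source
strengths = np.full(1000, 1e6 / 1000)              # total strength split over the kernels
linear_buildup = lambda mfp: 1.0 + mfp             # crude stand-in for the GP buildup fit
d = dose_rate(kernels, strengths, np.array([5.0, 0.0, 0.0]), mu=0.01, buildup=linear_buildup)
```

Refining the sampling density inside the approximate model trades computation time against accuracy, which is the role of the space-division and modeling procedures in the abstract.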
Bose-Einstein condensation on a manifold with non-negative Ricci curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akant, Levent, E-mail: levent.akant@boun.edu.tr; Ertuğrul, Emine, E-mail: emine.ertugrul@boun.edu.tr; Tapramaz, Ferzan, E-mail: waskhez@gmail.com
The Bose-Einstein condensation for an ideal Bose gas and for a dilute weakly interacting Bose gas in a manifold with non-negative Ricci curvature is investigated using the heat kernel and eigenvalue estimates of the Laplace operator. The main focus is on the nonrelativistic gas. However, the special relativistic ideal gas is also discussed. The thermodynamic limit of the heat kernel and eigenvalue estimates is taken and the results are used to derive bounds for the depletion coefficient. In the case of a weakly interacting gas, the Bogoliubov approximation is employed. The ground state is analyzed using heat kernel methods and finite size effects on the ground state energy are proposed. The justification of the c-number substitution on a manifold is given.
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
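The basic building block behind the density aggregation, plain Gaussian kernel density estimation on a grid, can be sketched as follows. The paper's "super kernel density estimation" adapts the kernel per stream; the fixed bandwidth and grid here are arbitrary assumptions.

```python
import numpy as np

def kde_grid(points, grid_x, grid_y, bandwidth):
    """Accumulate 2-D points onto a grid with a fixed Gaussian kernel."""
    xx, yy = np.meshgrid(grid_x, grid_y, indexing="ij")
    density = np.zeros_like(xx)
    for px, py in points:
        density += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * bandwidth ** 2))
    # Normalize so the density integrates to ~1 (up to boundary leakage).
    return density / (2 * np.pi * bandwidth ** 2 * len(points))

points = np.random.default_rng(3).uniform(0, 1, size=(200, 2))
grid = np.linspace(0, 1, 64)
d = kde_grid(points, grid, grid, bandwidth=0.05)
```

A streaming variant would update such a grid incrementally per frame and interpolate between frames, which is what the density-morphing contribution smooths over.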
Experimental validation of tunable features in laser-induced plasma resonators
NASA Astrophysics Data System (ADS)
Colón Quiñones, Roberto A.; Cappelli, Mark A.
2017-08-01
Measurements are presented which examine the use of gaseous plasma elements as highly-tunable resonators. The resonator considered here is a laser-induced plasma kernel generated by focusing the fundamental output from a Q-switched Nd:YAG laser through a lens and into a gas at constant pressure. The near-ellipsoidal plasma element interacts with incoming microwave radiation through excitation of low-order, electric-dipole resonances similar to those seen in metallic spheres. The tunability of these elements stems from the dispersive nature of plasmas arising from their variable electron density, electron momentum transfer collision frequency, and the concomitant effect of these properties on the excited surface plasmon resonance. Experiments were carried out in the Ku band of the microwave spectrum to characterize the scattering properties of these resonators for different values of electron density. The experimental results are compared with results from theoretical approximations and finite element method electromagnetic simulations. The described tunable resonators have the potential to be used as the building blocks in a new class of all-plasma metamaterials with fully three-dimensional structural flexibility.
Online selective kernel-based temporal difference learning.
Chen, Xingguo; Gao, Yang; Wang, Ruili
2013-12-01
In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex than other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking whether a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time than other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD can reach a competitive ultimate optimum compared with the up-to-date algorithms.
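The kernel-distance-based sparsification test reduces to comparing each incoming sample against the dictionary in feature space. This sketch uses a Gaussian kernel and an arbitrary threshold, both assumptions rather than the paper's settings.

```python
import numpy as np

def gauss_k(x, y, sigma=0.5):
    """Gaussian kernel between two samples."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def sparsify(stream, threshold=0.6):
    """Online sparsification: keep a sample only if it is kernel-distant
    from every sample already in the dictionary."""
    dictionary = []
    for x in stream:
        # Squared feature-space distance: k(x,x) + k(d,d) - 2k(x,d) = 2 - 2k(x,d).
        if all(2 - 2 * gauss_k(x, d) > threshold for d in dictionary):
            dictionary.append(x)
    return dictionary

stream = np.random.default_rng(4).uniform(-1, 1, size=(500, 2))
D = sparsify(stream)
```

Each incoming sample costs one pass over the current dictionary, so when the dictionary saturates the per-sample cost stays bounded, consistent with the O(n) claim for the procedure as a whole.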
Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods
NASA Astrophysics Data System (ADS)
Rusmanugroho, H.; Tromp, J.
2014-12-01
Recent studies show that isotropic seismic imaging based on the adjoint method reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration such as Reverse Time Migration (RTM). Here, we derive new expressions of sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, where the tilt angle θ is equivalent to 90°, and for Tilted Transverse Isotropy (TTI), these parameters depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two approaches. Individual kernels ("images") are numerically constructed based on the interaction between the regular and adjoint wavefields in smoothed models, which are in practice estimated through Full-Waveform Inversion (FWI). The final image is obtained by summing all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of the sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okumura, Teppei; Seljak, Uroš; McDonald, Patrick
Measurement of redshift-space distortions (RSD) offers an attractive method to directly probe the cosmic growth history of density perturbations. A distribution function approach, where RSD can be written as a sum over density-weighted velocity moment correlators, has recently been developed. In this paper we use results of N-body simulations to investigate the individual contributions and convergence of this expansion for dark matter. If the series is expanded as a function of powers of μ, the cosine of the angle between the Fourier mode and the line of sight, then there are a finite number of terms contributing at each order. We present these terms and investigate their contribution to the total as a function of wavevector k. For μ² the correlation between density and momentum dominates on large scales. Higher-order corrections, which act as a Finger-of-God (FoG) term, contribute 1% at k ∼ 0.015 h Mpc⁻¹ and 10% at k ∼ 0.05 h Mpc⁻¹ at z = 0, while for k > 0.15 h Mpc⁻¹ they dominate and make the total negative. These higher-order terms are dominated by density-energy density correlations, which contribute negatively to the power, while the contribution from the vorticity part of the momentum density auto-correlation adds to the total power but is an order of magnitude lower. For the μ⁴ term the dominant contribution on large scales is the scalar part of the momentum density auto-correlation, while higher-order terms dominate for k > 0.15 h Mpc⁻¹. For μ⁶ and μ⁸ we find very little power for k < 0.15 h Mpc⁻¹, shooting up by 2-3 orders of magnitude between k < 0.15 h Mpc⁻¹ and k < 0.4 h Mpc⁻¹. We also compare the expansion to the full 2-d P^ss(k, μ), as well as to the monopole, quadrupole, and hexadecapole integrals of P^ss(k, μ).
For these statistics an infinite number of terms contribute, and we find that the expansion achieves percent-level accuracy for kμ < 0.15 h Mpc⁻¹ at 6th order, but breaks down on smaller scales because the series is no longer perturbative. We explore resummation of the terms into FoG kernels, which extends the convergence by up to a factor of 2 in scale. We find that the FoG kernels are approximately Lorentzian, with velocity dispersions around 600 km/s at z = 0.
Influence of moisture content on physical properties of minor millets.
Balasubramanian, S; Viswanathan, R
2010-06-01
Physical properties including 1000-kernel weight, bulk density, true density, porosity, angle of repose, coefficient of static friction, coefficient of internal friction and grain hardness were determined for foxtail millet, little millet, kodo millet, common millet, barnyard millet and finger millet in the moisture content range of 11.1 to 25% db. Thousand-kernel weight increased from 2.3 to 6.1 g and angle of repose increased from 25.0 to 38.2°. Bulk density decreased from 868.1 to 477.1 kg/m³ and true density from 1988.7 to 884.4 kg/m³ for all minor millets over the moisture range of 11.1 to 25%. Porosity decreased from 63.7 to 32.5%. Coefficient of static friction of minor millets against a mild steel surface increased from 0.253 to 0.728 and coefficient of internal friction was in the range of 1.217 to 1.964 over the moisture range studied. Grain hardness decreased from 30.7 to 12.4 for all minor millets when moisture content was increased from 11.1 to 25% db.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacquelin, Mathias; De Jong, Wibe A.; Bylaska, Eric J.
2017-07-03
The Ab Initio Molecular Dynamics (AIMD) method allows scientists to treat the dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. This extremely important method has tremendous computational requirements, because the electronic Schrödinger equation, approximated using Kohn-Sham Density Functional Theory (DFT), is solved at every time step. With the advent of manycore architectures, application developers have a significant amount of processing power within each compute node that can only be exploited through massive parallelism. A compute-intensive application such as AIMD forms a good candidate to leverage this processing power. In this paper, we focus on adding thread-level parallelism to the plane-wave DFT methodology implemented in NWChem. Through a careful optimization of tall-skinny matrix products, which are at the heart of the Lagrange multiplier and nonlocal pseudopotential kernels, as well as 3D FFTs, our OpenMP implementation delivers excellent strong scaling on the latest Intel Knights Landing (KNL) processor. We assess the efficiency of our Lagrange multiplier kernels by building a Roofline model of the platform, and verify that our implementation is close to the roofline for various problem sizes. Finally, we present strong scaling results on the complete AIMD simulation for a 64-water-molecule test case, which scales up to all 68 cores of the Knights Landing processor.
Wartmann, Flurina M; Purves, Ross S; van Schaik, Carel P
2010-04-01
Quantification of the spatial needs of individuals and populations is vitally important for management and conservation. Geographic information systems (GIS) have recently become important analytical tools in wildlife biology, improving our ability to understand animal movement patterns, especially when very large data sets are collected. This study aims at combining the field of GIS with primatology to model and analyse space-use patterns of wild orang-utans. Home ranges of female orang-utans in the Tuanan Mawas forest reserve in Central Kalimantan, Indonesia were modelled with kernel density estimation methods. Kernel results were compared with minimum convex polygon estimates, and were found to perform better, because they were less sensitive to sample size and produced more reliable estimates. Furthermore, daily travel paths were calculated from 970 complete follow days. Annual ranges for the resident females were approximately 200 ha and remained stable over several years; total home range size was estimated to be 275 ha. On average, each female shared a third of her home range with each neighbouring female. Orang-utan females in Tuanan built their night nest on average 414 m away from the morning nest, whereas average daily travel path length was 777 m. A significant effect of fruit availability on day path length was found. Sexually active females covered longer distances per day and may also temporarily expand their ranges.
Carbothermic Synthesis of ~820-μm UN Kernels. Investigation of Process Variables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindemer, Terrence; Silva, Chinthaka M; Henry, Jr, John James
2015-06-01
This report details the continued investigation of process variables involved in converting sol-gel-derived urania-carbon microspheres to ~820-μm-dia. UN fuel kernels in flow-through, vertical refractory-metal crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO3-H2O-C microspheres in Ar and H2-containing gases, conversion of the resulting UO2-C kernels to dense UO2:2UC in the same gases and vacuum, and its conversion in N2 to UC1-xNx. The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing the precursor UO2:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC1-xNx at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Decreasing the UC1-xNx kernel carbide component via HCN evolution was shown to be quantitatively consistent with present and past experiments and to be the only useful application of H2 in the entire process.
Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.
Poon, Art F Y
2015-09-01
The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
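The ABC idea underlying kernel-ABC can be illustrated with a rejection sampler on a toy one-dimensional model. The paper's tree-shape kernel distance is replaced here by a plain Euclidean distance on a summary statistic, so this sketch only gestures at the method; the model, prior, and tolerance are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# "Observed" data summary from a model with true parameter theta = 2.0.
observed = rng.normal(loc=2.0, scale=1.0, size=100).mean()

def simulate(theta):
    """Simulate data under candidate theta and return its summary statistic."""
    return rng.normal(loc=theta, scale=1.0, size=100).mean()

# ABC rejection: draw from a flat prior, keep candidates whose simulated
# summary lies within the tolerance of the observed one.
prior_draws = rng.uniform(0.0, 4.0, size=5000)
tolerance = 0.1
accepted = [t for t in prior_draws if abs(simulate(t) - observed) < tolerance]
posterior_mean = np.mean(accepted)
```

In kernel-ABC, the scalar summary and Euclidean distance are replaced by a reconstructed phylogeny and the subset tree kernel, but the accept/reject logic is the same, which is why no likelihood ever needs to be evaluated.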
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Schölkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
Explaining Support Vector Machines: A Color Based Nomogram
Van Belle, Vanya; Van Calster, Ben; Van Huffel, Sabine; Suykens, Johan A. K.; Lisboa, Paulo
2016-01-01
Problem setting: Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods are used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models.
Objective: In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables.
Results: Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of polynomial kernel, width of RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable.
Conclusions: This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package, and two apps and a movie are provided to illustrate the possibilities offered by the method. PMID:27723811
Retrieval of the aerosol size distribution in the complex anomalous diffraction approximation
NASA Astrophysics Data System (ADS)
Franssens, Ghislain R.
This contribution reports some recently achieved results in aerosol size distribution retrieval in the complex anomalous diffraction approximation (ADA) to Mie scattering theory. This approximation is valid for spherical particles that are large compared to the wavelength and have a refractive index close to 1. The ADA kernel is compared with the exact Mie kernel. Despite being a simple approximation, the ADA appears to have practical value for the retrieval of the larger modes of tropospheric and lower stratospheric aerosols. The ADA has the advantage over Mie theory that an analytic inversion of the associated Fredholm integral equation becomes possible. In addition, spectral inversion in the ADA can be formulated as a well-posed problem. In this way, a new inverse formula was obtained, which allows the direct computation of the size distribution as an integral over the spectral extinction function. This formula is valid for particles that both scatter and absorb light, and it also takes the spectral dispersion of the refractive index into account. Some details of the numerical implementation of the inverse formula are illustrated using a modified gamma test distribution. Special attention is given to the integration of spectrally truncated discrete extinction data with errors.
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. In this way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat ΛCDM (Λ cold dark matter). Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
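The Box-Cox family mentioned above has a standard maximum-likelihood recipe for choosing the transformation parameter; a minimal sketch on a synthetic log-normal sample (the grid search and the sample are illustrative, not the paper's optimizer):

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform: (x^lam - 1)/lam, with the log limit at lam -> 0."""
    return np.log(x) if abs(lam) < 1e-8 else (x**lam - 1.0) / lam

def profile_loglik(x, lam):
    """Gaussian profile log-likelihood of the transformed sample."""
    y = boxcox(x, lam)
    return -0.5 * len(x) * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

# Log-normal sample: log(x) is exactly Gaussian, so the maximum-likelihood
# lambda should land near 0.
rng = np.random.default_rng(1)
x = np.exp(rng.normal(size=5000))
grid = np.linspace(-1.0, 1.0, 201)
lam_best = grid[np.argmax([profile_loglik(x, lam) for lam in grid])]
assert abs(lam_best) < 0.15
```

The quality check the paper proposes (how well credible regions are reproduced) goes beyond this one-parameter sketch.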
Detonability of turbulent white dwarf plasma: Hydrodynamical models at low densities
NASA Astrophysics Data System (ADS)
Fenn, Daniel
The origins of Type Ia supernovae (SNe Ia) remain an unsolved problem of contemporary astrophysics. Decades of research indicate that these supernovae arise from thermonuclear runaway in the degenerate material of white dwarf stars; however, the mechanism of these explosions is unknown. It is also unclear what the progenitors of these objects are. These missing elements are vital components of the initial conditions of supernova explosions, and are essential to understanding these events. A requirement of any successful SN Ia model is that a sufficient portion of the white dwarf plasma must be brought under conditions conducive to explosive burning. Our aim is to identify the conditions required to trigger detonations in turbulent, carbon-rich degenerate plasma at low densities. We study this problem by modeling the hydrodynamic evolution of a turbulent region filled with a carbon/oxygen mixture at a density, temperature, and Mach number characteristic of conditions found in the 0.8+1.2 solar mass (CO0812) model discussed by Fenn et al. (2016). We probe the ignition conditions for different degrees of compressibility in turbulent driving. We assess the probability of successful detonations based on characteristics of the identified ignition kernels, using Eulerian and Lagrangian statistics of turbulent flow. We found that material with very short ignition times is abundant when turbulence is driven compressively. This material forms contiguous structures that persist over many ignition time scales, and that we identify as prospective detonation kernels. Detailed analysis of the kernels revealed that their central regions are densely filled with material characterized by short ignition times and contain the minimum mass required for self-sustained detonations to form. It is conceivable that ignition kernels will be formed for lower compressibility in the turbulent driving.
However, we found no detonation kernels in models driven 87.5 percent compressively. We indirectly confirmed the existence of the lower limit of the degree of compressibility of the turbulent drive for the formation of detonation kernels by analyzing simulation results of the He0609 model of Fenn et al. (2016), which produces a detonation in a helium-rich boundary layer. We found that the amount of energy in the compressible component of the kinetic energy in this model corresponds to about 96 percent compressibility in the turbulent drive. The fact that no detonation was found in the original CO0812 model for nominally the same problem conditions suggests that models with carbon-rich boundary layers may require higher resolution in order to adequately represent the mass distributions in terms of ignition times.
A density-adaptive SPH method with kernel gradient correction for modeling explosive welding
NASA Astrophysics Data System (ADS)
Liu, M. B.; Zhang, Z. L.; Feng, D. L.
2017-09-01
Explosive welding involves processes such as the detonation of the explosive, the impact of metal structures, and strong fluid-structure interaction, yet the whole process of explosive welding has not been well modeled before. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy. A density-adaptive technique that can effectively treat large density ratios is also proposed. The developed SPH model is first validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture the typical physics in explosive welding, including the explosion wave, welding surface morphology, jet flow and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.
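The kernel gradient correction can be illustrated in one dimension: dividing the raw SPH gradient estimate by a correction factor built from the same kernel sums restores exactness for linear fields. A minimal sketch with an illustrative cubic-spline kernel and random particle spacing (not the paper's full algorithm):

```python
import numpy as np

def dW(r, h):
    """Gradient of the 1D cubic spline kernel (normalization 2/(3h))."""
    q = np.abs(r) / h
    g = np.where(q < 1, -3*q + 2.25*q**2,
                 np.where(q < 2, -0.75*(2 - q)**2, 0.0))
    return np.sign(r) * g * 2.0 / (3.0 * h**2)

# Irregularly spaced particles carrying the linear field f(x) = 2x.
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 10.0, 200))
V = np.gradient(x)                 # particle "volumes" (local spacing)
f = 2.0 * x
h, i = 0.3, 100                    # smoothing length, interior particle

grad = dW(x - x[i], h)
raw = np.sum(V * (f - f[i]) * grad)    # standard SPH gradient estimate
L = np.sum(V * (x - x[i]) * grad)      # 1D kernel-gradient correction factor
corrected = raw / L
assert abs(corrected - 2.0) < 1e-8     # exact for linear fields
```

The uncorrected estimate `raw` is biased whenever the particle spacing is irregular; the correction factor `L` absorbs that bias by construction.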
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Chinthaka M; Lindemer, Terrence; Voit, Stewart L
2014-11-01
Three sets of experimental conditions, differing in the cover gases used during sample preparation, were tested to synthesize uranium carbonitride (UC1-xNx) microparticles. In the first two sets of experiments, using (N2 to N2-4%H2 to Ar) and (Ar to N2 to Ar) environments, single-phase UC1-xNx was synthesized. When reducing environments (Ar-4%H2 to N2-4%H2 to Ar-4%H2) were utilized, single-phase UC1-xNx kernels with up to 97% of theoretical density were obtained. Physical and chemical characteristics such as density, phase purity, and chemical composition of the synthesized UC1-xNx materials are provided for the different experimental conditions used. An in-depth analysis of the microstructures of UC1-xNx has been carried out and is discussed with the objective of large-batch fabrication of high-density UC1-xNx kernels.
Structured Kernel Subspace Learning for Autonomous Robot Navigation.
Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai
2018-02-14
This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to safely navigate in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
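The paper's structured low-rank learning is not reproduced here, but the general idea of approximating a kernel matrix by a low-rank, symmetric positive semi-definite surrogate can be sketched with the standard Nyström construction (the RBF kernel and all parameter values are illustrative assumptions):

```python
import numpy as np

def rbf(X, Y, gamma=0.1):
    """RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
K = rbf(X, X)                       # full 300 x 300 kernel matrix

# Nystroem: choose m landmark points and approximate K ~ C W^+ C^T,
# which is low rank and symmetric positive semi-definite by construction.
m = 60
idx = rng.choice(len(X), m, replace=False)
C = rbf(X, X[idx])                  # 300 x m cross-kernel block
W = rbf(X[idx], X[idx])             # m x m landmark block
K_approx = C @ np.linalg.pinv(W) @ C.T

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
assert rel_err < 0.3
```

The approximation cuts the cost of downstream Gaussian process regression from O(N^3) toward O(N m^2), which is the motivation for learning such structured surrogates.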
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machines (SVM) or Least Square SVM (LS-SVM), which identify the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving reduced kernel-based single-hidden-layer feedforward networks (SLFNs). In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of sufficient support vectors. Experimental results on a wide variety of real-world small-instance-size and large-instance-size applications in the context of binary classification, multi-class problems and regression are then reported to show that RKELM can perform at a competitive level of generalization performance compared with SVM/LS-SVM at only a fraction of the computational effort.
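A minimal sketch of the reduced-kernel idea: draw a random subset of mapping samples as kernel centers and solve one regularized least-squares problem, with no iterative SVM-style training. The RBF kernel and all parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf(X, centers, gamma=2.0):
    """RBF feature map of X against the selected kernel centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
X = rng.uniform(-3.0, 3.0, size=(400, 1))
y = np.sin(X[:, 0])                         # toy regression target

# Randomly select m mapping samples, then a single closed-form solve.
m, C = 40, 1e3
centers = X[rng.choice(len(X), m, replace=False)]
H = rbf(X, centers)                         # N x m reduced feature map
beta = np.linalg.solve(H.T @ H + np.eye(m) / C, H.T @ y)

rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
assert rmse < 0.05
```

The whole "training" is one m-by-m linear solve, which is the source of the speedup over iterative support-vector identification.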
Graphical and Numerical Descriptive Analysis: Exploratory Tools Applied to Vietnamese Data
ERIC Educational Resources Information Center
Haughton, Dominique; Phong, Nguyen
2004-01-01
This case study covers several exploratory data analysis ideas, the histogram and boxplot, kernel density estimates, the recently introduced bagplot--a two-dimensional extension of the boxplot--as well as the violin plot, which combines a boxplot with a density shape plot. We apply these ideas and demonstrate how to interpret the output from these…
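A kernel density estimate of the kind used in the case study can be sketched in a few lines; this assumes Gaussian kernels and Silverman's rule-of-thumb bandwidth, which may differ from the choices made in the case study:

```python
import numpy as np

def gaussian_kde(data, grid, h):
    """Kernel density estimate: average of Gaussian bumps centred on data."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
grid = np.linspace(-5.0, 5.0, 401)
h = 1.06 * data.std() * len(data) ** (-0.2)     # Silverman's rule of thumb
density = gaussian_kde(data, grid, h)

# The estimate is a proper density (it integrates to ~1 over the grid).
assert abs(density.sum() * (grid[1] - grid[0]) - 1.0) < 0.02
```

Plotting `density` against `grid` gives the smooth shape that a violin plot wraps around a boxplot.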
Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.
Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2012-11-08
A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI), as is commonly done in routine clinical practice, and the measured value was compared with the true value (the known density of the object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied by resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
Convergence studies in meshfree peridynamic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seleson, Pablo; Littlewood, David J.
2016-04-15
Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions, when using the proposed methods.
Pixel-based meshfree modelling of skeletal muscles.
Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu
2016-01-01
This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows the construction of a simulation model directly from pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase, multichannel level-set-based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.
NASA Astrophysics Data System (ADS)
Güleçyüz, M. Ç.; Şenyiğit, M.; Ersoy, A.
2018-01-01
The Milne problem is studied in one-speed neutron transport theory using the linearly anisotropic scattering kernel, which combines forward and backward scattering (extremely anisotropic scattering), for a non-absorbing medium with specular and diffuse reflection boundary conditions. In order to calculate the extrapolated endpoint for the Milne problem, the Legendre polynomial approximation (PN method) is applied and numerical results are tabulated for selected cases as a function of different degrees of anisotropic scattering. Finally, some results are discussed and compared with existing results in the literature.
On the solution of integral equations with strongly singular kernels
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1986-01-01
Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
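For reference, the Hadamard (finite-part) interpretation invoked above can be written, for the m = 2 case, in its standard textbook form (not a formula taken from the paper):

```latex
\operatorname{f.p.}\!\int_a^b \frac{\phi(t)}{(t-x)^2}\,dt
  = \lim_{\varepsilon \to 0^+}\left[
      \int_a^{x-\varepsilon} \frac{\phi(t)}{(t-x)^2}\,dt
    + \int_{x+\varepsilon}^{b} \frac{\phi(t)}{(t-x)^2}\,dt
    - \frac{2\,\phi(x)}{\varepsilon}
    \right], \qquad a < x < b .
```

The subtracted term cancels the divergence of the two truncated integrals, leaving a finite value whenever the density φ is smooth enough near t = x.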
On the solution of integral equations with strongly singular kernels
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1985-01-01
In this paper some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
On the solution of integral equations with strongly singular kernels
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1987-01-01
Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.
Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar
2017-03-01
The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule Computer-Aided Detection (CAD). In this paper, we describe a new CT lung CAD method which aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1-norm regularizer for heterogeneous feature fusion and selection at the feature-subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for handling the ℓ2,1 norm of the kernel weights, accelerated in the style of FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1-norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of the Geometric mean (G-mean) and the Area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits some remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1-norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets, and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature.
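The proximal step underlying the first optimization strategy has a well-known closed form for the ℓ2,1 norm: group-wise soft-thresholding. A minimal sketch, with rows playing the role of kernel-weight groups and illustrative values:

```python
import numpy as np

def prox_l21(V, lam):
    """Proximal operator of lam * ||V||_{2,1} with rows as groups:
    each row is shrunk toward zero by group soft-thresholding."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * V

V = np.array([[3.0, 4.0],     # norm 5: shrunk to norm 4
              [0.1, 0.1]])    # norm ~0.14 < lam: zeroed out entirely
P = prox_l21(V, 1.0)
assert np.allclose(P[0], [2.4, 3.2])     # (1 - 1/5) * row
assert np.allclose(P[1], [0.0, 0.0])     # whole group pruned
```

Zeroing whole groups at once is what makes the ℓ2,1 regularizer prune entire feature subsets rather than individual entries.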
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, Jonathon; Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Neaton, Jeffrey B., E-mail: jbneaton@lbl.gov
Adsorption of gas molecules in metal-organic frameworks is governed by many factors, the most dominant of which are the interaction of the gas with open metal sites, and the interaction of the gas with the ligands. Herein, we examine the latter class of interaction in the context of CO2 binding to benzene. We begin by clarifying the geometry of the CO2–benzene complex. We then generate a benchmark binding curve using a coupled-cluster approach with single, double, and perturbative triple excitations [CCSD(T)] at the complete basis set (CBS) limit. Against this ΔCCSD(T)/CBS standard, we evaluate a plethora of electronic structure approximations: Hartree-Fock, second-order Møller-Plesset perturbation theory (MP2) with the resolution-of-the-identity approximation, attenuated MP2, and a number of density functionals with and without different empirical and nonempirical van der Waals corrections. We find that finite-basis MP2 significantly overbinds the complex. On the other hand, even the simplest empirical correction to standard density functionals is sufficient to bring the binding energies to well within 1 kJ/mol of the benchmark, corresponding to an error of less than 10%; PBE-D in particular performs well. Methods that explicitly include nonlocal correlation kernels, such as VV10, vdW-DF2, and ωB97X-V, perform with similar accuracy for this system, as do ωB97X and M06-L.
Yao, Jincao; Yu, Huimin; Hu, Roland
2017-01-01
This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.
Hydroxocobalamin treatment of acute cyanide poisoning from apricot kernels.
Cigolini, Davide; Ricci, Giogio; Zannoni, Massimo; Codogni, Rosalia; De Luca, Manuela; Perfetti, Paola; Rocca, Giampaolo
2011-05-24
Clinical experience with hydroxocobalamin in acute cyanide poisoning via ingestion remains limited. This case concerns a 35-year-old mentally ill woman who consumed more than 20 apricot kernels. Published literature suggests each kernel would have contained cyanide concentrations ranging from 0.122 to 4.09 mg/g (average 2.92 mg/g). On arrival, the woman appeared asymptomatic with a raised pulse rate and slight metabolic acidosis. Forty minutes after admission (approximately 70 min postingestion), the patient experienced headache, nausea and dyspnoea, and was hypotensive, hypoxic and tachypnoeic. Following treatment with amyl nitrite and sodium thiosulphate, her methaemoglobin level was 10%. This prompted the administration of oxygen, which evoked a slight improvement in her vital signs. Hydroxocobalamin was then administered. After 24 h, she was completely asymptomatic with normalised blood pressure and other haemodynamic parameters. This case reinforces the safety and effectiveness of hydroxocobalamin in acute cyanide poisoning by ingestion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, A. T.; Betzler, B. R.; Burke, T. P.
Uncertainties in the composition and fabrication of fuel compacts for the Fort St. Vrain (FSV) high temperature gas reactor have been studied by performing eigenvalue sensitivity studies that represent the key uncertainties for the FSV neutronic analysis. The uncertainties for the TRISO fuel kernels were addressed by developing a suite of models for an 'average' FSV fuel compact that models the fuel as (1) a mixture of two different TRISO fuel particles representing fissile and fertile kernels, (2) a mixture of four different TRISO fuel particles representing small and large fissile kernels and small and large fertile kernels and (3) a stochastic mixture of the four types of fuel particles where every kernel has its diameter sampled from a continuous probability density function. All of the discrete diameter and continuous diameter fuel models were constrained to have the same fuel loadings and packing fractions. For the non-stochastic discrete diameter cases, the MCNP compact model arranged the TRISO fuel particles on a hexagonal honeycomb lattice. This lattice-based fuel compact was compared to a stochastic compact where the locations (and kernel diameters for the continuous diameter cases) of the fuel particles were randomly sampled. Partial core configurations were modeled by stacking compacts into fuel columns containing graphite. The differences in eigenvalues between the lattice-based and stochastic models were small but the runtime of the lattice-based fuel model was roughly 20 times shorter than with the stochastic-based fuel model.
An atomistic fingerprint algorithm for learning ab initio molecular force fields
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Zhang, Dongkun; Karniadakis, George Em
2018-01-01
Molecular fingerprints, i.e., feature vectors describing atomistic neighborhood configurations, are an important abstraction and a key ingredient for data-driven modeling of potential energy surfaces and interatomic forces. In this paper, we present the density-encoded canonically aligned fingerprint algorithm, which is robust and efficient, for fitting per-atom scalar and vector quantities. The fingerprint is essentially a continuous density field formed through the superimposition of smoothing kernels centered on the atoms. Rotational invariance of the fingerprint is achieved by aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame computed from a kernel minisum optimization procedure. We show that this approach is superior to principal component analysis-based methods, especially when the atomistic neighborhood is sparse and/or contains symmetry. We propose that the "distance" between the density fields be measured using a volume integral of their pointwise difference. This can be efficiently computed using optimal quadrature rules, which only require discrete sampling at a small number of grid points. We also experiment with the choice of weight functions for constructing the density fields and characterize their performance for fitting interatomic potentials. The applicability of the fingerprint is demonstrated through a set of benchmark problems.
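The density-field construction and the volume-integral distance can be sketched as follows; the Gaussian smoothing kernel, the plain Riemann-sum quadrature, and the atom coordinates are illustrative assumptions (the paper uses optimal quadrature rules and a canonical alignment step that are omitted here):

```python
import numpy as np

def density_field(atoms, grid, sigma=0.5):
    """Smoothed density: superimposed Gaussian kernels centred on the atoms."""
    d2 = ((grid[:, None, :] - atoms[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2)).sum(axis=1)

# Two hypothetical three-atom neighborhoods; B is a rigidly shifted copy of A.
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B = A + np.array([0.1, 0.0, 0.0])

# Sample both fields on a grid and take the volume integral of the pointwise
# squared difference (plain Riemann sum here, in place of optimal quadrature).
ax = np.linspace(-2.0, 3.0, 26)
gx, gy, gz = np.meshgrid(ax, ax, ax, indexing="ij")
grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
dV = (ax[1] - ax[0]) ** 3

dist2 = ((density_field(A, grid) - density_field(B, grid)) ** 2).sum() * dV
assert dist2 > 0.0                          # distinct configurations differ
assert ((density_field(A, grid) - density_field(A, grid)) ** 2).sum() == 0.0
```

The continuity of the field in the atom positions is what makes such a distance smooth, and hence usable for regression of forces.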
Flood, Jessica S; Porphyre, Thibaud; Tildesley, Michael J; Woolhouse, Mark E J
2013-10-08
When modelling infectious diseases, accurately capturing the pattern of dissemination through space is key to providing optimal recommendations for control. Mathematical models of disease spread in livestock, such as for foot-and-mouth disease (FMD), have done this by incorporating a transmission kernel which describes the decay in transmission rate with increasing Euclidean distance from an infected premises (IP). However, this assumes a homogeneous landscape, and is based on the distance between point locations of farms. Indeed, underlying the spatial pattern of spread are the contact networks involved in transmission. Accordingly, area-weighted tessellation around farm point locations has been used to approximate field-contiguity and simulate the effect of contiguous premises (CP) culling for FMD. Here, geographic data were used to determine contiguity based on distance between premises' fields and presence of landscape features for two sample areas in Scotland. Sensitivity, positive predictive value, and the True Skill Statistic (TSS) were calculated to determine how point distance measures and area-weighted tessellation compared to the 'gold standard' of the map-based measures in identifying CPs. In addition, the mean degree and density of the different contact networks were calculated. Utilising point distances <1 km and <5 km as a measure for contiguity resulted in poor discrimination between map-based CPs/non-CPs (TSS 0.279-0.344 and 0.385-0.400, respectively). Point distance <1 km missed a high proportion of map-based CPs; <5 km point distance picked up a high proportion of map-based non-CPs as CPs. Area-weighted tessellation performed best, with reasonable discrimination between map-based CPs/non-CPs (TSS 0.617-0.737) and comparable mean degree and density. Landscape features altered network properties considerably when taken into account. The farming landscape is not homogeneous.
Basing contiguity on geographic locations of field boundaries and including landscape features known to affect transmission in FMD models are likely to improve individual farm-level accuracy of spatial predictions in the event of future outbreaks. If a substantial proportion of FMD transmission events occur by contiguous spread, and CPs should be assigned an elevated relative transmission rate, the shape of the kernel could be significantly altered, since the ability to discriminate between map-based CPs and non-CPs differs over different Euclidean distances.
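The True Skill Statistic used to score the contiguity measures has a simple closed form; a minimal sketch with hypothetical confusion-matrix counts (the numbers are illustrative, not from the study):

```python
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1; ranges from -1 to 1."""
    sensitivity = tp / (tp + fn)    # fraction of true CPs identified
    specificity = tn / (tn + fp)    # fraction of true non-CPs identified
    return sensitivity + specificity - 1.0

# Hypothetical counts for a CP classifier scored against map-based truth.
tss = true_skill_statistic(tp=70, fp=10, fn=30, tn=90)
assert abs(tss - 0.6) < 1e-12     # 0.7 + 0.9 - 1
```

Unlike raw accuracy, the TSS is insensitive to the CP/non-CP prevalence, which is why it suits comparisons across the two sample areas.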
NASA Astrophysics Data System (ADS)
Donlon, Kevan; Ninkov, Zoran; Baum, Stefi
2016-08-01
Interpixel capacitance (IPC) is a deterministic electronic coupling by which signal generated in one pixel is measured in neighboring pixels. Examination of dark frames from test NIRCam arrays corroborates earlier results and simulations illustrating a signal-dependent coupling: when the signal on an individual pixel is larger, the fractional coupling to its nearest neighbors is smaller than when the signal is lower. Frames from test arrays indicate a drop in average coupling from approximately 1.0% at low signals down to approximately 0.65% at high signals, depending on the particular array in question. The photometric ramifications of this non-uniformity are not fully understood. This non-uniformity introduces a non-linearity into the current mathematical model for IPC coupling, which has been formalized as convolution by a blur kernel. Signal dependence requires that the blur kernel be locally defined as a function of signal intensity. Through application of a signal-dependent coupling kernel, the IPC coupling can be modeled computationally. This method allows for simultaneous knowledge of the intrinsic parameters of the image scene, the result of applying a constant IPC, and the result of a signal-dependent IPC. In the age of sub-pixel precision in astronomy, these effects must be properly understood and accounted for in order for the data to accurately represent the object of observation. The method is implemented through Python-scripted processing of images. The introduction of IPC into simulated frames is accomplished through convolution of the image with a blur kernel whose parameters are themselves locally defined functions of the image. These techniques can be used to enhance the data processing pipeline for NIRCam.
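The locally defined convolution described above can be sketched as follows. The coupling fractions (about 1.0% at low signal, 0.65% at high signal) come from the abstract, but the linear ramp between them and the simple nearest-neighbour kernel shape are assumptions made for illustration:

```python
import numpy as np

def ipc_kernel(alpha):
    # 3x3 nearest-neighbour coupling kernel; the centre retains 1 - 4*alpha
    return np.array([[0.0,   alpha,              0.0],
                     [alpha, 1.0 - 4.0 * alpha,  alpha],
                     [0.0,   alpha,              0.0]])

def apply_signal_dependent_ipc(image, alpha_lo=0.010, alpha_hi=0.0065):
    """Scatter each pixel's signal through a kernel whose coupling depends on that signal."""
    img = np.asarray(image, dtype=float)
    smax = img.max() if img.max() > 0 else 1.0
    out = np.zeros((img.shape[0] + 2, img.shape[1] + 2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # assumed form: coupling ramps linearly from alpha_lo to alpha_hi with signal
            a = alpha_lo + (alpha_hi - alpha_lo) * img[i, j] / smax
            out[i:i + 3, j:j + 3] += img[i, j] * ipc_kernel(a)
    return out[1:-1, 1:-1]  # crop the working padding
```

Because each 3×3 kernel sums to one, total signal is conserved; a bright isolated pixel keeps 1 − 4α of its charge and leaks α to each of its four nearest neighbours.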
Phase space explorations in time dependent density functional theory
NASA Astrophysics Data System (ADS)
Rajam, Aruna K.
Time-dependent density functional theory (TDDFT) is a useful tool for studying the dynamic behavior of correlated electronic systems under the influence of external potentials. The success of this formally exact theory relies in practice on approximations for the exchange-correlation potential, which is a complicated functional of the density, non-local in space and time. Adiabatic approximations (such as the ALDA), which are local in time, are the most commonly used in the growing range of applications of the field. Going beyond the ALDA has proved difficult, leading to mathematical inconsistencies. We explore the regions where the theory faces challenges and try to answer some of them using insights from two-electron model systems. In this thesis we propose a phase-space extension of TDDFT, aiming to address the challenges the theory currently faces by exploring the one-body phase space. We give a general introduction to the theory and its mathematical background in the first chapter. In the second chapter, we carry out a detailed study of instantaneous phase-space densities and argue that functionals of distributions can be a better alternative to the nonlocality issue of the exchange-correlation potentials. For this we study in detail the interacting and non-interacting phase-space distributions for the Hooke's atom model. The applicability of ALDA-based TDDFT to dynamics in strong fields can become severely problematic due to the failure of the single-Slater-determinant picture. In the third chapter, we analyze how phase-space distributions can shed some light on this problem, through a comparative study of Kohn-Sham and interacting phase-space and momentum distributions for single-ionization and double-ionization systems. 
Using a simple model of two-electron systems, we show that the momentum distribution computed directly from the exact KS system contains spurious oscillations: a non-classical description of the essentially classical two-electron dynamics. In time-dependent density matrix functional theory (TDDMFT), the evolution equation for the 1RDM (first-order reduced density matrix) contains the second-order reduced density matrix (2RDM), which has to be expressed in terms of 1RDMs. Any uncorrelated approximation (such as Hartree-Fock) for the 2RDM fails to capture the natural occupations of the system. In the fourth chapter, we show that applying quasi-classical and semi-classical approximations can capture the natural occupations of excited systems, studying a time-dependent Moshinsky atom model for this purpose. The fifth chapter contains a comparative study of the existing non-local exchange-correlation kernels based on the current-density response framework and the co-moving framework. We show that the two approaches, though coinciding in the linear-response regime, turn out to differ in the non-linear regime.
Ducru, Pablo; Josey, Colin; Dibert, Karia; ...
2017-01-25
This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of the same quantity at reference temperatures (T_j). The problem is formalized in a cross-section-independent fashion by considering the kernels of the operators that convert cross-section-related quantities from a temperature T_0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (T_j). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solution of a linear algebraic system. The choice of reference temperatures (T_j) is then optimized so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [T_min, T_max]. The performance of these kernel reconstruction methods is then assessed, in light of previous temperature interpolation methods, by testing them on isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
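The L2-optimal coefficients described above solve a small Gram system. A hedged sketch on a uniform grid, using a toy Gaussian stand-in for the broadening kernel (the real Doppler kernel and the reference-temperature optimization are considerably more involved):

```python
import numpy as np

def l2_interp_coefficients(ref_kernels, target_kernel, dx):
    """Weights a_j minimising ||k_T - sum_j a_j k_{T_j}||_2 on a uniform grid."""
    G = np.array([[np.dot(ki, kj) * dx for kj in ref_kernels] for ki in ref_kernels])
    b = np.array([np.dot(ki, target_kernel) * dx for ki in ref_kernels])
    return np.linalg.solve(G, b)  # Gram-system inversion

def toy_kernel(T, x, T0=300.0):
    # stand-in only: a Gaussian whose width grows like sqrt(T), loosely
    # mimicking thermal broadening; NOT the actual Doppler broadening kernel
    w = 0.1 * np.sqrt(T / T0)
    return np.exp(-x**2 / (2.0 * w**2)) / (w * np.sqrt(2.0 * np.pi))

x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]
refs = [toy_kernel(T, x) for T in (300.0, 600.0, 900.0)]
coeffs = l2_interp_coefficients(refs, toy_kernel(450.0, x), dx)
reconstruction = sum(a * k for a, k in zip(coeffs, refs))
```

When the target kernel happens to lie exactly in the span of the reference kernels, the solve recovers the combination exactly, which makes the scheme easy to sanity-check.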
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach, whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS), requiring spatial-domain convolutions, and (2) fast adaptive scatter kernel superposition (fASKS), where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties, and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, 19 ± 25 HU with fASKS, and 13 ± 21 HU with ASKS. HU accuracy and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
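The Fourier-space trick behind fASKS can be illustrated with a stationary kernel: scatter is estimated as a convolution of the current primary estimate and subtracted iteratively, with the convolution done by FFT. This sketch deliberately omits the adaptive, thickness-dependent kernel shaping that distinguishes ASKS/fASKS:

```python
import numpy as np

def deconvolve_scatter_fft(measured, kernel, n_iter=5):
    """Iterate primary = measured - kernel (*) primary, convolving in Fourier space.
    `kernel` must be origin-centred (peak at index [0, 0]) for an unshifted convolution."""
    K = np.fft.rfft2(kernel, s=measured.shape)
    primary = measured.astype(float).copy()
    for _ in range(n_iter):
        scatter = np.fft.irfft2(np.fft.rfft2(primary) * K, s=measured.shape)
        primary = measured - scatter
    return primary
```

With a delta kernel scaled by a scatter fraction s < 1, the iteration is a geometric series converging to measured / (1 + s), which makes the fixed point easy to verify by hand.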
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull, and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capture the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.
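The transform-smooth-back-transform pipeline can be sketched as follows, using Chen-style beta kernels whose shape parameters vary with the evaluation point. For brevity a single lognormal plays the role of the fitted parsimonious mixture, and the paper's bandwidth rule is replaced by a fixed b; both are simplifying assumptions:

```python
import numpy as np
from math import lgamma, erf

def beta_pdf(x, a, b):
    # beta density via log-gamma, stable for the large shape parameters used below
    c = np.exp(lgamma(a + b) - lgamma(a) - lgamma(b))
    return c * x**(a - 1) * (1 - x)**(b - 1)

def beta_kernel_density(u_eval, u_data, b=0.05):
    """Chen-style beta-kernel estimate on [0, 1]: shape parameters depend on u."""
    u_data = np.asarray(u_data)
    return np.array([beta_pdf(u_data, u / b + 1.0, (1.0 - u) / b + 1.0).mean()
                     for u in np.atleast_1d(u_eval)])

def claims_density(x_eval, x_data, cdf, pdf, b=0.05):
    """Transform claims by the model CDF, smooth on [0, 1], back-transform."""
    g = beta_kernel_density(cdf(x_eval), cdf(x_data), b)
    return g * pdf(x_eval)  # change of variables: f_X(x) = g(F(x)) F'(x)

# usage with a standard lognormal as a stand-in for the fitted parsimonious mixture
lognorm_cdf = lambda x: 0.5 * (1.0 + np.vectorize(erf)(np.log(x) / np.sqrt(2.0)))
lognorm_pdf = lambda x: np.exp(-np.log(x)**2 / 2.0) / (x * np.sqrt(2.0 * np.pi))
claims = np.random.default_rng(0).lognormal(size=20000)
estimate = claims_density(np.array([0.5, 1.0, 2.0]), claims, lognorm_cdf, lognorm_pdf)
```

When the model CDF is correct, the transformed data are uniform, the smoothed factor g is near 1, and the back-transformed estimate tracks the model density; when it is not, g supplies the automatic fine-tuning described above.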
Correction of scatter in megavoltage cone-beam CT
NASA Astrophysics Data System (ADS)
Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.
2001-03-01
The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
Urban Transmission of American Cutaneous Leishmaniasis in Argentina: Spatial Analysis Study
Gil, José F.; Nasser, Julio R.; Cajal, Silvana P.; Juarez, Marisa; Acosta, Norma; Cimino, Rubén O.; Diosque, Patricio; Krolewiecki, Alejandro J.
2010-01-01
We used kernel density and scan statistics to examine the spatial distribution of cases of pediatric and adult American cutaneous leishmaniasis in an urban disease-endemic area in Salta Province, Argentina. Spatial analysis was used for the whole population and stratified by women > 14 years of age (n = 159), men > 14 years of age (n = 667), and children < 15 years of age (n = 213). Although kernel density for adults encompassed nearly the entire city, distribution in children was most prevalent in the peripheral areas of the city. Scan statistic analysis for adult males, adult females, and children found 11, 2, and 8 clusters, respectively. Clusters for children had the highest odds ratios (P < 0.05) and were located in proximity of plantations and secondary vegetation. The data from this study provide further evidence of the potential urban transmission of American cutaneous leishmaniasis in northern Argentina. PMID:20207869
KERNELHR: A program for estimating animal home ranges
Seaman, D.E.; Griffith, B.; Powell, R.A.
1998-01-01
Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
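A fixed-kernel utilization distribution of the kind KERNELHR outputs can be sketched on a grid; an adaptive variant would let the bandwidth h vary with local point density. The relocations below are hypothetical:

```python
import numpy as np

def kernel_ud(locs, grid_x, grid_y, h):
    """Fixed-kernel utilization distribution from relocation points (Gaussian kernel)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    ud = np.zeros_like(gx, dtype=float)
    for x, y in locs:
        ud += np.exp(-((gx - x)**2 + (gy - y)**2) / (2.0 * h**2))
    return ud / ud.sum()  # normalize to a discrete probability surface

def home_range_area(ud, cell_area, level=0.95):
    """Area of the smallest set of grid cells containing `level` of the UD."""
    p = np.sort(ud.ravel())[::-1]
    n_cells = int(np.searchsorted(np.cumsum(p), level)) + 1
    return n_cells * cell_area

# hypothetical relocations (map units) on a 10 x 10 study area
grid = np.linspace(-5.0, 5.0, 101)
locs = [(0.0, 0.0), (1.0, 0.5), (-0.5, 1.0), (0.3, -0.8)]
ud = kernel_ud(locs, grid, grid, h=1.0)
area95 = home_range_area(ud, cell_area=(grid[1] - grid[0])**2)
```

The 95% isopleth here is the usual home-range estimate, while smaller levels (e.g. 50%) pick out core areas, mirroring the program's core-area utility.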
Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012.
Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade
2017-08-17
Analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. Exploratory study considering cases of classical dengue and severe dengue with laboratory confirmation of the infection in the city of Rio de Janeiro during the years 2011/2012. Cases notified in the Notifiable Diseases Information System (SINAN) in 2011 and 2012 were georeferenced using the "street" and "number" fields, through the automatic process of the Geocoding tool in the ArcGIS 10 program. The spatial analysis was done with the kernel density estimator. Kernel density pointed out hotspots for classical dengue that did not coincide geographically with those for severe dengue, located in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process showed a loss of 41% of classical dengue records and 17% of severe dengue records due to the address information on the SINAN notification form. The hotspots near the favelas suggest that the social vulnerability of these localities can be an influencing factor in the occurrence of this disease, since the supply of, and access to, essential goods and services is deficient for this population. To reduce this vulnerability, interventions must be related to macroeconomic policies.
3D local feature BKD to extract road information from mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang
2017-08-01
Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components that encode the shape and intensity information of the 3D point clouds, which are then fed to a random forest classifier to extract curbs and markings on the road. These are used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed, we applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups; 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they had discovered a new, breakthrough intervention in 16% of trials. We propose these figures as benchmarks against which future development of 'breakthrough' treatments should be measured.
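Under a weighted Gaussian kernel density estimate, tail probabilities such as "the next trial shows a large effect" have a closed form through the normal survival function. A sketch with hypothetical effect sizes and per-trial weights (the paper's adaptive bandwidths are replaced here by a single fixed h):

```python
from math import erf, sqrt

def normal_sf(z):
    # survival function of the standard normal distribution
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def prob_effect_exceeds(threshold, effects, weights, h):
    """P(effect > threshold) under a weighted Gaussian KDE with bandwidth h.
    The mixture tail has a closed form: sum_i w_i * SF((threshold - x_i) / h)."""
    total = float(sum(weights))
    return sum((w / total) * normal_sf((threshold - x) / h)
               for x, w in zip(effects, weights))

# hypothetical standardized treatment effects and per-trial weights
effects = [0.2, 0.5, 0.9, 1.1, 0.4]
weights = [50.0, 80.0, 30.0, 20.0, 60.0]
p_large = prob_effect_exceeds(1.0, effects, weights, h=0.2)
```

Because the KDE is a finite mixture of Gaussians, no numerical integration is needed: each data point contributes its own analytically known tail mass.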
Learning a peptide-protein binding affinity predictor with kernel ridge regression
2013-01-01
Background The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interactions. They are thus an interesting class of therapeutics, since they also display strong activity, high selectivity, low toxicity, and few drug-drug interactions. Furthermore, predicting peptides that would bind to specific MHC alleles would be of tremendous benefit for improving vaccine-based therapy and possibly generating antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results We propose a specialized string kernel for small biomolecules, peptides, and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and generalizes eight kernels, including the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low-complexity dynamic programming algorithm for the exact computation of the kernel and a linear-time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding-pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and to three Quantitative Structure Affinity Model benchmark datasets. 
Conclusion On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting peptide-protein binding affinities. The proposed approach is flexible and can be applied to predict any quantitative biological activity. Moreover, generating reliable peptide-protein binding affinities will also improve systems biology modelling of interaction pathways. Lastly, the method should be of value to a large segment of the research community, with the potential to accelerate the discovery of peptide-based drugs and facilitate vaccine development. The proposed kernel is freely available at http://graal.ift.ulaval.ca/downloads/gs-kernel/. PMID:23497081
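The kernel ridge regression step has a closed form: α = (K + λI)⁻¹y, with fitted values Kα. A sketch pairing it with a toy k-spectrum string kernel, which is a much simpler relative of the generalized kernel in the paper; the peptides and affinities below are made up for illustration:

```python
import numpy as np
from collections import Counter

def spectrum_kernel(s, t, k=2):
    # toy k-spectrum string kernel: inner product of k-mer count vectors
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return float(sum(cs[m] * ct[m] for m in cs))

def krr_fit(K, y, lam=0.1):
    # kernel ridge regression: solve (K + lam * I) alpha = y
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

# hypothetical peptides and binding affinities (illustration only)
peptides = ["ACDY", "ACDF", "WYKL", "WYKM"]
y = np.array([1.0, 0.9, 0.2, 0.3])
K = np.array([[spectrum_kernel(s, t) for t in peptides] for s in peptides])
alpha = krr_fit(K, y)
y_fit = K @ alpha  # fitted affinities; lam controls shrinkage toward zero
```

A new peptide is scored by its kernel values against the training set, k(s_new, s_i), dotted with α; the ridge term λ keeps the Gram system well conditioned.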
Tricoli, Ugo; Macdonald, Callum M; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A
2018-02-01
Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.
On the solution of integral equations with a generalized cauchy kernel
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1986-01-01
In this paper a certain class of singular integral equations that may arise from mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that, in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement, and consequently the kernel has strong singularities of the form (t-x)^(-2) and x^(n-2)(t+x)^(-n), (n >= 2, 0 < x, t < b). Complex function theory is used to determine the fundamental function of the problem for the general case, and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.
Numerical method for solving the nonlinear four-point boundary value problems
NASA Astrophysics Data System (ADS)
Lin, Yingzhen; Lin, Jinnan
2010-12-01
In this paper, a new reproducing kernel space is constructed in order to solve a class of nonlinear four-point boundary value problems. The exact solution of the linear problem can be expressed in the form of a series, and the approximate solution of the nonlinear problem is given by an iterative formula. Compared with known investigations, the advantages of our method are that the representation of the exact solution is obtained in a new reproducing kernel Hilbert space and that the accuracy of the numerical computation is higher. We also present a convergence theorem, a complexity analysis, and an error estimate. The performance of the new method is illustrated with several numerical examples.
Frozen Gaussian approximation for 3D seismic tomography
NASA Astrophysics Data System (ADS)
Chai, Lihui; Tong, Ping; Yang, Xu
2018-05-01
Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenience in applying FGA; with this reformulation, one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed, based on a local fast Fourier transform, which greatly improves the speed of reconstruction as the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to applying wave-equation-based seismic tomography methods directly to real data around their dominant frequencies.
NASA Langley's Approach to the Sandia's Structural Dynamics Challenge Problem
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Kenny, Sean P.; Crespo, Luis G.; Elliott, Kenny B.
2007-01-01
The objective of this challenge is to develop a data-based probabilistic model of uncertainty to predict the behavior of subsystems (payloads) by themselves and while coupled to a primary (target) system. Although this type of analysis is routinely performed and is representative of issues faced in real-world system design and integration, there are still several key technical challenges that must be addressed when analyzing uncertain interconnected systems. One key technical challenge is that there is limited data on target configurations. Moreover, it is typical to have multiple data sets from experiments conducted at the subsystem level, but sample sizes are often not sufficient to compute high-confidence statistics. In this challenge problem, additional constraints are placed as ground rules for the participants. One such rule is that mathematical models of the subsystem are limited to linear approximations of the nonlinear physics of the problem at hand. Participants are also constrained to use these models and the multiple data sets to make predictions about the target system response under completely different input conditions. Our approach initially involved the screening of several different methods; three of those considered are presented herein. The first is based on the transformation of the modal data to an orthogonal space where the mean and covariance of the data are matched by the model. The other two approaches work in physical space, where the uncertain parameter set is made up of masses, stiffnesses, and damping coefficients; one matches confidence intervals of low-order moments of the statistics via optimization, while the second uses a kernel density estimation approach. The paper touches on all the approaches, lessons learned, validation metrics and their comparison, data quantity restrictions, and the assumptions/limitations of each approach. 
Keywords: Probabilistic modeling, model validation, uncertainty quantification, kernel density
Kernel methods and flexible inference for complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Capobianco, Enrico
2008-07-01
Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.
NASA Astrophysics Data System (ADS)
Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.
2017-12-01
The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.
A Novel Approach to Visualizing Dark Matter Simulations.
Kaehler, R; Hahn, O; Abel, T
2012-12-01
In the last decades, cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle-based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point-based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques common in SPH (Smoothed Particle Hydrodynamics) codes. This paper proposes three GPU-assisted rendering approaches based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular, they preserve caustics: regions of high density that emerge when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments, and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Rooyen, Isabella Johanna; Demkowicz, Paul Andrew; Riesterer, Jessica Lori
2012-12-01
The electron microscopic examination of selected irradiated TRISO coated particles from compact 6-3-2 of the AGR-1 experiment is presented in this report. Compact 6-3-2 refers to the compact in Capsule 6 at level 3 of Stack 2. The fuel used in the Capsule 6 compacts is called the "baseline" fuel, as it was fabricated with refined coating process conditions used to fabricate historic German fuel, chosen because of that fuel's excellent irradiation performance with UO2 kernels. The AGR-1 fuel, however, is made of low-enriched uranium oxycarbide (UCO). Kernel diameters are approximately 350 µm with a U-235 enrichment of approximately 19.7%. Compact 6-3-2 has been irradiated to a compact-average burn-up of 11.3% FIMA, with a time-average, volume-average temperature of 1070.2°C and a compact-average fast fluence of 2.38E21 n/cm
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L(1) error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
New approximate orientation averaging of the water molecule interacting with the thermal neutron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markovic, M.I.; Minic, D.M.; Rakic, A.D.
1992-02-01
This paper reports that, in exactly describing thermal neutron collisions with water molecules, orientation averaging is performed by an exact method (EOA{sub k}) and four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA{sub k}. The largest discrepancies between the EOA{sub k} results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.
Boisdenghien, Zino; Fias, Stijn; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul
2014-07-28
Most of the work done on the linear response kernel χ(r,r') has focussed on its atom-atom condensed form χAB. Our previous work [Boisdenghien et al., J. Chem. Theory Comput., 2013, 9, 1007] was the first effort to truly focus on the non-condensed form of this function for closed-(sub)shell atoms in a systematic fashion. In this work, we extend our method to the open-shell case. To simplify the plotting of our results, we average them into a symmetrized quantity χ(r,r'). This allows us to plot the linear response kernel for all elements up to and including argon and to investigate the periodicity throughout the first three rows of the periodic table and in the different representations of χ(r,r'). Within the context of Spin Polarized Conceptual Density Functional Theory, the first two-dimensional plots of spin polarized linear response functions are presented and commented on for selected cases on the basis of the atomic ground-state electronic configurations. Using the relation between the linear response kernel and the polarizability, we compare the values of the polarizability tensor calculated using our method to high-level values.
Ruan, Peiying; Hayashida, Morihiro; Maruyama, Osamu; Akutsu, Tatsuya
2013-01-01
Since many proteins express their functional activity by interacting with other proteins and forming protein complexes, it is very useful to identify the sets of proteins that form them. For that purpose, many methods for predicting protein complexes from protein-protein interactions have been developed, such as MCL, MCODE, RNSC, PCP, RRW, and NWE. These methods have dealt only with complexes of size three or larger, because they are often based on some density measure of subgraphs. However, heterodimeric protein complexes, which consist of two distinct proteins, account for a large fraction of the entries in several comprehensive databases of known complexes. In this paper, we propose several feature space mappings from protein-protein interaction data, in which each interaction is weighted based on its reliability. Furthermore, we make use of prior knowledge on protein domains to develop feature space mappings: a domain composition kernel and its combination kernel with our proposed features. We perform ten-fold cross-validation computational experiments. The results suggest that our proposed kernel considerably outperforms the naive Bayes-based method, which is the best existing method for predicting heterodimeric protein complexes. PMID:23776458
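The combination kernel mentioned above relies on a closure property of Mercer kernels: a nonnegative weighted sum of valid kernels is again a valid (positive semidefinite) kernel. A minimal numerical sketch, with hypothetical feature vectors standing in for the paper's domain-composition and interaction-reliability features:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def linear_kernel(X):
    return X @ X.T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))          # 20 hypothetical protein feature vectors

# A nonnegative weighted sum of two Gram matrices is again PSD,
# so it defines a valid combined kernel usable by any kernel classifier.
K = 0.7 * rbf_kernel(X) + 0.3 * linear_kernel(X)
min_eig = np.linalg.eigvalsh(K).min()
print(f"smallest eigenvalue: {min_eig:.3e}")
```

The combination weights are tuning parameters; in practice they are chosen by cross-validation, as in the ten-fold experiments described above.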
Knowledge Driven Image Mining with Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj
2004-01-01
This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational cost. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods to the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
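The low-rank approximation underlying fastKM can be sketched with a truncated eigendecomposition of a kernel Gram matrix: because kernel spectra typically decay quickly, a small number of eigenpairs reproduces the matrix accurately. This is a toy illustration of the general idea, not the fastKM construction itself; the data and kernel width are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))         # hypothetical genetic/environment covariates

# RBF Gram matrix (width chosen arbitrarily); its eigenvalues decay fast,
# so a low-rank factorization K ≈ U_r diag(s_r) U_r^T is accurate.
sq = np.sum(X**2, axis=1)
K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

s, U = np.linalg.eigh(K)              # eigenvalues in ascending order
r = 30
Ur, sr = U[:, -r:], s[-r:]            # keep the top-r spectrum
K_lowrank = (Ur * sr) @ Ur.T

rel_err = np.linalg.norm(K - K_lowrank) / np.linalg.norm(K)
print(f"rank-{r} relative Frobenius error: {rel_err:.2e}")
```

Replacing the full n-by-n nuisance kernel matrices with such rank-r factors is what lets multikernel estimation scale to the large sample sizes rare-variant analysis requires.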
NASA Astrophysics Data System (ADS)
Creusen, I. M.; Hazelhoff, L.; De With, P. H. N.
2013-10-01
In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated at the traffic sign detection stage. This paper focuses on reducing the computational load of the sliding-window object detection algorithm employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be efficiently implemented in the frequency domain for kernels larger than a certain size. We show that by carefully reordering the sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we suggest using the overlap-add method to keep memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from the literature, e.g., chi-squared kernel approximations and sub-class splitting algorithms, can be applied more easily and at a lower performance penalty because of the improved scalability.
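The frequency-domain trick above rests on the convolution theorem: zero-pad, multiply FFTs pointwise, and invert, which for large kernels is cheaper than direct sliding-window evaluation. A minimal 1-D sketch (the paper operates on 2-D feature maps and adds overlap-add blocking; the signal and kernel here are arbitrary stand-ins):

```python
import numpy as np

def fft_convolve(signal, kernel):
    """Full linear convolution via zero-padded FFTs
    (equivalent to direct convolution, up to round-off)."""
    n = len(signal) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

rng = np.random.default_rng(2)
x = rng.normal(size=1000)   # stand-in for a row of a feature map
w = rng.normal(size=64)     # stand-in for a linear SVM template

direct = np.convolve(x, w)  # O(n * kernel size)
fast = fft_convolve(x, w)   # O(n log n), independent of kernel size
print("max deviation:", np.max(np.abs(direct - fast)))
```

The paper's reordering amortizes the forward transforms: once a feature map (or pyramid level) is transformed, every cached transformed kernel can be applied with only multiplications and one inverse transform each.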
Zhang, Yanjun; Zhang, Xiangmin; Liu, Wenhui; Luo, Yuxi; Yu, Enjia; Zou, Keju; Liu, Xiaoliang
2014-01-01
This paper employed clinical polysomnographic (PSG) data, mainly all-night electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of EEG, EOG and EMG in the time and frequency domains to construct feature vectors, according to the existing literature as well as clinical experience. Through self-learning on sleep samples, the weights of the linear combination and the parameters of the multiple kernels of the fuzzy support vector machine (FSVM) were learned, and the multi-kernel FSVM (MK-FSVM) was constructed. The overall agreement between the experts' scores and the presented results was 82.53%. Compared with previous results, the accuracy of N1 was improved to some extent while the accuracies of the other stages were comparable, which reflected the sleep structure well. The staging algorithm proposed in this paper is transparent, and worth further investigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Wang, Chenyu; Li, Mingjie
2018-01-31
In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling, such as the mean square error (MSE) and root mean square error (RMSE), cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is used as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.
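The KDE step above can be sketched in a few lines: a Gaussian kernel density estimate of modeling residuals, with Silverman's rule for the bandwidth. This is an illustrative 1-D stand-in for the authors' 2D estimator, with synthetic residuals:

```python
import numpy as np

def gaussian_kde(samples, grid):
    """1-D Gaussian kernel density estimate with Silverman's bandwidth."""
    n = len(samples)
    h = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)   # Silverman's rule of thumb
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
residuals = rng.normal(0.0, 0.5, size=500)           # hypothetical modeling errors
grid = np.linspace(-3, 3, 601)
pdf = gaussian_kde(residuals, grid)

# A proper density estimate should integrate to about 1 over its support.
area = pdf.sum() * (grid[1] - grid[0])
print(f"integrated density: {area:.3f}")
```

In the paper's scheme, a discrepancy between this estimated PDF and a narrow target PDF (rather than a scalar MSE) drives the gradient updates of the WNN parameters.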
Kernel-Based Approximate Dynamic Programming Using Bellman Residual Elimination
2010-02-01
... framework is the ability to utilize stochastic system models, thereby allowing the system to make sound decisions even if there is randomness in the system ... approximate policy when a system model is unavailable. We present theoretical analysis of all BRE algorithms, proving convergence to the optimal policy in ... policies based on MDPs is that there may be parameters of the system model that are poorly known and/or vary with time as the system operates. System ...
Maternal and child mortality indicators across 187 countries of the world: converging or diverging.
Goli, Srinivas; Arokiasamy, Perianayagam
2014-01-01
This study reassessed the progress achieved since 1990 in maternal and child mortality indicators to test whether that progress is converging or diverging across countries worldwide. The convergence process is examined using standard parametric and non-parametric econometric models of convergence. The results of absolute convergence estimates reveal that progress in maternal and child mortality indicators diverged over the entire period 1990-2010 [maternal mortality ratio (MMR) - β = .00033, p < .574; neonatal mortality rate (NNMR) - β = .04367, p < .000; post-neonatal mortality rate (PNMR) - β = .02677, p < .000; under-five mortality rate (U5MR) - β = .00828, p < .000]. In the recent period, such divergence is replaced with convergence for MMR but persists for all the child mortality indicators. The kernel density estimates reveal a considerable reduction in the divergence of MMR for the recent period; however, the kernel density distribution plots show more than one 'peak', which indicates the emergence of convergence clubs based on mortality levels. For child mortality indicators, the kernel estimates suggest that divergence is in progress across countries worldwide but tends toward convergence for countries with low mortality levels. Mere progress in the global averages of maternal and child mortality indicators across a global cross-section of countries does not warrant convergence unless there is a considerable reduction in the variance, skewness and range of change.
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
SPHYNX: an accurate density-based SPH method for astrophysical applications
NASA Astrophysics Data System (ADS)
Cabezón, R. M.; García-Senz, D.; Figueira, J.
2017-10-01
Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovas, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX that includes many of the recent advances in the SPH technique, along with some new ones which we present here. Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code. Results: A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The ensuing code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping that often prevents the growth of hydrodynamic instabilities in regular SPH codes. Conclusions: On the whole, SPHYNX has passed the verification tests described below.
For identical particle settings and initial conditions, the results were similar to (or, in some particular cases, better than) those obtained with other SPH schemes such as GADGET-2 and PSPH, or with the recent density-independent (DISPH) and conservative reproducing kernel (CRKSPH) formulations.
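The density-based SPH estimate at the heart of such codes is simple to sketch: each particle's density is a kernel-weighted sum of neighbor masses. A 1-D toy version with the standard cubic-spline kernel (SPHYNX itself uses sinc kernels and an integral gradient estimate; particle count, spacing, and smoothing length here are arbitrary choices):

```python
import numpy as np

def cubic_spline_w(q, h):
    """Standard 1-D cubic-spline SPH kernel, normalization 2/(3h),
    compact support for q = |dx|/h < 2."""
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Uniformly spaced particles of equal mass: interior densities
# should recover the true density rho0 (kernel normalization check).
n, rho0 = 400, 1.0
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
m = rho0 * dx                      # particle mass
h = 1.2 * dx                       # smoothing length

q = np.abs(x[:, None] - x[None, :]) / h
rho = (m * cubic_spline_w(q, h)).sum(axis=1)   # rho_i = sum_j m_j W_ij

interior = rho[50:-50]             # away from the unpadded boundaries
print("max interior density error:", np.max(np.abs(interior - rho0)))
```

"Preserving kernel normalization", as claimed for SPHYNX, amounts to keeping this reconstruction accurate even when particles are strongly disordered, which is where the choice of volume element matters.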
Optimal approximation of harmonic growth clusters by orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teodorescu, Razvan
2008-01-01
Interface dynamics in two-dimensional systems with a maximal number of conservation laws gives an accurate theoretical model for many physical processes, from the hydrodynamics of immiscible, viscous flows (the zero-surface-tension limit of Hele-Shaw flows), to the granular dynamics of hard spheres, and even diffusion-limited aggregation. Although a complete solution for the continuum case exists, efficient approximations of the boundary evolution are very useful due to their practical applications. In this article, the approximation scheme based on orthogonal polynomials with a deformed Gaussian kernel is discussed, as well as relations to potential theory.
Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events
NASA Astrophysics Data System (ADS)
McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.
2015-12-01
Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions, according to the skill scores defined by Perkins et al. in 2013 for evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
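Bias correction by distribution mapping can be sketched with plain empirical quantile mapping, a simpler cousin of KDDM (which instead builds the transfer function from kernel density estimates). The synthetic "observed" and "model" series below are assumptions for illustration:

```python
import numpy as np

def quantile_map(model, obs):
    """Empirical quantile mapping: transform model values so that
    their distribution matches the observed distribution."""
    # Empirical CDF rank of each model value within the model sample.
    ranks = np.searchsorted(np.sort(model), model, side="right") / len(model)
    # Map each rank to the corresponding observed quantile.
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(4)
obs = rng.normal(10.0, 2.0, size=2000)     # "observed" daily temperatures
model = rng.normal(12.5, 3.0, size=2000)   # model output with mean/spread bias

corrected = quantile_map(model, obs)
print(f"model mean {model.mean():.2f} -> corrected mean {corrected.mean():.2f} "
      f"(obs mean {obs.mean():.2f})")
```

The tail-fitting techniques mentioned above address the weakness of this empirical approach: beyond the range of the training sample, the mapping must be extrapolated, e.g. with a generalized Pareto fit.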
Kang, Youngok; Cho, Nahye; Son, Serin
2018-01-01
The purpose of this study is to analyze how the spatiotemporal characteristics of traffic accidents involving the elderly population in Seoul are changing by time period. We applied kernel density estimation and hotspot analyses to analyze the spatial characteristics of elderly people’s traffic accidents, and the space-time cube, emerging hotspot, and space-time kernel density estimation analyses to analyze the spatiotemporal characteristics. In addition, we analyzed elderly people’s traffic accidents by dividing cases into those in which the drivers were elderly people and those in which elderly people were victims of traffic accidents, and used the traffic accidents data in Seoul for 2013 for analysis. The main findings were as follows: (1) the hotspots for elderly people’s traffic accidents differed according to whether they were drivers or victims. (2) The hourly analysis showed that the hotspots for elderly drivers’ traffic accidents are in specific areas north of the Han River during the period from morning to afternoon, whereas the hotspots for elderly victims are distributed over a wide area from daytime to evening. (3) Monthly analysis showed that the hotspots are weak during winter and summer, whereas they are strong in the hiking and climbing areas in Seoul during spring and fall. Further, elderly victims’ hotspots are more sporadic than elderly drivers’ hotspots. (4) The analysis for the entire period of 2013 indicates that traffic accidents involving elderly people are increasing in specific areas on the north side of the Han River. We expect the results of this study to aid in reducing the number of traffic accidents involving elderly people in the future. PMID:29768453
Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code
NASA Astrophysics Data System (ADS)
Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo
2016-10-01
FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. It is based on a meshless formulation in configuration space with adapted cloud of computational points (CCP) capability and using the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution to the full wave equations in the cold-plasma model in tokamak geometry for ECRH and ICRH range of frequencies, and 6) the solution to the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on adaptive CCP to solve the wave equations and on implementing the non-local hot plasma dielectric response to the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.
Speeding Up the Bilateral Filter: A Joint Acceleration Way.
Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng
2016-06-01
Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy or efficiency problems, previous algorithm designers typically adopted only one of them when assembling fast implementations, because combining them is difficult. Hence, no joint exploitation of these techniques had been proposed to construct a new cutting-edge implementation that overcomes these problems. Jointly employing five techniques, kernel truncation and best N-term approximation as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and can therefore draw upon their combined strengths to overcome their individual deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing running-time efficiency.
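The brute-force baseline that this work accelerates can be sketched directly from the filter's definition: each output pixel is a weighted average of its neighbours, with a fixed spatial Gaussian weight and a range weight that depends on intensity difference. This is a minimal illustration, not the authors' linear-time method; the test image and parameters are made up.

```python
import numpy as np

def bilateral_filter(img, sigma_s, sigma_r, radius):
    """Brute-force bilateral filter on a 2D image."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # fixed spatial kernel
    pad = np.pad(img.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range weight: penalise intensity difference to the centre pixel.
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w_tot = spatial * range_w
            out[i, j] = (w_tot * patch).sum() / w_tot.sum()
    return out

# A noisy step edge: the filter should smooth the flat regions
# while keeping the discontinuity sharp.
rng = np.random.default_rng(1)
step = np.concatenate([np.zeros((8, 8)), np.ones((8, 8))], axis=1)
noisy = step + 0.05 * rng.standard_normal(step.shape)
smoothed = bilateral_filter(noisy, sigma_s=2.0, sigma_r=0.2, radius=3)
```

The double loop makes the cost grow with the kernel radius, which is exactly the dependence the paper's 3D box-filter decomposition removes.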
Zhang, Duan Z.; Padrino, Juan C.
2017-06-01
The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is xt^(-1/4) rather than xt^(-1/2) as in the traditional theory. We found this early-time similarity can be explained by random walk theory through the network.
Smooth time-dependent receiver operating characteristic curve estimators.
Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos
2018-03-01
The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem for developing appropriate estimators is the estimation of the joint distribution of the variables time-to-event and marker. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real applications are considered. An R package is also provided as a complement to this article.
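The cumulative/dynamic construction can be sketched as follows. This sketch ignores censoring and uses no kernel smoothing, so it is only the naive empirical estimator that the smooth, censoring-aware estimators of the article refine; the marker values and event times are hypothetical.

```python
import numpy as np

def cumulative_dynamic_roc(marker, event_time, t):
    """Empirical cumulative/dynamic ROC at horizon t, ignoring censoring:
    cases are subjects with an event by time t, controls the rest.
    Returns (fpr, tpr) over all marker cutoffs and the trapezoidal AUC."""
    case = event_time <= t
    cutoffs = np.sort(np.unique(marker))[::-1]           # high to low
    tpr = np.array([(marker[case] >= c).mean() for c in cutoffs])
    fpr = np.array([(marker[~case] >= c).mean() for c in cutoffs])
    fpr = np.concatenate([[0.0], fpr, [1.0]])
    tpr = np.concatenate([[0.0], tpr, [1.0]])
    auc = (((tpr[1:] + tpr[:-1]) / 2) * np.diff(fpr)).sum()  # trapezoidal rule
    return fpr, tpr, auc

# Hypothetical marker that perfectly orders the event times:
event_time = np.arange(1.0, 11.0)        # subjects' event times
marker = -event_time                      # higher marker -> earlier event
fpr, tpr, auc = cumulative_dynamic_roc(marker, event_time, t=5.0)
```

A perfectly ordering marker yields an AUC of 1 at every horizon; the smooth estimators replace the indicator counts above with kernel-weighted, censoring-adjusted quantities.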
NASA Astrophysics Data System (ADS)
Kita, Y.; Waseda, T.
2016-12-01
Explosive cyclones (EXPCs) were investigated in three recent reanalyses. Tracking methods differ among researchers, as do the reanalysis datasets they use. Reanalysis data are essential as initial conditions for implementing high-accuracy downscaling simulations. In this study, the characteristics of EXPCs in three recent reanalyses were investigated from several perspectives: track densities, minimum MSLP (Mean Sea Level Pressure), and radius. Extratropical cyclones (ECs) were tracked by following local minima of MSLP. The domain is limited to East Asia and the North Pacific Ocean (lat 20°:70°, lon 100°:200°), and the target period is 2000-2014. Fig. 1 shows that the frequencies of EXPCs, defined as ECs whose MSLP drops by more than 12 hPa in 12 hours, differ greatly among the reanalyses; the extracted EXPCs are those whose most rapidly deepening phases were located around Japan (lat 20°:60°, lon 110°:160°). In addition, they are dissimilar to those in a previous EXPC database (Kawamura et al.) and to the results of weather-map analyses. The differences in frequency might be caused by the MSLP at the cyclone centers, where small gaps of a few hPa sometimes occurred. The minimum MSLP and effective radius were also investigated, but the distributions of EXPC effective radii did not show significant differences (Fig. 2). Thus, the gaps in central MSLP are what matter for the differences in their trends. To evaluate the path density of EXPCs, two-dimensional kernel density estimation was conducted. The kernel densities of EXPC tracks in the three reanalyses appear similar: they are concentrated over the ocean (not shown). Two-dimensional kernel densities of the EXPCs' most rapidly deepening points accumulated over the Sea of Japan and the Kuroshio and its Extension. Therefore, there are considerable differences in the numbers of EXPCs depending on the reanalysis, while the general characteristics of EXPCs differ little.
Careful attention should therefore be paid when researchers investigate an individual EXPC with reanalysis data.
Pearce, Jamie; Rind, Esther; Shortt, Niamh; Tisch, Catherine; Mitchell, Richard
2016-02-01
Many neighborhood characteristics may constrain or enable smoking. This study investigated whether the neighborhood tobacco retail environment was associated with individual-level smoking and cessation in Scottish adults, and whether inequalities in smoking status were related to tobacco retailing. Tobacco outlet density measures were developed for neighborhoods across Scotland using the September 2012 Scottish Tobacco Retailers Register. The outlet data were cleaned and geocoded (n = 10,161) using a Geographic Information System. Kernel density estimation was used to calculate an outlet density measure for each postcode. The kernel density estimation measures were then appended to data on individuals included in the 2008-2011 Scottish Health Surveys (n = 28,751 adults aged ≥16), via their postcode. Two-level logistic regression models examined whether neighborhood density of tobacco retailing was associated with current smoking status and smoking cessation and whether there were differences in the relationship between household income and smoking status, by tobacco outlet density. After adjustment for individual- and area-level confounders, compared to residents of areas with the lowest outlet densities, those living in areas with the highest outlet densities had a 6% higher chance of being a current smoker, and a 5% lower chance of being an ex-smoker. There was little evidence to suggest that inequalities in either current smoking or cessation were narrower in areas with lower availability of tobacco retailing. The findings suggest that residents of environments with a greater availability of tobacco outlets are more likely to start and/or sustain smoking, and less likely to quit. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved.
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of the widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
Segmentation of the Speaker's Face Region with Audiovisual Correlation
NASA Astrophysics Data System (ADS)
Liu, Yuyu; Sato, Yoichi
The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
Two-Dimensional Ffowcs Williams/Hawkings Equation Solver
NASA Technical Reports Server (NTRS)
Lockard, David P.
2005-01-01
FWH2D is a Fortran 90 computer program that solves a two-dimensional (2D) version of the equation, derived by J. E. Ffowcs Williams and D. L. Hawkings, for sound generated by turbulent flow. FWH2D was developed especially for estimating noise generated by airflows around such approximately 2D airframe components as slats. The user provides input data on fluctuations of pressure, density, and velocity on some surface. These data are combined with information about the geometry of the surface to calculate histories of thickness and loading terms. These histories are fast-Fourier-transformed into the frequency domain. For each frequency of interest and each observer position specified by the user, kernel functions are integrated over the surface by use of the trapezoidal rule to calculate a pressure signal. The resulting frequency-domain signals are inverse-fast-Fourier-transformed back into the time domain. The output of the code consists of the time- and frequency-domain representations of the pressure signals at the observer positions. Because of its approximate nature, FWH2D overpredicts the noise from a finite-length (3D) component. The advantage of FWH2D is that it requires a fraction of the computation time of a 3D Ffowcs Williams/Hawkings solver.
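The FFT → per-frequency surface integration → inverse-FFT pipeline described above can be sketched as follows. The kernel below is a placeholder stand-in, not the actual 2D Ffowcs Williams/Hawkings kernel, and the surface pressure histories are synthetic; only the overall data flow mirrors the description.

```python
import numpy as np

# Hypothetical surface pressure histories: n_t time samples at n_s surface points.
n_t, n_s = 64, 16
t = np.linspace(0.0, 1.0, n_t, endpoint=False)
s = np.linspace(0.0, 1.0, n_s)                     # arc length along the surface
p_surf = np.sin(2 * np.pi * 4 * t)[:, None] * np.cos(np.pi * s)[None, :]

# 1) Fast-Fourier-transform each surface point's history into the frequency domain.
P = np.fft.rfft(p_surf, axis=0)                    # shape (n_t//2 + 1, n_s)

# 2) For each frequency, integrate kernel * data over the surface with the
#    trapezoidal rule (placeholder kernel K(s), not the FW-H Green's function).
kernel = np.exp(-s)
integrand = kernel[None, :] * P
ds = s[1] - s[0]
P_obs = ((integrand[:, :-1] + integrand[:, 1:]) / 2 * ds).sum(axis=1)

# 3) Inverse-FFT back to the time-domain pressure signal at the observer.
p_obs = np.fft.irfft(P_obs, n=n_t)
```

In FWH2D the kernel additionally depends on frequency and on the observer position, so step 2 is repeated for every observer of interest.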
NASA Technical Reports Server (NTRS)
Milman, M. H.
1985-01-01
A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.
NASA Astrophysics Data System (ADS)
Suwari, Kotta, Herry Z.; Buang, Yohanes
2017-12-01
The Soxhlet extraction of oil from the seed kernel of Feun Kase (Thevetia peruviana) for biodiesel production was optimized in this study. The solvents used were petroleum ether and methanol, as well as their combinations. The effects of three factors, namely solvent combination (polarity), extraction time, and extraction temperature, were investigated to achieve maximum oil yield. Each experiment was conducted in a 250 mL Soxhlet apparatus. The physicochemical properties of the oil yield (density, kinematic viscosity, acid value, iodine value, saponification value, and water content) were also analyzed. The optimum conditions were an extraction time of 4.5 h, an extraction temperature of 65 °C, and a petroleum ether to methanol ratio of 90:10 (polarity index 0.6). The oil yield was found to be 51.88 ± 3.18%. These results reveal that the crop oil from the seed kernel of Feun Kase (Thevetia peruviana) is a potential feedstock for biodiesel production.
The Feasibility of Palm Kernel Shell as a Replacement for Coarse Aggregate in Lightweight Concrete
NASA Astrophysics Data System (ADS)
Itam, Zarina; Beddu, Salmia; Liyana Mohd Kamal, Nur; Ashraful Alam, Md; Issa Ayash, Usama
2016-03-01
Implementing sustainable materials in the construction industry is fast becoming a trend. Palm Kernel Shell (PKS) is a by-product of Malaysia's palm oil industry, which generates as much as 4 million tons of this waste per annum. As a means of producing a sustainable, environmentally friendly, and affordable alternative for the lightweight concrete industry, the potential of Palm Kernel Shell as an aggregate replacement was explored, which may have a positive impact on the Malaysian construction industry as well as on concrete usage worldwide. This research investigates the feasibility of PKS as an aggregate replacement in lightweight concrete in terms of compressive strength, slump test, water absorption, and density. Results indicate that using PKS as aggregate replacement increases water absorption but decreases concrete workability and strength. The results, however, fall within the range acceptable for lightweight aggregates; hence, it can be concluded that there is potential to use PKS as an aggregate replacement in lightweight concrete.
NASA Astrophysics Data System (ADS)
Hung, L.; Guedj, C.; Bernier, N.; Blaise, P.; Olevano, V.; Sottile, F.
2016-04-01
We present the valence electron energy-loss spectrum and the dielectric function of monoclinic hafnia (m-HfO2) obtained from time-dependent density-functional theory (TDDFT) predictions and compared to energy-filtered spectroscopic imaging measurements in a high-resolution transmission-electron microscope. Fermi's golden rule density-functional theory (DFT) calculations can capture the qualitative features of the energy-loss spectrum, but we find that TDDFT, which accounts for local-field effects, provides nearly quantitative agreement with experiment. Using the DFT density of states and TDDFT dielectric functions, we characterize the excitations that result in the m-HfO2 energy-loss spectrum. The sole plasmon occurs between 13 and 16 eV, although the peaks at ~28 eV and above 40 eV are also due to collective excitations. We furthermore elaborate on the first-principles techniques used, their accuracy, and remaining discrepancies among spectra. More specifically, we assess the influence of Hf semicore electrons (5p and 4f) on the energy-loss spectrum, and find that the inclusion of transitions from the 4f band damps the energy-loss intensity in the region above 13 eV. We study the impact of many-body effects in a DFT framework using the adiabatic local-density approximation (ALDA) exchange-correlation kernel, as well as from a many-body perspective using "scissors operators" matched to an ab initio GW calculation to account for self-energy corrections. These results demonstrate some cancellation of errors between self-energy and excitonic effects, even for excitations from the Hf 4f shell. We also simulate the dispersion with increasing momentum transfer for plasmon and collective excitation peaks.
NASA Astrophysics Data System (ADS)
Xie, Yiting; Salvatore, Mary; Liu, Shuang; Jirapatnakul, Artit; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2017-03-01
A fully automated computer algorithm has been developed to identify early-stage Usual Interstitial Pneumonia (UIP) using features computed from low-dose CT scans. In each scan, the pre-segmented lung region is divided into N subsections (N = 1, 8, 27, 64) by separating the lung along the anterior/posterior, left/right, and superior/inferior directions in 3D space. Each subsection has approximately the same volume. In each subsection, a classic density measurement (fractional high-density volume h) is evaluated to characterize the disease severity in that subsection, resulting in a feature vector of length N for each lung. Features are then combined in two different ways: concatenation (2*N features) and taking the maximum over each of the two corresponding subsections in the two lungs (N features). The algorithm was evaluated on a dataset consisting of 51 UIP and 56 normal cases; a combined feature vector was computed for each case and an SVM classifier (RBF kernel) was used to classify cases into UIP or normal using ten-fold cross validation. The receiver operating characteristic (ROC) area under the curve (AUC) was used for evaluation. The highest AUC of 0.95 was achieved by using concatenated features and an N of 27. Using lung partitions (N = 27, 64) with concatenated features gave significantly better results than not using partitions (N = 1) (p-value < 0.05). Therefore, this equal-volume-partition fractional high-density volume method is useful in distinguishing early-stage UIP from normal cases.
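The equal-volume partition and the fractional high-density volume feature described above can be sketched as follows. This shows only the feature computation on a hypothetical volume with a made-up density threshold; the subsequent RBF-kernel SVM classification step is not shown.

```python
import numpy as np

def partition_features(lung, n_side, threshold):
    """Split a 3D lung volume into n_side**3 roughly equal sub-blocks and
    return, for each block, the fraction of voxels above a density
    threshold (the 'fractional high-density volume' h)."""
    blocks = []
    edges = [np.array_split(np.arange(dim), n_side) for dim in lung.shape]
    for zi in edges[0]:            # superior/inferior splits
        for yi in edges[1]:        # anterior/posterior splits
            for xi in edges[2]:    # left/right splits
                sub = lung[np.ix_(zi, yi, xi)]
                blocks.append((sub > threshold).mean())
    return np.array(blocks)

# Hypothetical lung volume: mostly low density, one dense corner region.
vol = np.zeros((30, 30, 30))
vol[:10, :10, :10] = 1.0
features = partition_features(vol, n_side=3, threshold=0.5)
```

For N = 27 as above, each lung contributes a 27-vector; concatenating the two lungs' vectors gives the 2*N feature representation fed to the classifier.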
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Feature-based Approach to Big Data Analysis of Medical Images
Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.
2015-01-01
This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
Verheijen, Lieneke M; Aerts, Rien; Bönisch, Gerhard; Kattge, Jens; Van Bodegom, Peter M
2016-01-01
Plant functional types (PFTs) aggregate the variety of plant species into a small number of functionally different classes. We examined to what extent plant traits, which reflect species' functional adaptations, can capture functional differences between predefined PFTs and which traits optimally describe these differences. We applied Gaussian kernel density estimation to determine probability density functions for individual PFTs in an n-dimensional trait space and compared predicted PFTs with observed PFTs. All possible combinations of 1-6 traits from a database with 18 different traits (total of 18 287 species) were tested. A variety of trait sets had approximately similar performance, and 4-5 traits were sufficient to classify up to 85% of the species into PFTs correctly, whereas this was 80% for a bioclimatically defined tree PFT classification. Well-performing trait sets included combinations of correlated traits that are considered functionally redundant within a single plant strategy. This analysis quantitatively demonstrates how structural differences between PFTs are reflected in functional differences described by particular traits. Differentiation between PFTs is possible despite large overlap in plant strategies and traits, showing that PFTs are differently positioned in multidimensional trait space. This study therefore provides the foundation for important applications for predictive ecology. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
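Per-class Gaussian kernel density estimation followed by maximum-density assignment, the core of the classification described above, can be sketched as follows. The two "PFTs" and their trait values here are synthetic stand-ins, and a single fixed bandwidth is an assumption; the study works in up to six trait dimensions with real species data.

```python
import numpy as np

def kde_logdensity(x, train, bandwidth):
    """Log of a Gaussian KDE fitted to `train`, evaluated at rows of `x`."""
    d2 = ((x[:, None, :] - train[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-d2 / (2 * bandwidth ** 2))
    return np.log(k.mean(axis=1) + 1e-300)   # guard against log(0)

def classify_by_kde(x, class_data, bandwidth):
    """Assign each row of `x` to the class whose KDE gives it the highest
    density (a sketch of PFT assignment in trait space)."""
    names = sorted(class_data)
    scores = np.stack([kde_logdensity(x, class_data[c], bandwidth) for c in names])
    return [names[i] for i in scores.argmax(axis=0)]

# Hypothetical 2-trait data for two plant functional types.
rng = np.random.default_rng(2)
pfts = {
    "tree":  rng.normal([0.0, 0.0], 0.5, size=(100, 2)),
    "grass": rng.normal([3.0, 3.0], 0.5, size=(100, 2)),
}
query = np.array([[0.1, -0.2], [2.9, 3.1]])
labels = classify_by_kde(query, pfts, bandwidth=0.5)
```

Comparing predicted labels against the predefined PFT membership over all species yields the classification accuracy reported for each candidate trait set.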
Fast Query-Optimized Kernel-Machine Classification
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; DeCoste, Dennis
2004-01-01
A recently developed algorithm performs kernel-machine classification via incremental approximate nearest support vectors. The algorithm implements support-vector machines (SVMs) at speeds 10 to 100 times those attainable by use of conventional SVM algorithms. The algorithm offers potential benefits for classification of images, recognition of speech, recognition of handwriting, and diverse other applications in which there are requirements to discern patterns in large sets of data. SVMs constitute a subset of kernel machines (KMs), which have become popular as models for machine learning and, more specifically, for automated classification of input data on the basis of labeled training data. While similar in many ways to k-nearest-neighbors (k-NN) models and artificial neural networks (ANNs), SVMs tend to be more accurate. Using representations that scale only linearly in the numbers of training examples, while exploring nonlinear (kernelized) feature spaces that are exponentially larger than the original input dimensionality, KMs elegantly and practically overcome the classic curse of dimensionality. However, the price that one must pay for the power of KMs is that query-time complexity scales linearly with the number of training examples, making KMs often orders of magnitude more computationally expensive than ANNs, decision trees, and other popular machine learning alternatives. The present algorithm treats an SVM classifier as a special form of a k-NN. The algorithm is based partly on an empirical observation that one can often achieve the same classification as that of an exact KM by using only a small fraction of the nearest support vectors (SVs) of a query. The exact KM output is a weighted sum over the kernel values between the query and the SVs. In this algorithm, the KM output is approximated with a k-NN classifier, the output of which is a weighted sum only over the kernel values involving k selected SVs.
Before query time, statistics are gathered about how misleading the output of the k-NN model can be, relative to the outputs of the exact KM, for a representative set of examples and for each possible k from 1 to the total number of SVs. From these statistics, upper and lower thresholds are derived for each step k. These thresholds identify output levels at which the particular variant of the k-NN model already leans so strongly positively or negatively that a reversal in sign is unlikely, given the weaker SV neighbors still remaining. At query time, the partial output of each query is incrementally updated, stopping as soon as it exceeds the predetermined statistical thresholds of the current step. For an easy query, stopping can occur as early as step k = 1. For more difficult queries, stopping might not occur until nearly all SVs are touched. A key empirical observation is that this approach can tolerate very approximate nearest-neighbor orderings. In experiments, SVs and queries were projected to a subspace comprising the top few principal-component dimensions, and neighbor orderings were computed in that subspace. This approach ensured that the overhead of the nearest-neighbor computations was insignificant relative to that of the exact KM computation.
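The early-stopping evaluation described above can be sketched as follows. The per-step thresholds here are supplied by hand rather than derived from held-out statistics as in the algorithm, and the support vectors, weights, and RBF kernel are a toy example.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Radial-basis-function kernel between two vectors."""
    return np.exp(-gamma * ((a - b) ** 2).sum())

def early_stop_svm(query, svs, alphas, upper, lower):
    """Evaluate an SVM output as a nearest-first running sum over support
    vectors, stopping once the partial sum clears the per-step threshold.
    upper[k]/lower[k] bound the safe early decision at step k."""
    order = np.argsort(((svs - query) ** 2).sum(axis=1))   # approx. nearest first
    partial = 0.0
    for step, i in enumerate(order):
        partial += alphas[i] * rbf(query, svs[i])
        if partial > upper[step] or partial < lower[step]:
            return np.sign(partial), step + 1               # early decision
    return np.sign(partial), len(order)                     # exact sum reached

# Toy model: positive SVs sit right next to the query, so one term suffices.
svs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [6.0, 5.0]])
alphas = np.array([1.0, 1.0, -1.0, -1.0])
q = np.array([0.05, 0.0])
n = len(svs)
label, steps = early_stop_svm(q, svs, alphas, upper=[0.5] * n, lower=[-0.5] * n)
```

The far-away negative SVs contribute almost nothing to the kernel sum, so the partial output crosses the first-step threshold immediately and the remaining SVs are never touched.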
Pošćić, Filip; Mattiello, Alessandro; Fellet, Guido; Miceli, Fabiano; Marchiol, Luca
2016-01-01
The implications of metal nanoparticles (MeNPs) are still unknown for many food crops. The purpose of this study was to evaluate the effects of cerium oxide (nCeO2) and titanium oxide (nTiO2) nanoparticles in soil at 0, 500 and 1000 mg·kg−1 on the nutritional parameters of barley (Hordeum vulgare L.) kernels. Mineral nutrients, amylose, β-glucans, amino acid and crude protein (CP) concentrations were measured in kernels. Whole flour samples were analyzed by ICP-AES/MS, HPLC and Elemental CHNS Analyzer. Results showed that Ce and Ti accumulation under MeNPs treatments did not differ from the control treatment. However, nCeO2 and nTiO2 had an impact on the composition and nutritional quality of barley kernels in contrasting ways. Both MeNPs left β-glucans unaffected but reduced amylose content by approximately 21%. Most amino acids and CP increased. Among amino acids, lysine, followed by proline, saw the largest increase (51% and 37%, respectively). Potassium and S were both negatively impacted by MeNPs, while B was only affected by 500 mg nCeO2·kg−1. In contrast, Zn and Mn concentrations were improved by 500 mg nTiO2·kg−1, and Ca by both nTiO2 treatments. Generally, our findings demonstrated that kernels are negatively affected by nCeO2 while nTiO2 can potentially have beneficial effects. However, both MeNPs have the potential to negatively impact malt and feed production. PMID:27294945
Geoboard and Balance Activities for the Gifted Child.
ERIC Educational Resources Information Center
Bondy, Kay W.
1979-01-01
The author describes mathematics activities for gifted children which make use of the geoboard and balance. The problem, solutions, and theoretical backing are provided for determining areas of squares, areas of irregular shapes, the weight of popped and unpopped popcorn, kernels, and liquid mass and density. (SBH)
Estimating peer density effects on oral health for community-based older adults.
Chakraborty, Bibhas; Widener, Michael J; Mirzaei Salehabadi, Sedigheh; Northridge, Mary E; Kum, Susan S; Jin, Zhu; Kunzel, Carol; Palmer, Harvey D; Metcalf, Sara S
2017-12-29
As part of a long-standing line of research regarding how peer density affects health, researchers have sought to understand the multifaceted ways that the density of contemporaries living and interacting in proximity to one another influence social networks and knowledge diffusion, and subsequently health and well-being. This study examined peer density effects on oral health for racial/ethnic minority older adults living in northern Manhattan and the Bronx, New York, NY. Peer age-group density was estimated by smoothing US Census data with four kernel bandwidths ranging from 0.25 to 1.50 miles. Logistic regression models were developed using these spatial measures and data from the ElderSmile oral and general health screening program that serves predominantly racial/ethnic minority older adults at community centers in northern Manhattan and the Bronx. The oral health outcomes modeled as dependent variables were ordinal dentition status and binary self-rated oral health. After construction of kernel density surfaces and multiple imputation of missing data, logistic regression analyses were performed to estimate the effects of peer density and other sociodemographic characteristics on the oral health outcomes of dentition status and self-rated oral health. Overall, higher peer density was associated with better oral health for older adults when estimated using smaller bandwidths (0.25 and 0.50 mile). That is, statistically significant relationships (p < 0.01) between peer density and improved dentition status were found when peer density was measured assuming a more local social network. As with dentition status, a positive significant association was found between peer density and fair or better self-rated oral health when peer density was measured assuming a more local social network. This study provides novel evidence that the oral health of community-based older adults is affected by peer density in an urban environment.
To the extent that peer density signifies the potential for social interaction and support, the positive significant effects of peer density on improved oral health point to the importance of place in promoting social interaction as a component of healthy aging. Proximity to peers and their knowledge of local resources may facilitate utilization of community-based oral health care.
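The kernel-smoothing step described above can be sketched in a few lines. The synthetic peer locations, the two evaluation sites, and the 2-D Gaussian kernel below are illustrative assumptions, not the study's actual Census data or GIS software:

```python
import numpy as np

def kernel_density(points, sites, bandwidth):
    # 2-D Gaussian kernel density estimate (per square mile) at each site
    d2 = ((sites[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2)).sum(1) / (2 * np.pi * bandwidth**2 * len(points))

rng = np.random.default_rng(1)
peers = rng.uniform(0.0, 5.0, size=(200, 2))      # synthetic peer locations (miles)
sites = np.array([[2.5, 2.5], [0.0, 0.0]])        # a central site and a corner site
densities = {bw: kernel_density(peers, sites, bw) for bw in (0.25, 0.50, 1.50)}
for bw, d in densities.items():
    print(bw, d)    # smaller bandwidths give a more local, more variable surface
```

Evaluating the surface at every Census-tract centroid instead of two sites would give the kind of density map used as a regression covariate.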
Three-dimensional Fréchet sensitivity kernels for electromagnetic wave propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strickland, C. E.; Johnson, T. C.; Odom, R. I.
2015-08-28
Electromagnetic imaging methods are useful tools for monitoring subsurface changes in pore-fluid content and the associated changes in electrical permittivity and conductivity. The most common method for georadar tomography uses a high frequency ray-theoretic approximation that is valid when material variations are sufficiently small relative to the wavelength of the propagating wave. Georadar methods, however, often utilize electromagnetic waves that propagate within heterogeneous media at frequencies where ray theory may not be applicable. In this paper we describe the 3-D Fréchet sensitivity kernels for EM wave propagation. Various data functional types are formulated that consider all three components of the electric wavefield and incorporate near-, intermediate-, and far-field contributions. We show that EM waves exhibit substantial variations for different relative source-receiver component orientations. The 3-D sensitivities also illustrate out-of-plane effects that are not captured in 2-D sensitivity kernels and can influence results obtained using 2-D inversion methods to image structures that are in reality 3-D.
Zhao, Zhibiao
2011-06-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
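As a toy illustration of estimating the transition density of the observable series with kernels (a simple Nadaraya-Watson style estimator, not the paper's confidence-envelope construction), consider a simulated AR(1) chain; the model, bandwidth, and evaluation grid below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate the observable series of a simple model: an AR(1) chain whose true
# Markov transition density is Normal(0.5 * x, 1) (an assumption for the sketch).
n = 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()

def trans_density(xq, yq, series, h=0.3):
    # kernel estimate of the transition density p(y | x) at (xq, yq)
    K = lambda u: np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))
    w = K(xq - series[:-1])                 # weight transitions leaving states near xq
    return float(np.dot(w, K(yq - series[1:])) / w.sum())

ygrid = np.linspace(-6.0, 6.0, 601)
est = np.array([trans_density(0.0, y, x) for y in ygrid])
mass = est.sum() * (ygrid[1] - ygrid[0])
print(mass)                                 # ≈ 1: the estimate is a proper density in y
```

A specification test in the spirit of the abstract would compare a band around this estimate against a parametric candidate density.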
Li, Faji; Wen, Weie; He, Zhonghu; Liu, Jindong; Jin, Hui; Cao, Shuanghe; Geng, Hongwei; Yan, Jun; Zhang, Pingzhi; Wan, Yingxiu; Xia, Xianchun
2018-06-01
We identified 21 new and stable QTL, and 11 QTL clusters for yield-related traits in three bread wheat populations using the wheat 90 K SNP assay. Identification of quantitative trait loci (QTL) for yield-related traits and closely linked molecular markers is important in order to identify gene/QTL for marker-assisted selection (MAS) in wheat breeding. The objectives of the present study were to identify QTL for yield-related traits and dissect the relationships among different traits in three wheat recombinant inbred line (RIL) populations derived from crosses Doumai × Shi 4185 (D × S), Gaocheng 8901 × Zhoumai 16 (G × Z) and Linmai 2 × Zhong 892 (L × Z). Using the available high-density linkage maps previously constructed with the wheat 90 K iSelect single nucleotide polymorphism (SNP) array, 65, 46 and 53 QTL for 12 traits were identified in the three RIL populations, respectively. Among them, 34, 23 and 27 were likely to be new QTL. Eighteen common QTL were detected across two or three populations. Eleven QTL clusters harboring multiple QTL were detected in different populations, and the interval 15.5-32.3 cM around the Rht-B1 locus on chromosome 4BS harboring 20 QTL is an important region determining grain yield (GY). Thousand-kernel weight (TKW) is significantly affected by kernel width and plant height (PH), whereas flag leaf width can be used to select lines with large kernel number per spike. Eleven candidate genes were identified, including eight cloned genes for kernel, heading date (HD) and PH-related traits as well as predicted genes for TKW, spike length and HD. The closest SNP markers of stable QTL or QTL clusters can be used for MAS in wheat breeding using kompetitive allele-specific PCR or semi-thermal asymmetric reverse PCR assays for improvement of GY.
Performance of fly ash based geopolymer incorporating palm kernel shell for lightweight concrete
NASA Astrophysics Data System (ADS)
Razak, Rafiza Abd; Abdullah, Mohd Mustafa Al Bakri; Yahya, Zarina; Jian, Ang Zhi; Nasri, Armia
2017-09-01
A concrete in which cement is totally replaced by a source material such as fly ash and activated by a highly alkaline solution is known as geopolymer concrete. Fly ash is the most common source material for geopolymer because it is a by-product material and can be obtained easily all around the world. An investigation was carried out to select the most suitable ingredients of geopolymer concrete so that the geopolymer concrete can achieve the desired compressive strength. Samples were prepared to determine the suitable percentage of palm kernel shell used in geopolymer concrete and cured for 7 days in an oven. After that, further samples were prepared using the suitable percentage of palm kernel shell and cured for 3, 14, 21 and 28 days in an oven. A control sample consisting of ordinary Portland cement and palm kernel shell, cured for 28 days, was prepared as well. The NaOH concentration of 12 M, Na2SiO3 to NaOH ratio of 2.5, fly ash to alkaline activator solution ratio of 2.0 and water to geopolymer ratio of 0.35 were fixed throughout the research. The samples had a density of 1.78 kg/m3, water absorption of 20.41% and compressive strength of 14.20 MPa. The compressive strength of the geopolymer concrete is still acceptable for lightweight concrete although it is lower than that of OPC concrete. Therefore, the proposed method of using fly ash mixed with 10% palm kernel shell can be used to design geopolymer concrete.
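The fixed ratios quoted above determine the batch quantities once a fly ash mass is chosen. The 1 kg fly ash basis below, and reading the ratios as mass ratios, are assumptions for illustration:

```python
# Illustrative batch calculation from the fixed ratios in the study; the 1 kg
# fly ash basis and mass-ratio interpretation are assumptions, not the paper's.
fly_ash = 1.0                                  # kg, chosen basis
activator = fly_ash / 2.0                      # fly ash : activator solution = 2.0
naoh = activator / (1 + 2.5)                   # Na2SiO3 : NaOH = 2.5
na2sio3 = activator - naoh
naoh_molar_mass = 40.0                         # g/mol
naoh_solids_per_litre = 12 * naoh_molar_mass   # 12 M solution -> 480 g NaOH per litre
print(round(naoh, 4), round(na2sio3, 4), naoh_solids_per_litre)  # → 0.1429 0.3571 480.0
```

Scaling the basis mass scales every component proportionally, which is how a trial batch would be sized.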
Javanrouh, Niloufar; Daneshpour, Maryam S; Soltanian, Ali Reza; Tapak, Leili
2018-06-05
Obesity is a serious health problem that leads to low quality of life and early mortality. For the purposes of prevention and gene therapy for such a worldwide disease, genome-wide association study is a powerful tool for finding SNPs associated with increased risk of obesity. For conducting an association analysis, kernel machine regression, a generalized regression method, has the advantage of considering epistasis effects as well as the correlation between individuals due to unknown factors. In this study, information from people who participated in the Tehran cardio-metabolic genetic study was used. They were genotyped for the chromosomal region, evaluating 986 variants located at 16q12.2 (build 38hg). Kernel machine regression and single-SNP analysis were used to assess the association between obesity and the SNP genotype data. We found that SNP sets associated with obesity were almost all in the FTO (P = 0.01), AIKTIP (P = 0.02) and MMP2 (P = 0.02) genes. Moreover, two SNPs, i.e., rs10521296 and rs11647470, showed significant association with obesity using kernel regression (P = 0.02). In conclusion, significant sets were randomly distributed throughout the region with more density around the FTO, AIKTIP and MMP2 genes. Furthermore, two intergenic SNPs showed significant association after using kernel machine regression. Therefore, more studies have to be conducted to assess their functionality or precise mechanism. Copyright © 2018 Elsevier B.V. All rights reserved.
Fast metabolite identification with Input Output Kernel Regression.
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-06-15
An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to a vector output space and can handle a structured output space such as the molecule space. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Contact: celine.brouard@aalto.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
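The two-phase scheme (kernel ridge regression into the output feature space, then a preimage search over candidates) can be sketched on toy data. The Gaussian input kernel, the linear fingerprint output kernel, and the clustered toy "spectra" below are illustrative assumptions, not the paper's actual kernels or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    # Gaussian input kernel between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy "spectra": 2-D points in three clusters; toy "fingerprints": one 3-bit
# prototype per cluster (the output kernel is linear on fingerprints).
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
prototypes = np.eye(3)
labels = rng.integers(0, 3, size=60)
X = centers[labels] + 0.3 * rng.standard_normal((60, 2))
Y = prototypes[labels]

# Phase 1: kernel ridge regression from the input kernel to the output feature space.
lam = 1e-2
Kx = rbf(X, X)
x_test = np.array([[0.1, -0.2]])            # a test "spectrum" near cluster 0
w = np.linalg.solve(Kx + lam * np.eye(len(X)), rbf(X, x_test)[:, 0])

# Phase 2: preimage step -- score each candidate fingerprint by its output-kernel
# similarity to the predicted output feature vector, and keep the best one.
scores = prototypes @ (Y.T @ w)
print(int(np.argmax(scores)))               # → 0, the cluster-0 fingerprint
```

In the real setting the candidate set is a molecular structure database rather than three prototypes, but the scoring step has the same form.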
Rapid scatter estimation for CBCT using the Boltzmann transport equation
NASA Astrophysics Data System (ADS)
Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh
2014-03-01
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
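The iterative-deconvolution idea behind SKS can be illustrated in 1-D: scatter is modeled as the primary signal convolved with a kernel, and the primary is re-estimated by repeatedly subtracting the re-computed scatter. The rectangular "primary", Gaussian kernel shape, and 0.3 scatter fraction below are assumptions for the sketch, not clinical kernels:

```python
import numpy as np

# Toy 1-D "projection": the measured signal is primary + scatter, where scatter
# is the primary convolved with a broad kernel (a stand-in for the SKS kernels).
primary = np.zeros(200)
primary[80:120] = 1.0
xk = np.arange(-30, 31, dtype=float)
kernel = np.exp(-0.5 * (xk / 10.0) ** 2)
kernel *= 0.3 / kernel.sum()                       # total scatter fraction of 0.3
measured = primary + np.convolve(primary, kernel, mode="same")

# Iterative deconvolution: re-estimate scatter from the current primary estimate
# and subtract it; this contracts because the kernel mass is below one.
est = measured.copy()
for _ in range(25):
    est = measured - np.convolve(est, kernel, mode="same")

print(np.max(np.abs(est - primary)))               # residual shrinks geometrically
```

The paper's refinement corresponds to perturbing the scatter estimate in this loop using reference data from the Boltzmann solver.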
Predicting receptor-ligand pairs through kernel learning
2011-01-01
Background Regulation of cellular events is often initiated via extracellular signaling. Extracellular signaling occurs when a circulating ligand interacts with one or more membrane-bound receptors. Identification of receptor-ligand pairs is thus an important and specific form of PPI prediction. Results Given a set of disparate data sources (expression data, domain content, and phylogenetic profile) we seek to predict new receptor-ligand pairs. We create a combined kernel classifier and assess its performance with respect to the Database of Ligand-Receptor Partners (DLRP) 'golden standard' as well as the method proposed by Gertz et al. Among our findings, we discover that our predictions for the tgfβ family accurately reconstruct over 76% of the supported edges (0.76 recall and 0.67 precision) of the receptor-ligand bipartite graph defined by the DLRP "golden standard". In addition, for the tgfβ family, the combined kernel classifier is able to relatively improve upon the Gertz et al. work by a factor of approximately 1.5 when considering that our method has an F-measure of 0.71 while that of Gertz et al. has a value of 0.48. Conclusions The prediction of receptor-ligand pairings is a difficult and complex task. We have demonstrated that using kernel learning on multiple data sources provides a stronger alternative to the existing method in solving this task. PMID:21834994
Fast metabolite identification with Input Output Kernel Regression
Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho
2016-01-01
Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to a vector output space and can handle a structured output space such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27306628
Leimar, Olof; Doebeli, Michael; Dieckmann, Ulf
2008-04-01
We have analyzed the evolution of a quantitative trait in populations that are spatially extended along an environmental gradient, with gene flow between nearby locations. In the absence of competition, there is stabilizing selection toward a locally best-adapted trait that changes gradually along the gradient. According to traditional ideas, gradual spatial variation in environmental conditions is expected to lead to gradual variation in the evolved trait. A contrasting possibility is that the trait distribution instead breaks up into discrete clusters. Doebeli and Dieckmann (2003) argued that competition acting locally in trait space and geographical space can promote such clustering. We have investigated this possibility using deterministic population dynamics for asexual populations, analyzing our model numerically and through an analytical approximation. We examined how the evolution of clusters is affected by the shape of competition kernels, by the presence of Allee effects, and by the strength of gene flow along the gradient. For certain parameter ranges clustering was a robust outcome, and for other ranges there was no clustering. Our analysis shows that the shape of competition kernels is important for clustering: the sign structure of the Fourier transform of a competition kernel determines whether the kernel promotes clustering. Also, we found that Allee effects promote clustering, whereas gene flow can have a counteracting influence. In line with earlier findings, we could demonstrate that phenotypic clustering was favored by gradients of intermediate slope.
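The Fourier-sign criterion stated above is easy to check numerically: a Gaussian competition kernel has a non-negative transform (no clustering), while a finite-support top-hat kernel has sign-changing lobes (clustering). The grids and kernel widths below are illustrative assumptions:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
gaussian = np.exp(-x**2 / 2)              # positive-definite: transform stays >= 0
box = (np.abs(x) <= 1.0).astype(float)    # top-hat: transform has negative lobes

def fourier_min(kernel):
    # minimum of the (real, even) Fourier transform over a frequency grid
    freqs = np.linspace(0.0, 5.0, 200)
    ft = np.array([np.sum(kernel * np.cos(2 * np.pi * f * x)) * dx for f in freqs])
    return ft.min()

print(fourier_min(gaussian), fourier_min(box))  # Gaussian stays non-negative, box goes negative
```

By the criterion in the abstract, the top-hat kernel would promote clustering along the gradient while the Gaussian would not.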
The derivation and approximation of coarse-grained dynamics from Langevin dynamics
NASA Astrophysics Data System (ADS)
Ma, Lina; Li, Xiantao; Liu, Chun
2016-11-01
We present a derivation of a coarse-grained description, in the form of a generalized Langevin equation, from the Langevin dynamics model that describes the dynamics of bio-molecules. The focus is placed on the form of the memory kernel function, the colored noise, and the second fluctuation-dissipation theorem that connects them. Also presented is a hierarchy of approximations for the memory and random noise terms, using rational approximations in the Laplace domain. These approximations offer increasing accuracy. More importantly, they eliminate the need to evaluate the integral associated with the memory term at each time step. Direct sampling of the colored noise can also be avoided within this framework. Therefore, the numerical implementation of the generalized Langevin equation is much more efficient.
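The computational point, that a rational (Laplace-domain) approximation of the memory kernel removes the per-step convolution integral, can be illustrated with the simplest case: a single-pole rational kernel, i.e. an exponential memory, for which the deterministic part of the generalized Langevin equation is equivalent to an ODE with one auxiliary variable. The kernel parameters, time step, and initial data below are assumptions for the sketch, and the noise term is omitted:

```python
import numpy as np

# v'(t) = -∫_0^t K(t-s) v(s) ds with K(t) = a * exp(-t / tau): the simplest
# single-pole rational approximation in the Laplace domain (illustrative values).
a, tau = 1.0, 1.0
dt, T = 1e-3, 2.0
n = int(T / dt)
t = np.arange(n) * dt

# Scheme 1: evaluate the memory integral directly at every step (O(n^2) work).
v_dir = np.empty(n)
v_dir[0] = 1.0
for k in range(1, n):
    z = dt * np.dot(a * np.exp(-(t[k - 1] - t[:k - 1]) / tau), v_dir[:k - 1]) if k > 1 else 0.0
    v_dir[k] = v_dir[k - 1] - dt * z

# Scheme 2: auxiliary variable z with z' = a*v - z/tau, so no convolution is needed.
v_aux = np.empty(n)
v_aux[0] = 1.0
z = 0.0
for k in range(1, n):
    v_aux[k] = v_aux[k - 1] - dt * z
    z += dt * (a * v_aux[k - 1] - z / tau)

print(np.max(np.abs(v_dir - v_aux)))        # the two schemes agree to O(dt)
```

Higher-order rational approximations add more auxiliary variables but keep the cost per step constant, which is the efficiency gain described in the abstract.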
Electron correlation in Hooke's law atom in the high-density limit.
Gill, P M W; O'Neill, D P
2005-03-01
Closed-form expressions for the first three terms in the perturbation expansion of the exact energy and Hartree-Fock energy of the lowest singlet and triplet states of the Hooke's law atom are found. These yield elementary formulas for the exact correlation energies (-49.7028 and -5.80765 mE(h)) of the two states in the high-density limit and lead to a pair of necessary conditions on the exact correlation kernel G(w) in Hartree-Fock-Wigner theory.
Space Use and Movement of a Neotropical Top Predator: The Endangered Jaguar
Stabach, Jared A.; Fleming, Chris H.; Calabrese, Justin M.; De Paula, Rogério C.; Ferraz, Kátia M. P. M.; Kantek, Daniel L. Z.; Miyazaki, Selma S.; Pereira, Thadeu D. C.; Araujo, Gediendson R.; Paviolo, Agustin; De Angelo, Carlos; Di Bitetti, Mario S.; Cruz, Paula; Lima, Fernando; Cullen, Laury; Sana, Denis A.; Ramalho, Emiliano E.; Carvalho, Marina M.; Soares, Fábio H. S.; Zimbres, Barbara; Silva, Marina X.; Moraes, Marcela D. F.; Vogliotti, Alexandre; May, Joares A.; Haberfeld, Mario; Rampim, Lilian; Sartorello, Leonardo; Ribeiro, Milton C.; Leimgruber, Peter
2016-01-01
Accurately estimating home range and understanding movement behavior can provide important information on ecological processes. Advances in data collection and analysis have improved our ability to estimate home range and movement parameters, both of which have the potential to impact species conservation. Fitting continuous-time movement models to data and incorporating the autocorrelated kernel density estimator (AKDE), we investigated range residency of forty-four jaguars fit with GPS collars across five biomes in Brazil and Argentina. We assessed home range and movement parameters of range resident animals and compared AKDE estimates with kernel density estimates (KDE). We accounted for differential space use and movement among individuals, sex, region, and habitat quality. Thirty-three (80%) of collared jaguars were range resident. Home range estimates using AKDE were 1.02 to 4.80 times larger than KDE estimates that did not consider autocorrelation. Males exhibited larger home ranges, more directional movement paths, and a trend towards larger distances traveled per day. Jaguars with the largest home ranges occupied the Atlantic Forest, a biome with high levels of deforestation and high human population density. Our results fill a gap in the knowledge of the species' ecology with an aim towards better conservation of this endangered/critically endangered carnivore, the top predator in the Neotropics. PMID:28030568
Space Use and Movement of a Neotropical Top Predator: The Endangered Jaguar.
Morato, Ronaldo G; Stabach, Jared A; Fleming, Chris H; Calabrese, Justin M; De Paula, Rogério C; Ferraz, Kátia M P M; Kantek, Daniel L Z; Miyazaki, Selma S; Pereira, Thadeu D C; Araujo, Gediendson R; Paviolo, Agustin; De Angelo, Carlos; Di Bitetti, Mario S; Cruz, Paula; Lima, Fernando; Cullen, Laury; Sana, Denis A; Ramalho, Emiliano E; Carvalho, Marina M; Soares, Fábio H S; Zimbres, Barbara; Silva, Marina X; Moraes, Marcela D F; Vogliotti, Alexandre; May, Joares A; Haberfeld, Mario; Rampim, Lilian; Sartorello, Leonardo; Ribeiro, Milton C; Leimgruber, Peter
2016-01-01
Accurately estimating home range and understanding movement behavior can provide important information on ecological processes. Advances in data collection and analysis have improved our ability to estimate home range and movement parameters, both of which have the potential to impact species conservation. Fitting continuous-time movement models to data and incorporating the autocorrelated kernel density estimator (AKDE), we investigated range residency of forty-four jaguars fit with GPS collars across five biomes in Brazil and Argentina. We assessed home range and movement parameters of range resident animals and compared AKDE estimates with kernel density estimates (KDE). We accounted for differential space use and movement among individuals, sex, region, and habitat quality. Thirty-three (80%) of collared jaguars were range resident. Home range estimates using AKDE were 1.02 to 4.80 times larger than KDE estimates that did not consider autocorrelation. Males exhibited larger home ranges, more directional movement paths, and a trend towards larger distances traveled per day. Jaguars with the largest home ranges occupied the Atlantic Forest, a biome with high levels of deforestation and high human population density. Our results fill a gap in the knowledge of the species' ecology with an aim towards better conservation of this endangered/critically endangered carnivore, the top predator in the Neotropics.
An alternative covariance estimator to investigate genetic heterogeneity in populations.
Heslot, Nicolas; Jannink, Jean-Luc
2015-11-26
For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel, to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model and parameters are estimated by REML and, in extreme cases, it can allow for an independent genetic architecture between populations. As such, K-kernel is useful to study the problem of the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases it performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. 
This alternative covariance estimator can be used to gain insight into trait-specific genetic heterogeneity by identifying relevant sub-populations that lack genetic correlation between them. By automatically selecting the relevant sets of individuals to include in the training population, the genetic correlation between identified sub-populations can be allowed to be 0. It may also increase statistical power in GWAS.
Validation of Born Traveltime Kernels
NASA Astrophysics Data System (ADS)
Baig, A. M.; Dahlen, F. A.; Hung, S.
2001-12-01
Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare with various theoretical predictions in a given regime.
Evaluation of human exposure to single electromagnetic pulses of arbitrary shape.
Jelínek, Lukás; Pekárek, Ludĕk
2006-03-01
Transient current density J(t) induced in the body of a person exposed to a single magnetic pulse of arbitrary shape or to a magnetic jump is filtered by a convolution integral containing in its kernel the frequency and phase dependence of the basic limit value, adopted in a way similar to that used for reference values in the International Commission on Non-Ionising Radiation Protection (ICNIRP) statement. From the obtained time-dependent dimensionless impact function W(J)(t) it can immediately be determined whether the exposure to the analysed single event complies with the basic limit. For very slowly varying fields, the integral kernel is extended to include the softened ICNIRP basic limit for frequencies lower than 4 Hz.
NASA Astrophysics Data System (ADS)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
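For the 1-D slab, the complementary convergence of the two series and the switchover idea can be sketched with the classical plane-sheet uptake solution in Crank's form; the switchover time td0 = 0.1 below is illustrative, not the paper's optimized value:

```python
import math

def ierfc(z):
    # first integral of the complementary error function
    return math.exp(-z * z) / math.sqrt(math.pi) - z * math.erfc(z)

def uptake_early(td, terms=8):
    # early-time error-function series for fractional uptake of a plane sheet
    s = 1 / math.sqrt(math.pi) + 2 * sum((-1) ** n * ierfc(n / math.sqrt(td))
                                         for n in range(1, terms))
    return 2 * math.sqrt(td) * s

def uptake_late(td, terms=8):
    # late-time exponential series for the same quantity
    s = sum(math.exp(-(2 * n + 1) ** 2 * math.pi ** 2 * td / 4) / (2 * n + 1) ** 2
            for n in range(terms))
    return 1 - 8 / math.pi ** 2 * s

td0 = 0.1   # illustrative switchover time: both series converge rapidly here
print(uptake_early(td0), uptake_late(td0))   # the two series agree at td0
```

At small td the exponential series needs many terms while the error-function series needs one or two, and vice versa at large td, which is why truncating each on its own side of td0 gives uniform accuracy.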
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; ...
2017-10-24
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous analysis of the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
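As a concrete illustration of the smoothing option, a minimal Gaussian kernel smoother for a gridded count map might look as follows. This is a sketch in Python rather than the authors' R-script; the grid size, sigma, and edge handling are illustrative assumptions.

```python
import numpy as np

def gaussian_smooth(counts, sigma):
    """Smooth a 2D grid of cell counts with an isotropic Gaussian kernel.

    Direct convolution with a kernel truncated at 3*sigma; edges are
    replicated to limit boundary bias. sigma is in grid units.
    """
    r = int(3 * sigma)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                      # normalise: totals preserved
    padded = np.pad(counts, r, mode="edge")
    out = np.empty_like(counts, dtype=float)
    for i in range(counts.shape[0]):
        for j in range(counts.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * kernel)
    return out

# a single interior hotspot spreads out while the total count is conserved
counts = np.zeros((9, 9))
counts[4, 4] = 100.0
smoothed = gaussian_smooth(counts, sigma=1.0)
```

Because the kernel is normalised, smoothing redistributes counts without inventing or destroying them, which is the property that makes iso-density contours drawn on the smoothed map comparable to ones drawn on the raw counts.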
Oil and gas development footprint in the Piceance Basin, western Colorado
Martinez, Cericia D.; Preston, Todd M.
2018-01-01
Understanding long-term implications of energy development on ecosystem function requires establishing regional datasets to quantify past development and determine relationships to predict future development. The Piceance Basin in western Colorado has a history of energy production, and development is expected to continue into the foreseeable future due to abundant natural gas resources. To facilitate analyses of regional energy development we digitized all well pads in the Colorado portion of the basin, determined the previous land cover of areas converted to well pads over three time periods (2002–2006, 2007–2011, and 2012–2016), and explored the relationship between number of wells per pad and pad area to model future development. We also calculated the area of pads constructed prior to 2002. Over 21 million m2 have been converted to well pads, with approximately 13 million m2 converted since 2002. The largest land conversion since 2002 occurred in shrub/scrub (7.9 million m2), evergreen (2.1 million m2), and deciduous (1.3 million m2) forest environments based on National Land Cover Database classifications. Operational practices have transitioned from single-well pads to multi-well pads, increasing the average number of wells per pad from 2.5 prior to 2002, to 9.1 between 2012 and 2016. During the same time period the pad area per well has increased from 2030 m2 to 3504 m2. Kernel density estimation was used to model the relationship between the number of wells per pad and pad area, with these curves exhibiting a lognormal distribution. Therefore, either kernel density estimation or lognormal probability distributions may potentially be used to model land use requirements for future development.
Digitized well pad locations in the Piceance Basin contribute to a growing body of spatial data on energy infrastructure and, coupled with study results, will facilitate future regional and national studies assessing the spatial and temporal effects of energy development on ecosystem function.
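The lognormal modelling step mentioned above can be sketched as a simple moment fit in log space. The parameter values and synthetic draws below are illustrative, not the Piceance estimates.

```python
import math
import random

def fit_lognormal(samples):
    """Moment fit in log space: returns (mu, sigma) of log(sample)."""
    logs = [math.log(s) for s in samples]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    return mu, math.sqrt(var)

def lognormal_median(mu, sigma):
    """Median of the fitted lognormal is exp(mu); sigma sets the spread."""
    return math.exp(mu)

# synthetic pad-area-per-well draws (hypothetical parameters for illustration)
random.seed(0)
draws = [random.lognormvariate(8.0, 0.5) for _ in range(20000)]
mu, sigma = fit_lognormal(draws)
```

A fit like this gives a two-parameter summary that can be sampled from when projecting land-use requirements of future pads, which is the use the authors suggest for either the kernel density curves or the lognormal fits.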
NASA Astrophysics Data System (ADS)
Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc
2018-05-01
We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to at least 15 Hz). These DKs are purely data-based, as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface-wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.
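The basic ingredient of such data-based kernels, extracting inter-sensor travel-time information from two-point correlations, can be sketched in one dimension. This is a generic example of lag recovery by cross-correlation, not the authors' convolution/correlation workflow.

```python
import numpy as np

def lag_by_correlation(a, b):
    """Estimate the sample lag at which record `a` best matches record `b`,
    using the full cross-correlation and the position of its peak."""
    c = np.correlate(a, b, mode="full")
    return int(np.argmax(c)) - (len(b) - 1)

# a Gaussian pulse and a copy delayed by 30 samples
t = np.arange(200)
pulse = np.exp(-((t - 50.0) / 5.0) ** 2)
delayed = np.roll(pulse, 30)
```

The correlation peak sits at the relative delay between the two records; on a dense grid of sources and receivers, maps of such correlation (or convolution) measurements are what build up the interference patterns behind the diffraction kernels.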
NASA Astrophysics Data System (ADS)
Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.
2008-12-01
Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.
Spatial Relative Risk Patterns of Autism Spectrum Disorders in Utah
ERIC Educational Resources Information Center
Bakian, Amanda V.; Bilder, Deborah A.; Coon, Hilary; McMahon, William M.
2015-01-01
Heightened areas of spatial relative risk for autism spectrum disorders (ASD), or ASD hotspots, in Utah were identified using adaptive kernel density functions. Children ages four, six, and eight with ASD from multiple birth cohorts were identified by the Utah Registry of Autism and Developmental Disabilities. Each ASD case was gender-matched to…
Boundary Kernel Estimation of the Two Sample Comparison Density Function
1989-05-01
not for the understanding, love, and steadfast support of my wife, Catheryn. She supported my move to statistics a mere fortnight after we were... school one learns things of a narrow and technical nature; Catheryn has shown me much of what is fundamentally true and important in this world. To her
Serra-Sogas, Norma; O'Hara, Patrick D; Canessa, Rosaline; Keller, Peter; Pelot, Ronald
2008-05-01
This paper examines the use of exploratory spatial analysis for identifying hotspots of shipping-based oil pollution in the Pacific Region of Canada's Exclusive Economic Zone. It makes use of data collected from fiscal years 1997/1998 to 2005/2006 by the National Aerial Surveillance Program, the primary tool for monitoring and enforcing the provisions imposed by MARPOL 73/78. First, we present oil spill data as points in a "dot map" relative to coastlines, harbors and the aerial surveillance distribution. Then, we explore the intensity of oil spill events using the Quadrat Count method, and the Kernel Density Estimation methods with both fixed and adaptive bandwidths. We found that oil spill hotspots were more clearly defined using Kernel Density Estimation with an adaptive bandwidth, probably because of the "clustered" distribution of oil spill occurrences. Finally, we discuss the importance of standardizing oil spill data by controlling for surveillance effort to provide a better understanding of the distribution of illegal oil spills, and how these results can ultimately benefit a monitoring program.
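The fixed- versus adaptive-bandwidth distinction can be sketched with an Abramson-style estimator, in which the local bandwidth shrinks wherever a pilot density is high, so tight clusters are resolved sharply. The points and bandwidths below are illustrative, not the surveillance data.

```python
import math

def kde(points, query, h):
    """Fixed-bandwidth 2D Gaussian kernel density estimate at `query`."""
    s = 0.0
    for x, y in points:
        d2 = (query[0] - x) ** 2 + (query[1] - y) ** 2
        s += math.exp(-d2 / (2.0 * h * h)) / (2.0 * math.pi * h * h)
    return s / len(points)

def adaptive_kde(points, query, h0):
    """Abramson-style adaptive estimate: local bandwidths h0*sqrt(g/f_i)
    shrink where the pilot density f_i is high (g is its geometric mean)."""
    pilot = [kde(points, p, h0) for p in points]
    g = math.exp(sum(math.log(f) for f in pilot) / len(pilot))
    s = 0.0
    for (x, y), f in zip(points, pilot):
        h = h0 * math.sqrt(g / f)
        d2 = (query[0] - x) ** 2 + (query[1] - y) ** 2
        s += math.exp(-d2 / (2.0 * h * h)) / (2.0 * math.pi * h * h)
    return s / len(points)

# a tight cluster of spill locations plus one isolated event
spills = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
```

With clustered data like this, the adaptive estimate assigns narrow kernels inside the cluster and wide kernels to isolated points, which is why hotspots come out more clearly defined than with a single global bandwidth.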
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang, E-mail: qd2125@columbia.edu; Yang, Jiang, E-mail: jyanghkbu@gmail.com
This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
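For a nonsingular kernel, the Fourier symbol can be evaluated directly by quadrature, which is a useful baseline for the hybrid algorithm described above. The sketch below assumes a constant kernel normalized so that the symbol approaches the local Laplacian symbol -k^2 as the horizon shrinks; the paper's method targets singular kernels, where this naive approach degrades.

```python
import math

def fourier_symbol(rho, k, delta, n=4000):
    """Fourier symbol lambda(k) of the 1D nonlocal diffusion operator
        (L u)(x) = int_0^delta rho(s) * (u(x+s) - 2 u(x) + u(x-s)) ds,
    i.e. lambda(k) = int_0^delta rho(s) * (2 cos(k s) - 2) ds,
    evaluated with the midpoint rule (adequate only for smooth rho).
    """
    h = delta / n
    return sum(rho((i + 0.5) * h) * (2.0 * math.cos(k * (i + 0.5) * h) - 2.0) * h
               for i in range(n))

# constant kernel scaled so that int_0^delta rho(s) s^2 ds = 1,
# giving lambda(k) ~ -k^2 in the small-horizon (local) limit
delta = 0.01
rho = lambda s: 3.0 / delta ** 3
```

Once the symbols are tabulated, the nonlocal operator is diagonal in Fourier space, so each time step of a spectral scheme costs only FFTs and pointwise multiplications.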
Stable computations with flat radial basis functions using vector-valued rational approximations
NASA Astrophysics Data System (ADS)
Wright, Grady B.; Fornberg, Bengt
2017-02-01
One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better-conditioned basis in the same RBF space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
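The ill-conditioning that motivates RBF-RA is easy to reproduce: as the shape parameter eps of a Gaussian kernel decreases (flatter kernels), the condition number of the RBF-Direct interpolation matrix explodes. A minimal demonstration follows; the node set and eps values are illustrative.

```python
import numpy as np

def rbf_matrix(x, eps):
    """Gaussian RBF-Direct interpolation matrix A_ij = exp(-(eps*(x_i - x_j))^2)."""
    d = x[:, None] - x[None, :]
    return np.exp(-(eps * d) ** 2)

# ten equispaced nodes; condition number for sharp -> flat kernels
x = np.linspace(0.0, 1.0, 10)
conds = [np.linalg.cond(rbf_matrix(x, eps)) for eps in (5.0, 1.0, 0.2)]
```

Even though the interpolant itself typically converges as eps decreases, the matrix rows become nearly identical, so solving the linear system directly loses essentially all accuracy; stable algorithms such as RBF-QR, RBF-GA, and RBF-RA exist precisely to evaluate that well-behaved limit without forming this matrix.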
Predicting spatial patterns of plant recruitment using animal-displacement kernels.
Santamaría, Luis; Rodríguez-Pérez, Javier; Larrinaga, Asier R; Pias, Beatriz
2007-10-10
For plants dispersed by frugivores, spatial patterns of recruitment are primarily influenced by the spatial arrangement and characteristics of parent plants, the digestive characteristics, feeding behaviour and movement patterns of animal dispersers, and the structure of the habitat matrix. We used an individual-based, spatially explicit framework to characterize seed dispersal and seedling fate in an endangered, insular plant-disperser system: the endemic shrub Daphne rodriguezii and its exclusive disperser, the endemic lizard Podarcis lilfordi. Plant recruitment kernels were chiefly determined by the disperser's patterns of space utilization (i.e. the lizard's displacement kernels), the position of the various plant individuals in relation to them, and habitat structure (vegetation cover vs. bare soil). In contrast to our expectations, seed gut-passage rate and its effects on germination, and lizard speed-of-movement, habitat choice and activity rhythm were of minor importance. Predicted plant recruitment kernels were strongly anisotropic and fine-grained, preventing their description using one-dimensional, frequency-distance curves. We found a general trade-off between recruitment probability and dispersal distance; however, optimal recruitment sites were not necessarily associated with sites of maximal adult-plant density. Conservation efforts aimed at enhancing the regeneration of endangered plant-disperser systems may gain in efficacy by manipulating the spatial distribution of dispersers (e.g. through the creation of refuges and feeding sites) to create areas favourable to plant recruitment.
Reconciling Long-Wavelength Dynamic Topography, Geoid Anomalies and Mass Distribution on Earth
NASA Astrophysics Data System (ADS)
Hoggard, M.; Richards, F. D.; Ghelichkhan, S.; Austermann, J.; White, N.
2017-12-01
Since the first satellite observations in the late 1950s, we have known that the Earth's non-hydrostatic geoid is dominated by spherical harmonic degree 2 (wavelengths of 16,000 km). Peak amplitudes are approximately ± 100 m, with highs centred on the Pacific Ocean and Africa, encircled by lows in the vicinity of the Pacific Ring of Fire and at the poles. Initial seismic tomography models revealed that the shear-wave velocity, and therefore presumably the density structure, of the lower mantle is also dominated by degree 2. Anti-correlation of slow, probably low-density, regions beneath geoid highs indicates that the mantle is affected by large-scale flow. Thus, buoyant features are rising and exert viscous normal stresses that act to deflect the surface and core-mantle boundary (CMB). Pioneering studies in the 1980s showed that a viscosity jump between the upper and lower mantle is required to reconcile these geoid and tomographically inferred density anomalies. These studies also predict 1-2 km of dynamic topography at the surface, dominated by degree 2. In contrast to this prediction, a global observational database of oceanic residual depth measurements indicates that degree 2 dynamic topography has peak amplitudes of only 500 m. Here, we attempt to reconcile observations of dynamic topography, geoid, gravity anomalies and CMB topography using instantaneous flow kernels. We exploit a density structure constructed from blended seismic tomography models, combining deep mantle imaging with higher resolution upper mantle features. Radial viscosity structure is discretised, and we invert for the best-fitting viscosity profile using a conjugate gradient search algorithm, subject to damping. Our results suggest that, due to strong sensitivity to radial viscosity structure, the Earth's geoid seems to be compatible with only ± 500 m of degree 2 dynamic topography.
On Hilbert-Schmidt norm convergence of Galerkin approximation for operator Riccati equations
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1988-01-01
An abstract approximation framework for the solution of operator algebraic Riccati equations is developed. The approach taken is based on a formulation of the Riccati equation as an abstract nonlinear operator equation on the space of Hilbert-Schmidt operators. Hilbert-Schmidt norm convergence of solutions to generic finite dimensional Galerkin approximations to the Riccati equation to the solution of the original infinite dimensional problem is argued. The application of the general theory is illustrated via an operator Riccati equation arising in the linear-quadratic design of an optimal feedback control law for a 1-D heat/diffusion equation. Numerical results demonstrating the convergence of the associated Hilbert-Schmidt kernels are included.
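The finite-dimensional problem produced at each Galerkin level is a standard matrix algebraic Riccati equation, which for small dimensions can be solved by Newton-Kleinman iteration with a Kronecker-product Lyapunov solve. This is a generic sketch, not the paper's framework; it assumes a stable A, control weight R = I, and uses P0 = I as the initial guess.

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T X + X A + Q = 0 by Kronecker vectorization (small n only)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

def care_newton(A, B, Q, iters=30):
    """Newton-Kleinman iteration for A^T P + P A - P B B^T P + Q = 0 (R = I):
    each step solves a Lyapunov equation at the current feedback gain."""
    P = np.eye(A.shape[0])          # initial guess; adequate for stable A
    for _ in range(iters):
        K = B.T @ P                 # current feedback gain
        P = solve_lyapunov(A - B @ K, Q + K.T @ K)
    return P

# scalar check: A = -1, B = Q = 1 has the exact solution P = sqrt(2) - 1
P = care_newton(np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]))
```

In the paper's setting, a sequence of such finite-dimensional solutions (one per Galerkin level) converges, in Hilbert-Schmidt norm, to the solution of the infinite-dimensional operator Riccati equation.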
Diffusion in random networks: Asymptotic properties, and numerical and engineering approximations
NASA Astrophysics Data System (ADS)
Padrino, Juan C.; Zhang, Duan Z.
2016-11-01
The ensemble phase averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of a set of pockets connected by tortuous channels. Inside a channel, we assume that fluid transport is governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pore mass density. The so-called dual porosity model is found to be equivalent to the leading-order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain, whose solution is sought numerically. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is x t^(-1/4) rather than x t^(-1/2) as in the traditional theory. This early-time sub-diffusive similarity can be explained by random walk theory through the network. In addition, by applying concepts of fractional calculus, we show that, for small time, the governing equation reduces to a fractional diffusion equation with known solution. We recast this solution in terms of special functions that are easier to compute. Comparison of the numerical and exact solutions shows excellent agreement.
Supernova Neutrino Opacity from Nucleon-Nucleon Bremsstrahlung and Related Processes
NASA Astrophysics Data System (ADS)
Hannestad, Steen; Raffelt, Georg
1998-11-01
Elastic scattering on nucleons, νN → Nν, is the dominant supernova (SN) opacity source for μ and τ neutrinos. The dominant energy- and number-changing processes were thought to be νe⁻ → e⁻ν and νν̄ ↔ e⁺e⁻ until Suzuki showed that the bremsstrahlung process νν̄NN ↔ NN was actually more important. We find that for energy exchange, the related 'inelastic scattering process' νNN ↔ NNν is even more effective by about a factor of 10. A simple estimate implies that the νμ and ντ spectra emitted during the Kelvin-Helmholtz cooling phase are much closer to that of ν̄e than had been thought previously. To facilitate a numerical study of the spectra formation we derive a scattering kernel that governs both bremsstrahlung and inelastic scattering and give an analytic approximation formula. We consider only neutron-neutron interactions; we use a one-pion exchange potential in Born approximation, nonrelativistic neutrons, and the long-wavelength limit, simplifications that appear justified for the surface layers of an SN core. We include the pion mass in the potential, and we allow for an arbitrary degree of neutron degeneracy. Our treatment includes neither the neutron-proton process nor nucleon-nucleon correlations. Our perturbative approach applies only to the SN surface layers, i.e., to densities below about 10^14 g cm^-3.
Nonparametric model validations for hidden Markov models with applications in financial econometrics
Zhao, Zhibiao
2011-01-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601
NASA Astrophysics Data System (ADS)
Gao, Zhiwen; Zhou, Youhe
2015-04-01
The real fundamental solution for the fracture problem of a transversely isotropic high-temperature superconductor (HTS) strip is obtained. The superconductor E-J constitutive law is characterized by the Bean model, where the critical current density is independent of the flux density. Fracture analysis is performed by the method of singular integral equations, which are solved numerically by the Gauss-Lobatto-Chebyshev (GLC) collocation method. To guarantee a satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results of fracture parameters are obtained and the effects of the geometric characteristics, applied magnetic field and critical current density on the stress intensity factors (SIFs) are discussed.
Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko
2012-11-01
To compare the performance of model-based iterative reconstruction (MBIR) with that of standard filtered back projection (FBP) for measuring vascular wall attenuation. After subjecting 9 vascular models (actual attenuation value of wall, 89 HU) with wall thickness of 0.5, 1.0, or 1.5 mm that we filled with contrast material of 275, 396, or 542 HU to scanning using 64-detector computed tomography (CT), we reconstructed images using MBIR and FBP (Bone, Detail kernels) and measured wall attenuation at the center of the wall for each model. We performed attenuation measurements for each model and additional supportive measurements by a differentiation curve. We analyzed statistics using analyses of variance with repeated measures. Using the Bone kernel, standard deviation of the measurement exceeded 30 HU in most conditions. In measurements at the wall center, the attenuation values obtained using MBIR were comparable to or significantly closer to the actual wall attenuation than those acquired using Detail kernel. Using differentiation curves, we could measure attenuation for models with walls of 1.0- or 1.5-mm thickness using MBIR but only those of 1.5-mm thickness using Detail kernel. We detected no significant differences among the attenuation values of the vascular walls of either thickness (MBIR, P=0.1606) or among the 3 densities of intravascular contrast material (MBIR, P=0.8185; Detail kernel, P=0.0802). Compared with FBP, MBIR reduces both reconstruction blur and image noise simultaneously, facilitates recognition of vascular wall boundaries, and can improve accuracy in measuring wall attenuation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Vokoun, Jason C.; Rabeni, Charles F.
2005-01-01
Flathead catfish Pylodictis olivaris were radio-tracked in the Grand River and Cuivre River, Missouri, from late July until they moved to overwintering habitats in late October. Fish moved within a definable area, and although occasional long-distance movements occurred, the fish typically returned to the previously occupied area. Seasonal home range was calculated with the use of kernel density estimation, which can be interpreted as a probabilistic utilization distribution that documents the internal structure of the estimate by delineating portions of the range that were used a specified percentage of the time. A traditional linear range also was reported. Most flathead catfish (89%) had one 50% kernel-estimated core area, whereas 11% of the fish split their time between two core areas. Core areas were typically in the middle of the 90% kernel-estimated home range (58%), although several had core areas in upstream (26%) and downstream (16%) portions of the home range. Home-range size did not differ based on river, sex, or size and was highly variable among individuals. The median 95% kernel estimate was 1,085 m (range, 70– 69,090 m) for all fish. The median 50% kernel-estimated core area was 135 m (10–2,260 m). The median linear range was 3,510 m (150–50,400 m). Fish pairs with core areas in the same and neighboring pools had static joint space use values of up to 49% (area of intersection index), indicating substantial overlap and use of the same area. However, all fish pairs had low dynamic joint space use values (<0.07; coefficient of association), indicating that fish pairs were temporally segregated, rarely occurring in the same location at the same time.
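A one-dimensional version of this kernel home-range calculation (appropriate for river-distance fixes) can be sketched as follows: build a kernel density estimate on a grid, then accumulate the highest-density cells until the requested fraction of the utilization distribution is covered. The positions and bandwidth are illustrative, not the telemetry data.

```python
import math

def kernel_range_length(positions, h, level, grid_n=512):
    """Length of the smallest set of grid cells that holds fraction `level`
    (e.g. 0.50 core area or 0.95 home range) of a 1D Gaussian kernel
    utilization distribution built from river-distance fixes."""
    lo, hi = min(positions) - 3 * h, max(positions) + 3 * h
    dx = (hi - lo) / grid_n
    norm = len(positions) * h * math.sqrt(2.0 * math.pi)
    dens = [sum(math.exp(-0.5 * ((lo + (i + 0.5) * dx - p) / h) ** 2)
                for p in positions) / norm
            for i in range(grid_n)]
    target = level * sum(dens)
    acc, cells = 0.0, 0
    for d in sorted(dens, reverse=True):   # highest-density cells first
        acc += d
        cells += 1
        if acc >= target:
            break
    return cells * dx

# two pools of fixes about 1 km apart: a tight core plus a wider home range
fixes = [0.0, 10.0, 20.0, 30.0, 1000.0, 1010.0]
core = kernel_range_length(fixes, h=15.0, level=0.50)
home = kernel_range_length(fixes, h=15.0, level=0.95)
```

Because the isopleth is assembled from the highest-density cells, a 95% home range split across two pools can be much shorter than the traditional linear range spanning them, which matches the kernel-versus-linear contrast reported above.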
Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair
2018-02-01
Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres including contributions of both short and long-lived contaminant radionuclides while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time postremoval from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions. It is on the order of 0.4-2.8% at a radial distance of 5.43 mm as microsphere diameter increases from 10 to 50 μm during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions.
Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the microspheres based on weighted activities. The shapes of the absorbed dose kernels are dominated at short times postactivation by the contributions of 70Ga and 72Ga. Following decay of the short-lived contaminants, the absorbed dose kernel is effectively that of 90Y. After approximately 1000 h postactivation, the contributions of 85Sr and 89Sr become increasingly dominant, though the absorbed dose-rate around the beads drops by roughly four orders of magnitude. The introduction of high atomic number elements for the purpose of increasing radiopacity necessarily leads to the production of radionuclides other than 90Y in the microspheres. Most of the radionuclides in this study are short-lived and are likely not of any significant concern for this therapeutic agent. The presence of small quantities of longer lived radionuclides will change the shape of the absorbed dose kernel around a microsphere at long time points postadministration when activity levels are significantly reduced. © 2017 American Association of Physicists in Medicine.
Stein, Hans Henrik; Casas, Gloria Amparo; Abelilla, Jerubella Jerusalem; Liu, Yanhong; Sulabo, Rommel Casilda
2015-01-01
High fiber co-products from the copra and palm kernel industries are by-products of the production of coconut oil and palm kernel oil. The co-products include copra meal, copra expellers, palm kernel meal, and palm kernel expellers. All 4 ingredients are very high in fiber and the energy value is relatively low when fed to pigs. The protein concentration is between 14 and 22 % and the protein has a low biological value and a very high Arg:Lys ratio. Digestibility of most amino acids is less than in soybean meal but close to that in corn. However, the digestibility of Lys is sometimes low due to Maillard reactions that are initiated due to overheating during drying. Copra and palm kernel ingredients contain 0.5 to 0.6 % P. Most of the P in palm kernel meal and palm kernel expellers is bound to phytate, but in copra products less than one third of the P is bound to phytate. The digestibility of P is, therefore, greater in copra meal and copra expellers than in palm kernel ingredients. Inclusion of copra meal should be less than 15 % in diets fed to weanling pigs and less than 25 % in diets for growing-finishing pigs. Palm kernel meal may be included at up to 15 % in diets for weanling pigs and up to 25 % in diets for growing and finishing pigs. Rice bran contains the pericarp and aleurone layers of brown rice that are removed before polished rice is produced. Rice bran contains approximately 25 % neutral detergent fiber and 25 to 30 % starch. Rice bran has a greater concentration of P than most other plant ingredients, but 75 to 90 % of the P is bound in phytate. Inclusion of microbial phytase in the diets is, therefore, necessary if rice bran is used. Rice bran may contain 15 to 24 % fat, but it may also have been defatted, in which case the fat concentration is less than 5 %. Concentrations of digestible energy (DE) and metabolizable energy (ME) are slightly less in full fat rice bran than in corn, but defatted rice bran contains less than 75 % of the DE and ME in corn.
The concentration of crude protein is 15 to 18 % in rice bran and the protein has a high biological value and most amino acids are well digested by pigs. Inclusion of rice bran in diets fed to pigs has yielded variable results and based on current research it is recommended that inclusion levels are less than 25 to 30 % in diets for growing-finishing pigs, and less than 20 % in diets for weanling pigs. However, there is a need for additional research to determine the inclusion rates that may be used for both full fat and defatted rice bran.
NASA Astrophysics Data System (ADS)
Deng, Xiao-Le; Shen, Wen-Bin
2018-01-01
The forward modeling of the topographic effects of the gravitational parameters in the gravity field is a fundamental topic in geodesy and geophysics. Since the gravitational effects, including for instance the gravitational potential (GP), the gravity vector (GV) and the gravity gradient tensor (GGT), of the topographic (or isostatic) mass reduction have been expanded by adding the gravitational curvatures (GC) in geoscience, it is crucial to find efficient numerical approaches to evaluate these effects. In this paper, the GC formulas of a tesseroid in Cartesian integral kernels are derived in 3D/2D forms. Three generally used numerical approaches for computing the topographic effects (e.g., GP, GV, GGT, GC) of a tesseroid are studied, including the Taylor Series Expansion (TSE), Gauss-Legendre Quadrature (GLQ) and Newton-Cotes Quadrature (NCQ) approaches. Numerical investigations show that the GC formulas in Cartesian integral kernels are more efficient if compared to the previously given GC formulas in spherical integral kernels: with the 3D TSE second-order formulas, the computational burden associated with the former is, on average, 46% of that associated with the latter. The GLQ behaves better than the 3D/2D TSE and NCQ in terms of accuracy and computational time. In addition, the effects of a spherical shell's thickness and large-scale geocentric distance on the GP, GV, GGT and GC functionals have been studied with the 3D TSE second-order formulas as well. The relative approximation errors of the GC functionals are larger with the thicker spherical shell, which are the same as those of the GP, GV and GGT. Finally, the very-near-area problem and polar singularity problem have been considered by the numerical methods of the 3D TSE, GLQ and NCQ. The relative approximation errors of the GC components are larger than those of the GP, GV and GGT, especially at the very near area.
Compared to the GC formulas in spherical integral kernels, these new GC formulas can avoid the polar singularity problem.
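The GLQ approach can be illustrated on a case with a closed-form answer: the gravitational potential of a uniform spherical shell, evaluated by 3-D Gauss-Legendre quadrature in spherical coordinates. This toy sketch (the function name and all parameter values are assumptions) is not the paper's tesseroid code, but it shows the node-mapping and weighting pattern the abstract refers to:

```python
import numpy as np

def shell_potential_glq(r1, r2, rho, R, n=20, G=1.0):
    """Potential of a uniform spherical shell at external distance R,
    evaluated by 3-D Gauss-Legendre quadrature (GLQ) in spherical
    coordinates -- the same scheme used for a tesseroid integral."""
    x, w = np.polynomial.legendre.leggauss(n)
    # affine maps of the [-1, 1] nodes to [r1, r2], [-pi/2, pi/2], [0, 2*pi]
    r  = 0.5*(r2 - r1)*x + 0.5*(r2 + r1);  wr = 0.5*(r2 - r1)*w
    ph = 0.5*np.pi*x;                      wp = 0.5*np.pi*w
    la = np.pi*x + np.pi;                  wl = np.pi*w
    rr, pp, ll = np.meshgrid(r, ph, la, indexing="ij")
    ww = wr[:, None, None]*wp[None, :, None]*wl[None, None, :]
    # Cartesian coordinates of the running point; computation point P = (R, 0, 0)
    xs = rr*np.cos(pp)*np.cos(ll)
    ys = rr*np.cos(pp)*np.sin(ll)
    zs = rr*np.sin(pp)
    dist = np.sqrt((xs - R)**2 + ys**2 + zs**2)
    return G*rho*np.sum(ww * rr**2 * np.cos(pp) / dist)

r1, r2, rho, R = 1.0, 2.0, 1.0, 10.0
V = shell_potential_glq(r1, r2, rho, R)
V_exact = (4.0/3.0)*np.pi*rho*(r2**3 - r1**3)/R   # closed-form shell value
```

For a smooth far-field integrand such as this, the GLQ result agrees with the closed form essentially to machine precision.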
Non-linear 3-D Born shear waveform tomography in Southeast Asia
NASA Astrophysics Data System (ADS)
Panning, Mark P.; Cao, Aimin; Kim, Ahyi; Romanowicz, Barbara A.
2012-07-01
Southeast (SE) Asia is a tectonically complex region surrounded by many active source regions, thus an ideal test bed for developments in seismic tomography. Much recent development in tomography has been based on 3-D sensitivity kernels based on the first-order Born approximation, but there are potential problems with this approach when applied to waveform data. In this study, we develop a radially anisotropic model of SE Asia using long-period multimode waveforms. We use a theoretical 'cascade' approach, starting with a large-scale Eurasian model developed using 2-D Non-linear Asymptotic Coupling Theory (NACT) sensitivity kernels, and then using a modified Born approximation (nBorn), shown to be more accurate at modelling waveforms, to invert a subset of the data for structure in a subregion (longitude 75°-150° and latitude 0°-45°). In this subregion, the model is parametrized at a spherical spline level 6 (˜200 km). The data set is also inverted using NACT and purely linear 3-D Born kernels. All three final models fit the data well, with just under 80 per cent variance reduction as calculated using the corresponding theory, but the nBorn model shows more detailed structure than the NACT model throughout and has much better resolution at depths greater than 250 km. Based on variance analysis, the purely linear Born kernels do not provide as good a fit to the data due to deviations from linearity for the waveform data set used in this modelling. The nBorn isotropic model shows a stronger fast velocity anomaly beneath the Tibetan Plateau in the depth range of 150-250 km, which disappears at greater depth, consistent with other studies. It also indicates moderate thinning of the high-velocity plate in the middle of Tibet, consistent with a model where Tibet is underplated by Indian lithosphere from the south and Eurasian lithosphere from the north, in contrast to a model with continuous underplating by Indian lithosphere across the entire plateau. 
The nBorn anisotropic model detects negative ξ anomalies suggestive of vertical deformation associated with subducted slabs and convergent zones at the Himalayan front and Tien Shan at depths near 150 km.
Reference-point-independent dynamics of molecular liquids and glasses in the tensorial formalism
NASA Astrophysics Data System (ADS)
Schilling, Rolf
2002-05-01
We apply the tensorial formalism to the dynamics of molecular liquids and glasses. This formalism separates the degrees of freedom into translational and orientational ones. Using the Mori-Zwanzig projection formalism, the equations of motion for the tensorial density correlators S_{lmn,l'm'n'}(q, t) are derived. For this we show how to choose the slow variables such that the resulting Mori-Zwanzig equations are covariant under a change of the reference point of the body-fixed frame. We also prove that the memory kernels obtained from mode-coupling theory (MCT), including all approximations, preserve this covariance. This covariance makes, e.g., the glass transition point, the two universal scaling laws and particularly the corresponding exponents independent of the reference point and of the mass and moments of inertia, i.e., they depend only on the properties of the potential energy landscape. Finally, we show that the corresponding MCT equations for linear molecules can be obtained from those for arbitrary molecules and that they differ from earlier equations that are not covariant.
Two-level system in spin baths: Non-adiabatic dynamics and heat transport
NASA Astrophysics Data System (ADS)
Segal, Dvira
2014-04-01
We study the non-adiabatic dynamics of a two-state subsystem in a bath of independent spins using the non-interacting blip approximation, and derive an exact analytic expression for the relevant memory kernel. We show that in the thermodynamic limit, when the subsystem-bath coupling is diluted (uniformly) over many (infinitely many) degrees of freedom, our expression reduces to known results, corresponding to the harmonic bath with an effective, temperature-dependent, spectral density function. We then proceed to study the heat current characteristics in the out-of-equilibrium spin-spin-bath model, with a two-state subsystem bridging two thermal spin baths of different temperatures. We compare the behavior of this model to the case of a spin connecting boson baths, and demonstrate pronounced qualitative differences between the two models. Specifically, we focus on the development of the thermal diode effect, and show that the spin-spin-bath model cannot support it at weak (subsystem-bath) coupling, while in the intermediate-to-strong coupling regime its rectifying performance outperforms that of the spin-boson model.
Biochemical and molecular characterization of Avena indolines and their role in kernel texture.
Gazza, Laura; Taddei, Federica; Conti, Salvatore; Gazzelloni, Gloria; Muccilli, Vera; Janni, Michela; D'Ovidio, Renato; Alfieri, Michela; Redaelli, Rita; Pogna, Norberto E
2015-02-01
Among cereals, Avena sativa is characterized by an extremely soft endosperm texture, which leads to some negative agronomic and technological traits. On the basis of the well-known softening effect of puroindolines on wheat kernel texture, in this study indolines and their encoding genes are investigated in Avena species at different ploidy levels. Three novel 14 kDa proteins, showing a central hydrophobic domain with four tryptophan residues and here named vromindoline (VIN)-1, -2 and -3, were identified. Each VIN protein in diploid oat species was found to be synthesized by a single Vin gene whereas, in hexaploid A. sativa, three Vin-1, three Vin-2 and two Vin-3 genes coding for VIN-1, VIN-2 and VIN-3, respectively, were described and assigned to the A, C or D genomes based on similarity to their counterparts in diploid species. Expression of oat vromindoline transgenes in extra-hard durum wheat led to accumulation of vromindolines in the endosperm and caused an approximately 50% reduction in grain hardness, suggesting a central role for vromindolines in causing the extra-soft texture of oat grain. Furthermore, hexaploid oats showed three orthologous genes coding for avenoindolines A and B, with five or three tryptophan residues, respectively, but very low amounts of avenoindolines were found in mature kernels. The present results identify a novel protein family affecting cereal kernel texture and further elucidate the phylogenetic evolution of the genus Avena.
Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.
Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung
2010-11-01
Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: they can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transfer the continuous surface reconstruction problem into a high-dimensional space, where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which is able to produce better results than traditional techniques. In general, an important advantage of our kernel-based method is that it does not suffer from discretization and finite approximation, both of which lead to surface distortion, which is typical of the Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstructions that preserve salient and sharp features. The source code and executable of the system are available for downloading.
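The closed-form kernel solution described above can be illustrated, for the height-field case only, with a small Gaussian-kernel regression sketch; the function names, bandwidth, and regularization value are assumptions, and the paper's handling of gradient inputs is not reproduced here:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian kernel matrix between two point sets (rows are 2-D points)."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_surface(points, heights, sigma=0.5, lam=1e-6):
    """Closed-form fit: the surface is f(p) = sum_i alpha_i k(p, p_i),
    with alpha solving the (regularised) linear system in kernel space."""
    K = gaussian_kernel(points, points, sigma)
    alpha = np.linalg.solve(K + lam*np.eye(len(points)), heights)
    return lambda q: gaussian_kernel(q, points, sigma) @ alpha

# sparse, scattered samples of the smooth surface z = x^2 + y^2
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(60, 2))
z = (pts**2).sum(1)
f = fit_surface(pts, z)
z_hat = f(pts)   # reconstructed heights at the sample points
```

Because the solution is closed-form, no discrete integrability constraint is involved: sparse or dense inputs are handled identically.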
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We reason that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work offers a simple and feasible way to obtain virtual face samples: imposing Gaussian noise (or other types of noise) on the original training samples to generate their possible variations. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
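The noise-based construction of virtual samples can be sketched in a few lines; the noise level, clipping range, and function name are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def make_virtual_samples(X, noise_std=0.05, copies=2, seed=0):
    """Augment each training image (rows of X, pixel values in [0, 1])
    with Gaussian-noise copies; the noisy copies act as 'virtual samples'
    that mimic mild variation in illumination, expression, or posture."""
    rng = np.random.default_rng(seed)
    virtual = [X]
    for _ in range(copies):
        noisy = X + rng.normal(0.0, noise_std, X.shape)
        virtual.append(np.clip(noisy, 0.0, 1.0))   # keep valid pixel range
    return np.vstack(virtual)

X = np.full((3, 16), 0.5)          # three toy 'images', 16 pixels each
Xv = make_virtual_samples(X)       # originals plus two noisy copies each
```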
Influence of Initial Correlations on Evolution of a Subsystem in a Heat Bath and Polaron Mobility
NASA Astrophysics Data System (ADS)
Los, Victor F.
2017-08-01
A regular approach to accounting for initial correlations, which allows one to go beyond the unrealistic random phase (initial product state) approximation in deriving the evolution equations, is suggested. Exact homogeneous (time-convolution and time-convolutionless) equations for a relevant part of the two-time equilibrium correlation function for the dynamic variables of a subsystem interacting with a boson field (heat bath) are obtained. No conventional approximation like the RPA or Bogoliubov's principle of weakening of initial correlations is used. The obtained equations take into account the initial correlations in the kernel governing their evolution. The solution to these equations is found in the second order of the kernel expansion in the electron-phonon interaction, which demonstrates that generally the initial correlations influence the correlation function's evolution in time. It is explicitly shown that this influence vanishes on a large timescale (actually at t→∞) and the evolution process enters an irreversible kinetic regime. The developed approach is applied to the Fröhlich polaron, and the low-temperature polaron mobility (which has long been under debate) is found with a correction due to initial correlations.
Chen, Yumin; Fritz, Ronald D; Kock, Lindsay; Garg, Dinesh; Davis, R Mark; Kasturi, Prabhakar
2018-02-01
A step-wise, 'test-all-positive-gluten' analytical methodology has been developed and verified to assess kernel-based gluten contamination (i.e., wheat, barley and rye kernels) during gluten-free (GF) oat production. It targets GF claim compliance at the serving-size level (a pouch, or approximately 40-50 g). Oat groats are collected from GF oat production following a robust attribute-based sampling plan, then split into 75-g subsamples and ground. The R-Biopharm R5 sandwich ELISA R7001 is used for analysis of the first 15-g portion of each ground subsample. A >20-ppm result disqualifies the production lot, while a >5 to <20-ppm result triggers complete analysis of the remaining 60 g of ground sample, analyzed in 15-g portions. If all five 15-g test results are <20 ppm, and their average is <10.67 ppm (since a 20-ppm contamination in a 40-g serving of oats would dilute to 10.67 ppm in 75 g), the lot is passed. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
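The step-wise decision rule can be written out as a small sketch. The 10.67-ppm limit follows from the dilution arithmetic 20 ppm × 40 g / 75 g ≈ 10.67 ppm; the exact boundary handling (strict vs. inclusive thresholds) is an assumption here:

```python
def lot_decision(first_ppm, remaining_ppm=None):
    """Step-wise decision for one 75-g ground subsample (values in ppm
    gluten), following the thresholds quoted in the abstract."""
    SERVING_LIMIT = 20.0 * 40.0 / 75.0   # 20 ppm in a 40-g serving -> 10.67 ppm in 75 g
    if first_ppm >= 20.0:                # boundary handling is an assumption
        return "fail"
    if first_ppm <= 5.0:
        return "pass"
    # 5 < first < 20 ppm: analyse the remaining four 15-g portions
    results = [first_ppm] + list(remaining_ppm)
    if max(results) >= 20.0 or sum(results) / len(results) >= SERVING_LIMIT:
        return "fail"
    return "pass"
```

For example, a first portion at 8 ppm with the remaining four at 5 ppm passes (all below 20 ppm, average 5.6 ppm), while a first portion at 15 ppm with the rest at 12 ppm fails on the 10.67-ppm average.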
Efficacy of ozone as a fungicidal and detoxifying agent of aflatoxins in peanuts.
de Alencar, Ernandes Rodrigues; Faroni, Lêda Rita D'Antonino; Soares, Nilda de Fátima Ferreira; da Silva, Washington Azevedo; Carvalho, Marta Cristina da Silva
2012-03-15
Peanut contamination by fungi is a concern of processors and consumers owing to the association of these micro-organisms with quality deterioration and aflatoxin production. In this study the fungicidal and detoxifying effects of ozone on aflatoxins in peanuts was investigated. Peanut kernels were ozonated at concentrations of 13 and 21 mg L⁻¹ for periods of 0, 24, 48, 72 and 96 h. Ozone was effective in controlling total fungi and potentially aflatoxigenic species in peanuts, with a reduction in colony-forming units per gram greater than 3 log cycles at the concentration of 21 mg L⁻¹ after 96 h of exposure. A reduction in the percentage of peanuts with internal fungal populations was also observed, particularly after exposure to ozone at 21 mg L⁻¹. A reduction in the concentrations of total aflatoxins and aflatoxin B1 of approximately 30 and 25% respectively was observed for kernels exposed to ozone at 21 mg L⁻¹ for 96 h. It was concluded that ozone is an important alternative for peanut detoxification because it is effective in controlling potentially aflatoxigenic fungi and also acts in the reduction of aflatoxin levels in kernels. Copyright © 2011 Society of Chemical Industry.
Blind motion image deblurring using nonconvex higher-order total variation model
NASA Astrophysics Data System (ADS)
Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo
2016-09-01
We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which can effectively eliminate the staircase effect in the deblurred image; meanwhile, we employ an image sparsity prior to improve the quality of edge recovery. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and the H1 norm as the blur kernel regularization terms, accounting for the sparsity and smoothness of the motion blur kernel. Third, because the intrinsic nonconvexity of the proposed model makes it computationally difficult to solve, we propose a binary iterative strategy, which incorporates a reweighted minimization approximating scheme in the outer iteration and a split Bregman algorithm in the inner iteration; we also discuss the convergence of the proposed binary iterative strategy. Finally, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both quality of visual perception and quantitative measurement.
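The reweighted minimization idea of the outer iteration can be illustrated, in one dimension and for a plain (not higher-order) nonconvex TV term, by an iteratively reweighted least-squares sketch. This is a simplified stand-in for the idea, not the paper's algorithm, and all parameter values are assumptions:

```python
import numpy as np

def tv_denoise_irls(f, lam=0.5, p=0.8, eps=1e-6, iters=30):
    """1-D nonconvex TV denoising: minimise 0.5*||u - f||^2 + lam*sum|Du|^p
    with p < 1, via iteratively reweighted least squares (IRLS). Each
    outer iteration majorises |t|^p by a weighted quadratic and solves
    the resulting linear system."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator
    u = f.copy()
    for _ in range(iters):
        w = p * (np.abs(D @ u)**2 + eps)**((p - 2) / 2) / 2.0
        A = np.eye(n) + 2.0 * lam * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, f)
    return u

# noisy step signal: denoising should flatten the noise but keep the jump,
# since the concave |jump|^p cost favours one large gradient over many small ones
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(25), np.ones(25)])
noisy = clean + 0.1 * rng.normal(size=50)
u = tv_denoise_irls(noisy)
```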
Exact analytic solution for non-linear density fluctuation in a ΛCDM universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jaiyul; Gong, Jinn-Ouk, E-mail: jyoo@physik.uzh.ch, E-mail: jinn-ouk.gong@apctp.org
We derive the exact third-order analytic solution of the matter density fluctuation in the proper-time hypersurface in a ΛCDM universe, accounting for the explicit time-dependence and clarifying the relation to the initial condition. Furthermore, we compare our analytic solution to the previous calculation in the comoving gauge, and to the standard Newtonian perturbation theory by providing Fourier kernels for the relativistic effects. Our results provide an essential ingredient for a complete description of galaxy bias in the relativistic context.
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, transform-invariance can be handled easily because tangent vectors can be used to approximate transformations. However, tangent vectors cannot be used for other types of images, such as color images. Hence, kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified by experiments on handwritten digit and color image classification.
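The tangent-distance idea can be sketched with a one-sided TD: minimize the residual of x plus a linear combination of its tangent vectors against y. The single tangent vector used below (the signal derivative, which approximates a horizontal shift to first order) and all data are toy assumptions:

```python
import numpy as np

def tangent_distance(x, y, T):
    """One-sided tangent distance: min over a of ||x + T a - y||, i.e.
    the distance from y to the tangent plane of x's transformation
    manifold (columns of T are tangent vectors)."""
    a, *_ = np.linalg.lstsq(T, y - x, rcond=None)
    return np.linalg.norm(x + T @ a - y)

# y is a slightly shifted copy of x; the shift tangent vector is the
# derivative of x, so TD should fall well below the Euclidean distance
t = np.linspace(0, 2*np.pi, 64)
x = np.sin(t)
y = np.sin(t + 0.1)                        # small horizontal shift
T = np.gradient(x, t).reshape(-1, 1)       # tangent vector for shifts
d_euc = np.linalg.norm(x - y)
d_tan = tangent_distance(x, y, T)
```

After projecting out the shift direction, only the second-order residual of the transformation remains, which is why TD is far less sensitive to small transformations than the Euclidean distance.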
The intrinsic matter bispectrum in ΛCDM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tram, Thomas; Crittenden, Robert; Koyama, Kazuya
2016-05-01
We present a fully relativistic calculation of the matter bispectrum at second order in cosmological perturbation theory assuming a Gaussian primordial curvature perturbation. For the first time we perform a full numerical integration of the bispectrum for both baryons and cold dark matter using the second-order Einstein-Boltzmann code, SONG. We review previous analytical results and provide an improved analytic approximation for the second-order kernel in Poisson gauge which incorporates Newtonian nonlinear evolution, relativistic initial conditions, the effect of radiation at early times and the cosmological constant at late times. Our improved kernel provides a percent-level fit to the full numerical result at late times for most configurations, including both equilateral shapes and the squeezed limit. We show that baryon acoustic oscillations leave an imprint in the matter bispectrum, making a significant impact on squeezed shapes.
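On small scales, second-order kernels of this type reduce to the standard Newtonian second-order (F2) density kernel; for orientation, here is the familiar Einstein-de Sitter form (a textbook expression, not SONG's full relativistic kernel):

```python
import numpy as np

def F2(k1, k2, mu):
    """Standard Newtonian second-order density kernel F2(k1, k2, mu),
    with mu the cosine of the angle between k1 and k2
    (Einstein-de Sitter form)."""
    return 5.0/7.0 + 0.5*mu*(k1/k2 + k2/k1) + 2.0/7.0*mu**2

# equilateral configuration: k1 = k2 = k3 implies mu = -1/2
print(F2(1.0, 1.0, -0.5))   # -> 2/7, approximately 0.2857
```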
NASA Astrophysics Data System (ADS)
Voytishek, Anton V.; Shipilov, Nikolay M.
2017-11-01
In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: projection, mesh and projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantage of the mesh algorithms, related to the necessity of calculating the values of the kernels of the integral equations at fixed points, is identified. In practice, these kernels have integrable singularities, and calculation of their values at such points is impossible. Thus, for applied problems related to solving a Fredholm integral equation of the second kind, it is expedient to use not the mesh but rather the projection and projection-mesh randomized algorithms.
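A minimal randomized, mesh-free estimator of the kind classified here — a random-walk evaluation of the Neumann series for a second-kind Fredholm equation — can be sketched as follows. The test kernel, source term, and sampling choices below are illustrative assumptions with a known exact solution:

```python
import numpy as np

def fredholm_walk(f, K, x0, n_paths=20000, depth=12, seed=0):
    """Random-walk estimator for u(x0), where
        u(x) = f(x) + int_0^1 K(x, y) u(y) dy.
    Each path samples y ~ U(0, 1) at every step (transition density 1),
    multiplies the weight by K(x_prev, x_new), and scores f at every
    collision; averaging over paths sums the Neumann series."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        x, w, score = x0, 1.0, 0.0
        for _ in range(depth + 1):
            score += w * f(x)
            y = rng.random()
            w *= K(x, y)
            x = y
        total += score
    return total / n_paths

# degenerate test kernel K(x, y) = x*y with f(x) = x:
# the exact solution is u(x) = 1.5*x, so u(0.5) = 0.75
est = fredholm_walk(lambda x: x, lambda x, y: x*y, 0.5)
```

Note that the estimator never needs kernel values at fixed mesh points, only at randomly sampled ones, which is the advantage the abstract attributes to the non-mesh algorithms.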
Testosterone and androstanediol glucuronide among men in NHANES III.
Duan, Chuan Wei; Xu, Lin
2018-03-09
Most androgen replacement therapies are based on serum testosterone alone, without measurement of total androgen activity. Whether those with low testosterone also have low levels of androgen activity is largely unknown. We hence examined the association between testosterone and androstanediol glucuronide (AG), a reliable measure of androgen activity, in a nationally representative sample of US men. Cross-sectional analysis was based on 1493 men from the Third National Health and Nutrition Examination Survey (NHANES III) conducted from 1988 to 1991. Serum testosterone and AG were measured by immunoassay. Kernel density estimation was used to estimate the density of serum AG concentrations by quartiles of testosterone. Testosterone was weakly and positively correlated with AG (correlation coefficient = 0.18). The kernel density estimates show that the distributions are quite similar across the quartiles of testosterone. After adjustment for age, the distributions of AG within quartiles of testosterone did not change. The correlation between testosterone and AG was stronger in men with younger age, lower body mass index, non-smoking status and good self-rated health. Serum testosterone is weakly correlated with total androgen activity, and the correlation is even weaker in those with poor self-rated health. Our results suggest that measurement of total androgen activity in addition to testosterone is necessary in clinical practice, especially before administration of androgen replacement therapy.
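The kernel-density-by-quartiles analysis can be sketched with a plain Gaussian KDE on simulated data. The data generation below is invented for illustration and only mimics the reported weak correlation; it is not NHANES data, and the bandwidth is an assumption:

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Plain Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(1) / (len(samples) * bandwidth * np.sqrt(2*np.pi))

# simulated analogue of the analysis: weakly correlated testosterone and AG
rng = np.random.default_rng(0)
testosterone = rng.lognormal(1.7, 0.3, 1493)
ag = 0.18 * testosterone + rng.normal(0.0, 1.0, 1493) + 8.0

grid = np.linspace(ag.min(), ag.max(), 200)
q = np.quantile(testosterone, [0.25, 0.5, 0.75])
# AG densities for the bottom and top testosterone quartiles
densities = [gaussian_kde(ag[m], grid, 0.4)
             for m in (testosterone <= q[0], testosterone > q[2])]
```

With a weak correlation, the two estimated AG densities overlap heavily, which is the qualitative pattern the study reports.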
Congested Aggregation via Newtonian Interaction
NASA Astrophysics Data System (ADS)
Craig, Katy; Kim, Inwon; Yao, Yao
2018-01-01
We consider a congested aggregation model that describes the evolution of a density through the competing effects of nonlocal Newtonian attraction and a hard height constraint. This provides a counterpoint to existing literature on repulsive-attractive nonlocal interaction models, where the repulsive effects instead arise from an interaction kernel or the addition of diffusion. We formulate our model as the Wasserstein gradient flow of an interaction energy, with a penalization to enforce the constraint on the height of the density. From this perspective, the problem can be seen as a singular limit of the Keller-Segel equation with degenerate diffusion. Two key properties distinguish our problem from previous work on height constrained equations: nonconvexity of the interaction kernel (which places the model outside the scope of classical gradient flow theory) and nonlocal dependence of the velocity field on the density (which causes the problem to lack a comparison principle). To overcome these obstacles, we combine recent results on gradient flows of nonconvex energies with viscosity solution theory. We characterize the dynamics of patch solutions in terms of a Hele-Shaw type free boundary problem and, using this characterization, show that in two dimensions patch solutions converge to a characteristic function of a disk in the long-time limit, with an explicit rate on the decay of the energy. We believe that a key contribution of the present work is our blended approach, combining energy methods with viscosity solution theory.
Production of LEU Fully Ceramic Microencapsulated Fuel for Irradiation Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrani, Kurt A; Kiggans Jr, James O; McMurray, Jake W
2016-01-01
Fully Ceramic Microencapsulated (FCM) fuel consists of tristructural isotropic (TRISO) fuel particles embedded inside a SiC matrix. This fuel inherently possesses multiple barriers to fission product release, namely the various coating layers in the TRISO fuel particle as well as the dense SiC matrix that hosts these particles. This, coupled with the excellent oxidation resistance of the SiC matrix and of the SiC coating layer in the TRISO particle, designates this concept as an accident tolerant fuel (ATF). The FCM fuel takes advantage of uranium nitride kernels, instead of the oxide or oxide-carbide kernels used in high temperature gas reactors, to enhance heavy metal loading in the highly moderated LWRs. Production of these kernels with appropriate density, coating layer development to produce UN TRISO particles, and consolidation of these particles inside a SiC matrix have been codified thanks to significant R&D supported by the US DOE Fuel Cycle R&D program. Also, surrogate FCM pellets (pellets with zirconia instead of uranium-bearing kernels) have been neutron irradiated, and the stability of the matrix and coating layer under LWR irradiation conditions has been established. Currently the focus is on production of LEU (7.3% U-235 enrichment) FCM pellets to be utilized for irradiation testing. The irradiation is planned at INL's Advanced Test Reactor (ATR). This is a critical step in the development of this fuel concept to establish the ability of this fuel to retain fission products under prototypical irradiation conditions.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to produce user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth of the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
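The weighted summation of kernel-based PDFs can be sketched as follows; the vent locations, bandwidths, and weights below are toy assumptions, not QVAST's bandwidth-selection output:

```python
import numpy as np

def gauss2d_pdf(grid_xy, centers, h):
    """2-D Gaussian kernel density estimate with isotropic bandwidth h."""
    d2 = ((grid_xy[:, None, :] - centers[None, :, :])**2).sum(-1)
    return np.exp(-0.5 * d2 / h**2).sum(1) / (len(centers) * 2*np.pi*h**2)

def susceptibility(grid_xy, datasets, bandwidths, weights):
    """Total susceptibility as a weighted sum of per-dataset PDFs,
    with the weights renormalised to sum to one."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    pdfs = [gauss2d_pdf(grid_xy, c, h) for c, h in zip(datasets, bandwidths)]
    return sum(wi * p for wi, p in zip(w, pdfs))

# toy vent datasets (e.g. mapped vents and eruptive fissures)
rng = np.random.default_rng(2)
vents_a = rng.normal([0.0, 0.0], 0.5, (40, 2))
vents_b = rng.normal([2.0, 1.0], 0.3, (15, 2))
xs, ys = np.meshgrid(np.linspace(-3, 5, 80), np.linspace(-3, 4, 70))
grid = np.column_stack([xs.ravel(), ys.ravel()])
s = susceptibility(grid, [vents_a, vents_b], [0.4, 0.3], [0.7, 0.3])
```

Because the weights are renormalised, the combined map remains a valid probability density over the study area.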
Comparative analysis of genetic architectures for nine developmental traits of rye.
Masojć, Piotr; Milczarski, P; Kruszona, P
2017-08-01
Genetic architectures of plant height, stem thickness, spike length, awn length, heading date, thousand-kernel weight, kernel length, leaf area and chlorophyll content were aligned on the DArT-based high-density map of the 541 × Ot1-3 RILs population of rye using the genes interaction assorting by divergent selection (GIABDS) method. Complex sets of QTL for particular traits contained 1-5 loci of the epistatic D class and 10-28 loci of the hypostatic, mostly R and E, classes controlling trait variation through D-E or D-R types of two-locus interactions. QTL were distributed on each of the seven rye chromosomes in unique positions or as coinciding loci for 2-8 traits. Detection of considerable numbers of the reversed (D', E' and R') classes of QTL might be attributed to the transgression effects observed for most of the studied traits. The first examples of the E* and F QTL classes defined in the model are reported for awn length, leaf area, thousand-kernel weight and kernel length. The results of this study extend the experimental data to 11 quantitative traits (together with pre-harvest sprouting and alpha-amylase activity) for which the genetic architectures fit the model of the mechanism underlying allele distribution within the tails of bi-parental populations. They are also a valuable starting point for a map-based search for genes underlying the detected QTL and for planning advanced marker-assisted multi-trait breeding strategies.
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1987-01-01
The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.
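The factorization scheme above is specific to the paper; as a rough, hypothetical illustration of the underlying idea (approximating feedback gains for a delay system by discretization), the sketch below Euler-discretises a scalar delay equation, keeps the delayed values in an augmented state, and iterates the discrete Riccati equation. The function name `delay_lqr` and all parameter values are invented:

```python
import numpy as np

def delay_lqr(a, a1, b, h, N=20, q=1.0, r=1.0, iters=5000):
    """Approximate LQR for x'(t) = a*x(t) + a1*x(t-h) + b*u(t):
    Euler step dt = h/N, augmented state [x(t), x(t-dt), ..., x(t-h)],
    then discrete Riccati value iteration. The gains on the delayed
    components play the role of the feedback kernel."""
    dt = h / N
    n = N + 1
    A = np.zeros((n, n))
    A[0, 0] = 1.0 + a*dt          # current state
    A[0, N] = a1*dt               # delayed state x(t - h)
    A[1:, :-1] = np.eye(N)        # shift register of past values
    B = np.zeros((n, 1)); B[0, 0] = b*dt
    Q = np.zeros((n, n)); Q[0, 0] = q*dt
    R = np.array([[r*dt]])
    P = np.zeros((n, n))
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_next - P)) < 1e-12:
            break
        P = P_next
    return A, B, K

# an open-loop unstable delay system that the approximate gains stabilise
A, B, K = delay_lqr(a=0.5, a1=-0.2, b=1.0, h=1.0)
rho_open = max(abs(np.linalg.eigvals(A)))
rho_closed = max(abs(np.linalg.eigvals(A - B @ K)))
```

Refining the discretization (larger N) is the crude analogue of the convergence-rate question the abstract addresses rigorously.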
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1988-01-01
The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.
Small-scale modification to the lensing kernel
NASA Astrophysics Data System (ADS)
Hadzhiyska, Boryana; Spergel, David; Dunkley, Joanna
2018-02-01
Calculations of the cosmic microwave background (CMB) lensing power implemented in the standard cosmological codes such as camb and class usually treat the surface of last scatter as an infinitely thin screen. However, since the CMB anisotropies are smoothed out on scales smaller than the diffusion length due to the effect of Silk damping, the photons which carry information about the small-scale density distribution come from slightly earlier times than the standard recombination time. The dominant effect is the scale dependence of the mean redshift associated with the fluctuations during recombination. We find that fluctuations at k = 0.01 Mpc⁻¹ come from a characteristic redshift of z ≈ 1090, while fluctuations at k = 0.3 Mpc⁻¹ come from a characteristic redshift of z ≈ 1130. We then estimate the corrections to the lensing kernel and the related power spectra due to this effect. We conclude that neglecting it would result in a deviation from the true value of the lensing kernel at the half-percent level on small CMB scales. For an all-sky, noise-free experiment, this corresponds to a ~0.1σ shift in the observed temperature power spectrum on small scales (2500 ≲ l ≲ 4000).
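As a toy check of the geometry involved (not the paper's calculation), the sketch below compares the geometric lensing efficiency χ(χ*−χ)/χ* for source distances evaluated at the two characteristic redshifts quoted above, under an assumed flat ΛCDM background; the cosmological parameters and function names are illustrative, and all cosmological prefactors are omitted:

```python
import numpy as np

def comoving_distance(z, Om=0.31, Orad=9.2e-5, H0=67.7):
    """Flat LambdaCDM comoving distance in Mpc (simple trapezoid integral)."""
    c = 299792.458                       # speed of light, km/s
    zz = np.linspace(0.0, z, 8000)
    Ez = np.sqrt(Om*(1+zz)**3 + Orad*(1+zz)**4 + (1.0 - Om - Orad))
    f = c / (H0*Ez)
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(zz)))

def lensing_efficiency(chi, chi_star):
    """Geometric part of the CMB lensing kernel, chi*(chi_star - chi)/chi_star,
    for a source screen at comoving distance chi_star (prefactors omitted)."""
    return chi*(chi_star - chi)/chi_star

# source distances for the two scales quoted in the abstract
chi_large = comoving_distance(1090.0)    # k ~ 0.01 / Mpc
chi_small = comoving_distance(1130.0)    # k ~ 0.3  / Mpc
chi_lens = comoving_distance(2.0)        # a typical lens at z = 2
frac = (lensing_efficiency(chi_lens, chi_small)
        - lensing_efficiency(chi_lens, chi_large)) / lensing_efficiency(chi_lens, chi_large)
```

The purely geometric shift is small and positive (the small-scale screen sits slightly farther away); the half-percent figure quoted above includes the full effect on the kernel and spectra, which this sketch does not reproduce.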
Knowledge Driven Image Mining with Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj
2004-01-01
This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper we present the theory of Mercer Kernels, describe their use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods to the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
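One common way to build a Mercer kernel directly from data, in the spirit of the mixture-density construction referenced in these records, is to take inner products of mixture-component posteriors: K(x, y) = Σ_c P(c|x) P(c|y), which is symmetric positive semidefinite by construction. The sketch below uses a fixed two-component 1-D mixture rather than one learned from data, so all parameter values are assumptions:

```python
import numpy as np

def posteriors(x, means, sigmas, weights):
    """Component posteriors P(c | x) of a 1-D Gaussian mixture."""
    x = np.asarray(x, float)[:, None]
    p = weights * np.exp(-0.5*((x - means)/sigmas)**2) / (sigmas*np.sqrt(2*np.pi))
    return p / p.sum(1, keepdims=True)

def mixture_density_kernel(x, means, sigmas, weights):
    """Mercer kernel K(x_i, x_j) = sum_c P(c|x_i) P(c|x_j): the inner
    product of posterior vectors, hence symmetric and PSD."""
    P = posteriors(x, means, sigmas, weights)
    return P @ P.T

x = np.array([-2.0, -1.5, 0.1, 1.4, 2.2])
K = mixture_density_kernel(x,
                           means=np.array([-1.5, 1.5]),
                           sigmas=np.array([0.7, 0.7]),
                           weights=np.array([0.5, 0.5]))
```

Points assigned to the same mixture component get kernel values near one, and points in different components near zero, so prior knowledge encoded in the mixture shapes the similarity measure.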
Li, Zhendong; Liu, Wenjian
2010-08-14
The spin-adaptation of single-reference quantum chemical methods for excited states of open-shell systems has been nontrivial. The primary reason is that the configuration space, generated by a truncated rank of excitations from only one component of a reference multiplet, is spin-incomplete. Those "missing" configurations are of higher ranks and can, in principle, be recaptured by a particular class of excitation operators. However, the resulting formalisms are then quite involved and there are situations [e.g., time-dependent density functional theory (TD-DFT) under the adiabatic approximation] that prevent one from doing so. To solve this issue, we propose here a tensor-coupling scheme that invokes all the components of a reference multiplet (i.e., a tensor reference) rather than increases the excitation ranks. A minimal spin-adapted n-tuply excited configuration space can readily be constructed by tensor products between the n-tuple tensor excitation operators and the chosen tensor reference. Further combined with the tensor equation-of-motion formalism, very compact expressions for excitation energies can be obtained. As a first application of this general idea, a spin-adapted open-shell random phase approximation is first developed. The so-called "translation rule" is then adopted to formulate a spin-adapted, restricted open-shell Kohn-Sham (ROKS)-based TD-DFT (ROKS-TD-DFT). Here, a particular symmetry structure has to be imposed on the exchange-correlation kernel. While the standard ROKS-TD-DFT can access only excited states due to singlet-coupled single excitations, i.e., only some of the singly excited states of the same spin (S(i)) as the reference, the new scheme can capture all the excited states of spin S(i)-1, S(i), or S(i)+1 due to both singlet- and triplet-coupled single excitations. The actual implementation and computation are very much like the (spin-contaminated) unrestricted Kohn-Sham-based TD-DFT. 
It is also shown that spin-contaminated spin-flip configuration interaction approaches can easily be spin-adapted via the tensor-coupling scheme.
Snake River Plain Geothermal Play Fairway Analysis - Phase 1 Raster Files
John Shervais
2015-10-09
Snake River Plain Play Fairway Analysis - Phase 1 CRS Raster Files. This dataset contains raster files created in ArcGIS. These raster images depict Common Risk Segment (CRS) maps for HEAT, PERMEABILITY, AND SEAL, as well as selected maps of Evidence Layers. These evidence layers consist of either Bayesian krige functions or kernel density functions, and include: (1) HEAT: Heat flow (Bayesian krige map), heat flow standard error on the krige function (data confidence), volcanic vent distribution as a function of age and size, groundwater temperature (equal-interval and natural-breaks bins), and groundwater temperature standard error. (2) PERMEABILITY: Fault and lineament maps, both as mapped and as kernel density functions, processed for both dilational tendency (TD) and slip tendency (ST), along with data confidence maps for each data type. Data types include mapped surface faults from USGS and Idaho Geological Survey databases, as well as unpublished mapping; lineations derived from maximum gradients in magnetic, deep gravity, and intermediate-depth gravity anomalies. (3) SEAL: Seal maps based on presence and thickness of lacustrine sediments and base of the SRP aquifer. Raster size is 2 km. All files generated in ArcGIS.
Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing
2012-01-01
Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method.
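The evaluation metric named in this abstract, Tanimoto similarity between a ground-truth mask and a segmentation result, can be sketched directly; the binary masks below are toy stand-ins for the simulated spherical targets:

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity |A ∩ B| / |A ∪ B| of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True          # toy target region (16 pixels)
seg = np.zeros_like(truth)
seg[3:6, 2:6] = True            # segmentation that misses one row (12 pixels)
print(tanimoto(truth, seg))     # 12 / 16 = 0.75
```

A higher Tanimoto value indicates a segmentation (e.g. from the GMM/KDE hybrid) that overlaps the target more completely.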
Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E
2015-01-07
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than in other methods.
Sugar Efflux from Maize (Zea mays L.) Pedicel Tissue 1
Porter, Gregory A.; Knievel, Daniel P.; Shannon, Jack C.
1985-01-01
Sugar release from the pedicel tissue of maize (Zea mays L.) kernels was studied by removing the distal portion of the kernel and the lower endosperm, followed by replacement of the endosperm with an agar solute trap. Sugars were unloaded into the apoplast of the pedicel and accumulated in the agar trap while the ear remained attached to the maize plant. The kinetics of 14C-assimilate movement into treated versus intact kernels were comparable. The rate of unloading declined with time, but sugar efflux from the pedicel continued for at least 6 hours and in most experiments the unloading rates approximated those necessary to support normal kernel growth rates. The unloading process was challenged with a variety of buffers, inhibitors, and solutes in order to characterize sugar unloading from this tissue. Unloading was not affected by apoplastic pH or a variety of metabolic inhibitors. Although p-chloromercuribenzene sulfonic acid (PCMBS), a nonpenetrating sulfhydryl group reagent, did not affect sugar unloading, it effectively inhibited extracellular acid invertase. When the pedicel cups were pretreated with PCMBS, at least 60% of sugars unloaded from the pedicel could be identified as sucrose. Unloading was inhibited up to 70% by 10 millimolar CaCl2. Unloading was stimulated by 15 millimolar ethyleneglycol-bis(β-aminoethyl ether)-N,N,N′,N′-tetraacetic acid which partially reversed the inhibitory effects of Ca2+. Based on these results, we suggest that passive efflux of sucrose occurs from the maize pedicel symplast followed by extracellular hydrolysis to hexoses. PMID:16664091
Tao, Chenyang; Feng, Jianfeng
2016-03-15
Quantifying associations in neuroscience (and many other scientific disciplines) is often challenged by high dimensionality, nonlinearity and noisy observations. Many classic methods have either poor power or poor scalability on datasets of the same or different scales, such as genetic, physiological and imaging data. Based on the framework of reproducing kernel Hilbert spaces, we proposed a new nonlinear association criterion (NAC) with an efficient numerical algorithm and p-value approximation scheme. We also presented mathematical justification that links the proposed method to related methods such as kernel generalized variance, kernel canonical correlation analysis and the Hilbert-Schmidt independence criterion. NAC allows the detection of association between arbitrary input domains as long as a characteristic kernel is defined. A MATLAB package was provided to facilitate applications. Extensive simulation examples and four real-world neuroscience examples, including functional MRI causality, calcium imaging and imaging genetic studies on autism [Brain, 138(5):1382-1393 (2015)] and alcohol addiction [PNAS, 112(30):E4085-E4093 (2015)], are used to benchmark NAC. It demonstrates superior performance over the existing procedures we tested and also yields biologically significant results for the real-world examples. NAC beats its linear counterparts when nonlinearity is present in the data. It also shows more robustness against different experimental setups compared with its nonlinear counterparts. In this work we presented a new and robust statistical approach, NAC, for measuring associations. It could serve as an interesting alternative to the existing methods for datasets where nonlinearity and other confounding factors are present. Copyright © 2016 Elsevier B.V. All rights reserved.
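Of the related criteria the abstract mentions, the Hilbert-Schmidt independence criterion (HSIC) is the simplest to sketch: with characteristic (e.g. Gaussian) kernels, a nonzero population HSIC detects arbitrary nonlinear association. This is an illustrative HSIC estimator, not the NAC implementation itself; the kernel width and data are assumptions:

```python
import numpy as np

def gram(x, sigma=1.0):
    """Gaussian Gram matrix for a 1-D sample."""
    sq = (x[:, None] - x[None, :]) ** 2
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased HSIC estimate trace(K H L H) / n^2 with centering matrix H."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(gram(x, sigma) @ H @ gram(y, sigma) @ H) / n ** 2

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y_dep = x ** 2                      # nonlinear dependence, near-zero correlation
y_ind = rng.normal(size=200)        # independent noise
print(hsic(x, y_dep), hsic(x, y_ind))
```

The quadratic dependence is nearly invisible to linear correlation (cov(x, x²) ≈ 0 for symmetric x) but gives a clearly larger HSIC than the independent pair, which is the kind of case where kernel criteria beat their linear counterparts.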
Generation of a novel phase-space-based cylindrical dose kernel for IMRT optimization.
Zhong, Hualiang; Chetty, Indrin J
2012-05-01
Improving dose calculation accuracy is crucial in intensity-modulated radiation therapy (IMRT). We have developed a method for generating a phase-space-based dose kernel for IMRT planning of lung cancer patients. Particle transport in the linear accelerator treatment head of a 21EX, 6 MV photon beam (Varian Medical Systems, Palo Alto, CA) was simulated using the EGSnrc/BEAMnrc code system. The phase space information was recorded under the secondary jaws. Each particle in the phase space file was associated with a beamlet whose index was calculated and saved in the particle's LATCH variable. The DOSXYZnrc code was modified to accumulate the energy deposited by each particle based on its beamlet index. Furthermore, the central axis of each beamlet was calculated from the orientation of all the particles in this beamlet. A cylinder was then defined around the central axis so that only the energy deposited within the cylinder was counted. A look-up table was established for each cylinder during the tallying process. The efficiency and accuracy of the cylindrical beamlet energy deposition approach was evaluated using a treatment plan developed on a simulated lung phantom. Profile and percentage depth doses computed in a water phantom for an open, square field size were within 1.5% of measurements. Dose optimized with the cylindrical dose kernel was found to be within 0.6% of that computed with the nontruncated 3D kernel. The cylindrical truncation reduced optimization time by approximately 80%. A method for generating a phase-space-based dose kernel, using a truncated cylinder for scoring dose, in beamlet-based optimization of lung treatment planning was developed and found to be in good agreement with the standard, nontruncated scoring approach. Compared to previous techniques, our method significantly reduces computational time and memory requirements, which may be useful for Monte-Carlo-based 4D IMRT or IMAT treatment planning.
Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa
2017-09-01
A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and that each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of the kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects from big noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, random kernel sinks for fast kernel matrix approximation and a two-stage algorithm for optimizing the initial pure endmembers are utilized to improve the computational efficiency of RKADA in realistic implementations. The optimization equation of RKADA is solved by using the block coordinate descent scheme and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed to make comparisons with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms, vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms, the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all the six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE).
Moreover, the RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with smaller spectrum differences. Therefore, the RKADA could be an alternative for pure endmember extraction from hyperspectral images.
Mueck, F G; Michael, L; Deak, Z; Scherr, M K; Maxien, D; Geyer, L L; Reiser, M; Wirth, S
2013-07-01
To compare the image quality in dose-reduced 64-row CT of the chest at different levels of adaptive statistical iterative reconstruction (ASIR) to full-dose baseline examinations reconstructed solely with filtered back projection (FBP) in a realistic upgrade scenario. A waiver of consent was granted by the institutional review board (IRB). The noise index (NI) relates to the standard deviation of Hounsfield units in a water phantom. Baseline exams of the chest (NI = 29; LightSpeed VCT XT, GE Healthcare) were intra-individually compared to follow-up studies on a CT with ASIR after system upgrade (NI = 45; Discovery HD750, GE Healthcare), n = 46. Images were calculated in slice and volume mode with ASIR levels of 0 - 100 % in the standard and lung kernel. Three radiologists independently compared the image quality to the corresponding full-dose baseline examinations (-2: diagnostically inferior, -1: inferior, 0: equal, + 1: superior, + 2: diagnostically superior). Statistical analysis used Wilcoxon's test, Mann-Whitney U test and the intraclass correlation coefficient (ICC). The mean CTDIvol decreased by 53 % from the FBP baseline to 8.0 ± 2.3 mGy for ASIR follow-ups; p < 0.001. The ICC was 0.70. Regarding the standard kernel, the image quality in dose-reduced studies was comparable to the baseline at ASIR 70 % in volume mode (-0.07 ± 0.29, p = 0.29). Concerning the lung kernel, every ASIR level outperformed the baseline image quality (p < 0.001), with ASIR 30 % rated best (slice: 0.70 ± 0.6, volume: 0.74 ± 0.61). Vendors' recommendation of 50 % ASIR is fair. In detail, the ASIR 70 % in volume mode for the standard kernel and ASIR 30 % for the lung kernel performed best, allowing for a dose reduction of approximately 50 %. © Georg Thieme Verlag KG Stuttgart · New York.
Arimboor, Ranjith; Kumar, K Sarin; Arumughan, C
2008-05-12
A RP-HPLC-DAD method was developed and validated for the simultaneous analysis of nine phenolic acids, including gallic acid, protocatechuic acid, p-hydroxybenzoic acid, vanillic acid, salicylic acid, p-coumaric acid, cinnamic acid, caffeic acid and ferulic acid, in sea buckthorn (SB) (Hippophaë rhamnoides) berries and leaves. The method was validated in terms of linearity, LOD, precision, accuracy and recovery and found to be satisfactory. Phenolic acid derivatives in anatomical parts of SB berries and leaves were separated into free phenolic acids, phenolic acids bound as esters and phenolic acids bound as glycosides, and profiled by HPLC. Berry pulp contained a total of 1068 mg/kg phenolic acids, of which 58.8% was derived from phenolic glycosides. Free phenolic acids and phenolic acid esters constituted 20.0% and 21.2%, respectively, of total phenolic acids in SB berry pulp. The total phenolic acid content in seed kernel (5741 mg/kg) was higher than that in berry pulp and seed coat (Table 2). Phenolic acids liberated from soluble esters constituted the major fraction of phenolic acids (57.3% of total phenolic acids) in seed kernel. Free phenolic acids and phenolic acids liberated from glycosidic bonds contributed 8.4% and 34.3%, respectively, of total phenolic acids in seed kernel. The total soluble phenolic acid content in seed coat (448 mg/kg) was lower than that in seed kernel and pulp (Table 2). The proportion of free phenolic acids in total phenolic acids in seed coat was higher than that in seed kernel and pulp. Phenolic acids bound as esters and glycosides contributed 49.1% and 20.3%, respectively, of total phenolic acids in seed coat. The major fraction (approximately 70%) of phenolic acids in SB berries was found to be concentrated in the seeds. Gallic acid was the predominant phenolic acid, both in free and bound forms, in SB berry parts and leaves.
Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brambilla, N.; Prosperi, G.M.
1992-08-01
We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than to resort to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen in order to reproduce in the instantaneous approximation a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.
Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features
NASA Astrophysics Data System (ADS)
Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios
2018-04-01
We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way for standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the network-wise regret are also provided. Simulated tests illustrate the performance of the proposed scheme.
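The core Random Fourier Features idea can be sketched as follows; the dimensions and kernel width are illustrative, and this is the generic Gaussian-kernel construction rather than the authors' diffusion protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 3, 2000, 1.0   # input dim, fixed feature dim, kernel width

# Spectral samples of the Gaussian kernel (Bochner's theorem) and random phases.
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    """Fixed-size map with z(x)·z(y) ≈ exp(-||x - y||^2 / (2 sigma^2))."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = z(x) @ z(y)
print(abs(exact - approx))   # shrinks like 1/sqrt(D)
```

Because z(x) has a fixed size D regardless of how many samples have been seen, each network node can run an ordinary linear adaptive filter on z(x) and exchange fixed-size vectors with its neighbors, which is the point of the paper's combine-then-adapt setting.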
A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.
Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D
2014-02-01
In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo-Zernike and Zernike color moments, and their corresponding invariants, are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants.
SOM-based nonlinear least squares twin SVM via active contours for noisy image segmentation
NASA Astrophysics Data System (ADS)
Xie, Xiaomin; Wang, Tingting
2017-02-01
In this paper, a nonlinear least squares twin support vector machine (NLSTSVM) with the integration of an active contour model (ACM) is proposed for noisy image segmentation. Efforts have been made to seek kernel-generated surfaces instead of hyperplanes for the pixels belonging to the foreground and background, respectively, using the kernel trick to enhance the performance. Concurrent self-organizing maps (SOMs) are applied to approximate the intensity distributions in a supervised way, so as to establish the original training sets for the NLSTSVM. Further, the two sets are updated by adding the global region average intensities at each iteration. Moreover, a local variable regional term rather than an edge stop function is adopted in the energy function to improve the noise robustness. Experimental results demonstrate that our model achieves higher segmentation accuracy and greater robustness to noise.
Single image super-resolution via an iterative reproducing kernel Hilbert space method.
Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu
2016-11-01
Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.
Learning molecular energies using localized graph kernels.
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-21
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
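The ingredient GRAPE builds on, a random walk graph kernel between two adjacency matrices, has a standard closed form via the direct-product graph; the decay parameter and toy graphs below are illustrative, and this is the generic geometric walk kernel rather than the authors' code:

```python
import numpy as np

def random_walk_kernel(A, B, lam=0.05):
    """Geometric random-walk similarity ones^T (I - lam*A⊗B)^{-1} ones / n.

    Counts common walks of all lengths in the two graphs, with walks of
    length m discounted by lam^m; requires lam < 1/spectral_radius(A⊗B).
    """
    W = np.kron(A, B)                 # adjacency of the direct-product graph
    n = W.shape[0]
    ones = np.ones(n)
    return ones @ np.linalg.solve(np.eye(n) - lam * W, ones) / n

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
# the triangle shares more common walks with itself than with the path graph
print(random_walk_kernel(triangle, triangle) > random_walk_kernel(triangle, path))
```

In the molecular setting, A and B would encode local atomic environments (with edge weights reflecting species and distances), so that the kernel is automatically invariant to translations, rotations, and same-species permutations.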
A new treatment of nonlocality in scattering process
NASA Astrophysics Data System (ADS)
Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.
2018-01-01
Nonlocality in the scattering potential leads to an integro-differential equation. In this equation nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the r, r′ dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term to a homogeneous term. The effective local potential in this equation turns out to be energy independent, but has relative angular momentum dependence. This method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off 12C, 56Fe and 100Mo nuclei are calculated with this method in the low energy region (up to 10 MeV) and are found to be in reasonable accord with the experiments.
Meshfree truncated hierarchical refinement for isogeometric analysis
NASA Astrophysics Data System (ADS)
Atri, H. R.; Shojaee, S.
2018-05-01
In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.
Optimized Quasi-Interpolators for Image Reconstruction.
Sacht, Leonardo; Nehab, Diego
2015-12-01
We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.
GPU-Powered Coherent Beamforming
NASA Astrophysics Data System (ADS)
Magro, A.; Adami, K. Zarb; Hickish, J.
2015-03-01
Graphics processing unit (GPU)-based beamforming is a relatively unexplored area in radio astronomy, possibly due to the assumption that any such system will be severely limited by the PCIe bandwidth required to transfer data to the GPU. We have developed a CUDA-based GPU implementation of a coherent beamformer, specifically designed and optimized for deployment at the BEST-2 array, which can generate an arbitrary number of synthesized beams for a wide range of parameters. It achieves ˜1.3 TFLOPs on an NVIDIA Tesla K20, approximately 10x faster than an optimized, multithreaded CPU implementation. This kernel has been integrated into two real-time, GPU-based time-domain software pipelines deployed at the BEST-2 array in Medicina: a standalone beamforming pipeline and a transient detection pipeline. We present performance benchmarks for the beamforming kernel and for the transient detection pipeline with beamforming capabilities, together with the results of a test observation.
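A minimal shift-and-sum coherent beamformer illustrates what such a kernel computes per beam; the array geometry and observing frequency below are hypothetical stand-ins, not the BEST-2 configuration.

```python
import numpy as np

# Shift-and-sum coherent beamformer on a uniform linear array. A unit-
# amplitude plane wave arrives from theta0; steering weights phase-align
# the element voltages so the on-source beam sums coherently.
c, freq = 3.0e8, 408e6                    # hypothetical observing frequency
lam = c / freq
n_ant, spacing = 32, lam / 2              # half-wavelength spacing (assumed)
pos = np.arange(n_ant) * spacing

theta0 = np.deg2rad(20.0)                 # true source direction
signal = np.exp(2j * np.pi * pos * np.sin(theta0) / lam)

def beam_power(theta):
    w = np.exp(-2j * np.pi * pos * np.sin(theta) / lam)   # steering weights
    return np.abs(np.sum(w * signal)) ** 2

p_on = beam_power(theta0)                 # coherent sum: n_ant**2
p_off = beam_power(np.deg2rad(-40.0))     # far sidelobe: much weaker
print(p_on, p_off)
```

Each synthesized beam is just a different weight vector applied to the same element data, which is why many beams can be formed from one data transfer.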
Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted, one that potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
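The interpolation-versus-smoothing contrast can be sketched on synthetic gridded counts: Gaussian kernel smoothing suppresses an isolated sampling artefact that interpolation would reproduce faithfully. This is illustrative only, not the paper's R-script.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic grid of cell counts per sampling site, with one spurious spike
# standing in for a sampling artefact.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=50, size=(40, 40)).astype(float)
counts[20, 20] = 500.0                    # outlier an interpolator would keep

# Gaussian kernel smoothing: the bandwidth (sigma) controls how strongly
# local noise and artefacts are suppressed.
smoothed = gaussian_filter(counts, sigma=2.0)
print(counts.max(), smoothed.max())       # the spike is strongly attenuated
```

As the abstract notes, the choice of smoothing parameter (here `sigma`) directly shapes the resulting map, so it should be reported alongside any topographic map.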
PMID:24747568
Hanft, J M; Jones, R J
1986-06-01
Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.
NASA Astrophysics Data System (ADS)
Wittek, Peter; Calderaro, Luca
2015-12-01
We extended a parallel and distributed implementation of the Trotter-Suzuki algorithm for simulating quantum systems to study a wider range of physical problems and to make the library easier to use. The new release allows periodic boundary conditions, many-body simulations of non-interacting particles, arbitrary stationary potential functions, and imaginary time evolution to approximate the ground state energy. The new release is more resilient to the computational environment: a wider range of compiler chains and more platforms are supported. To ease development, we provide a more extensive command-line interface, an application programming interface, and wrappers from high-level languages.
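The underlying Trotter-Suzuki idea, splitting the evolution operator into short-time factors, can be sketched on small matrices; this is a toy stand-in for the library's distributed kernels.

```python
import numpy as np
from scipy.linalg import expm

# First-order Trotter splitting e^{-(T+V)} ~ (e^{-T/n} e^{-V/n})^n on small
# random symmetric matrices standing in for kinetic and potential terms.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); T = (A + A.T) / 2
B = rng.standard_normal((6, 6)); V = (B + B.T) / 2

exact = expm(-(T + V))

def trotter(n):
    step = expm(-T / n) @ expm(-V / n)
    return np.linalg.matrix_power(step, n)

errs = [np.linalg.norm(trotter(n) - exact) for n in (4, 16, 64)]
print(errs)   # error shrinks roughly like 1/n
```

Iterating such imaginary-time steps on a normalized state is also the essence of the ground-state projection the new release supports.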
Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls
Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.
2013-01-01
As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method – Tango’s statistic – to genomic sequence data. An advantage of Tango’s method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango’s statistic, which we call the “Kernel Distance” statistic, took approximately half as long to compute as the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff’s scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950
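A Tango-style quadratic-form statistic can be sketched as follows; the marker positions, frequency differences, and the Gaussian distance kernel are toy assumptions, not the authors' exact formulation.

```python
import numpy as np

def kernel_distance_stat(diff, pos, tau=50.0):
    # Quadratic form in a Gaussian kernel of genomic distance: nearby
    # case-control differences reinforce each other, distant ones do not.
    K = np.exp(-((pos[:, None] - pos[None, :]) / tau) ** 2)
    return float(diff @ K @ diff)

pos = np.array([100.0, 150, 160, 170, 900, 2000])    # toy marker positions (bp)
diff = np.array([0.01, 0.04, 0.03, 0.05, 0.0, 0.0])  # case-control freq. diffs

clustered = kernel_distance_stat(diff, pos)
spread = kernel_distance_stat(diff[[1, 0, 4, 5, 3, 2]], pos)  # big diffs far apart
print(clustered, spread)   # clustering inflates the statistic
```

Because the statistic is a quadratic form, the scaled chi-square approximation to its null distribution mentioned in the abstract applies, which is what makes p-values cheap.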
Fernandes, N M; Pinto, B D L; Almeida, L O B; Slaets, J F W; Köberle, R
2010-10-01
We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices using a convenient set of basis functions to expand our variables in. This requires approximating the spike train four-point functions by combinations of two-point functions similar to relations, which would be true for gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution to stimulus reconstruction of the second-order kernels, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.
7 CFR 810.602 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...
Near-field limitations of Fresnel-regime coherent diffraction imaging
Pound, Benjamin A.; Barber, John L.; Nguyen, Kimberly; ...
2017-08-04
Coherent diffraction imaging (CDI) is a rapidly developing form of imaging that offers the potential of wavelength-limited resolution without image-forming lenses. In CDI, the intensity of the diffraction pattern is measured directly by the detector, and various iterative phase retrieval algorithms are used to “invert” the diffraction pattern and reconstruct a high-resolution image of the sample. However, there are certain requirements in CDI that must be met to reconstruct the object. Although most experiments are conducted in the “far-field”—or Fraunhofer—regime where the requirements are not as stringent, some experiments must be conducted in the “near field” where Fresnel diffraction must be considered. According to the derivation of Fresnel diffraction, successful reconstructions can only be obtained when the small-angle number, a derived quantity, is much less than one. We show, however, that it is not actually necessary to fulfill the small-angle condition. The Fresnel kernel well approximates the exact kernel in regions where the phase oscillates slowly, and in regions of fast oscillations, indicated by large A n , the error between kernels should be negligible due to stationary-phase arguments. Finally, we verify this conclusion experimentally with a helium-neon laser setup and show that it should hold at x-ray wavelengths as well.
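The size of the phase error between the exact and Fresnel kernels can be checked numerically. Only the HeNe wavelength is taken from the experiment; the propagation distance and field size below are illustrative choices.

```python
import numpy as np

# Phase of the exact spherical-wave kernel vs the Fresnel quadratic phase.
lam = 633e-9                      # HeNe laser wavelength
k = 2 * np.pi / lam
z = 0.05                          # 5 cm propagation distance (assumed)
r = np.linspace(0, 2e-3, 500)     # transverse radius out to 2 mm (assumed)

exact = k * np.sqrt(z**2 + r**2)
fresnel = k * z + k * r**2 / (2 * z)
phase_err = np.abs(exact - fresnel)    # leading term: k r^4 / (8 z^3)

print(phase_err.max())  # well under a radian even at the field edge
```

For these parameters the kernels agree to a fraction of a radian across the field, consistent with the claim that the quadratic phase is adequate wherever the residual phase error stays small.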
A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks
Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong
2016-01-01
In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption, caused by the need to transmit scattered training examples from the various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, model sparsity, communication cost, and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it converges in fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further demonstrates the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
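A centralized sketch of an ℓ1-regularized kernel least-squares fit, solved by plain ISTA (proximal gradient), shows the kind of sparse kernel model the distributed algorithm targets; the single-hop, in-network consensus step is not reproduced here, and the data and kernel width are toy choices.

```python
import numpy as np

# l1-regularized kernel least squares:
#   min_a  0.5 * ||K a - y||^2 + lam * ||a||_1,
# solved by ISTA: a gradient step followed by soft-thresholding.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

K = np.exp(-0.5 * (X - X.T) ** 2)               # Gaussian kernel matrix
alpha = np.zeros(60)
lam_reg = 0.05
step = 1.0 / np.linalg.norm(K.T @ K, 2)         # 1 / Lipschitz constant

for _ in range(2000):
    z = alpha - step * (K.T @ (K @ alpha - y))  # gradient step
    alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam_reg, 0.0)

err = np.mean((K @ alpha - y) ** 2)
print(err, np.mean(alpha == 0.0))               # training MSE and sparsity
```

The soft-thresholding step is what zeroes out expansion coefficients; in the distributed setting a sparse `alpha` is exactly what keeps the transmitted model small.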
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
Program summary
Program title: QCDNUM version 17.00
Catalogue identifier: AEHV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence
No. of lines in distributed program, including test data, etc.: 45 736
No. of bytes in distributed program, including test data, etc.: 911 569
Distribution format: tar.gz
Programming language: Fortran-77
Computer: All
Operating system: All
RAM: Typically 3 Mbytes
Classification: 11.5
Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections.
Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline coefficients by solving (coupled) triangular matrix equations with a forward substitution algorithm. Fast computation of convolution integrals as weighted sums of spline coefficients, with weights derived from user-given convolution kernels. Restrictions: Accuracy and speed are determined by the density of the evolution grid. Running time: Less than 10 ms on a 2 GHz Intel Core 2 Duo processor to evolve the gluon density and 12 quark densities at next-to-next-to-leading order over a large kinematic range.
Out-of-Sample Extensions for Non-Parametric Kernel Methods.
Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang
2017-02-01
Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
7 CFR 810.1202 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...
NASA Astrophysics Data System (ADS)
Fine, Dana S.; Sawin, Stephen
2017-01-01
Feynman's time-slicing construction approximates the path integral by a product, determined by a partition of a finite time interval, of approximate propagators. This paper formulates general conditions to impose on a short-time approximation to the propagator in a general class of imaginary-time quantum mechanics on a Riemannian manifold which ensure that these products converge. The limit defines a path integral which agrees pointwise with the heat kernel for a generalized Laplacian. The result is a rigorous construction of the propagator for supersymmetric quantum mechanics, with potential, as a path integral. Further, the class of Laplacians includes the square of the twisted Dirac operator, which corresponds to an extension of N = 1/2 supersymmetric quantum mechanics. General results on the rate of convergence of the approximate path integrals suffice in this case to derive the local version of the Atiyah-Singer index theorem.
Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang
2016-01-01
Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off of kernel size and yield components is discussed.
Statistics of primordial density perturbations from discrete seed masses
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.; Bertschinger, Edmund
1991-01-01
The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.
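The "discrete linear superposition of kernels" picture can be verified numerically: by the convolution theorem, the Fourier transform of a field built by dropping a kernel on each seed factorizes exactly into the kernel transform times a sum of per-seed phase factors. A 1D periodic toy example:

```python
import numpy as np

# Field = kernel convolved with a set of point seeds. Its DFT factorizes
# exactly into (kernel DFT) x (sum of per-seed phase factors).
n = 256
grid = np.zeros(n)
seeds = [17, 90, 91, 200]                 # toy seed positions
for s in seeds:
    grid[s] += 1.0

x = np.arange(n)
kernel = np.exp(-0.5 * (np.minimum(x, n - x) / 4.0) ** 2)   # periodic Gaussian
field = np.real(np.fft.ifft(np.fft.fft(grid) * np.fft.fft(kernel)))

lhs = np.fft.fft(field)
rhs = np.fft.fft(kernel) * sum(
    np.exp(-2j * np.pi * np.fft.fftfreq(n) * s) for s in seeds)
gap = np.max(np.abs(lhs - rhs))
print(gap)   # zero up to round-off
```

This factorization is why, for uncorrelated seed positions, the ensemble-averaged power spectrum separates into the kernel's window function times the seed shot noise.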
Egidi, Franco; Sun, Shichao; Goings, Joshua J; Scalmani, Giovanni; Frisch, Michael J; Li, Xiaosong
2017-06-13
We present a linear response formalism for the description of the electronic excitations of a noncollinear reference defined via Kohn-Sham spin density functional methods. A set of auxiliary variables, defined using the density and noncollinear magnetization density vector, allows the generalization of spin density functional kernels commonly used in collinear DFT to noncollinear cases, including local density, GGA, meta-GGA and hybrid functionals. Working equations and derivations of functional second derivatives with respect to the noncollinear density, required in the linear response noncollinear TDDFT formalism, are presented in this work. This formalism takes all components of the spin magnetization into account independent of the type of reference state (open or closed shell). As a result, the method introduced here is able to afford a nonzero local xc torque on the spin magnetization while still satisfying the zero-torque theorem globally. The formalism is applied to a few test cases using the variational exact-two-component reference including spin-orbit coupling to illustrate the capabilities of the method.
Density Deconvolution With EPI Splines
2015-09-01
effects of various substances on test subjects [11], [12]. Whereas in geophysics, a shot may be fired into the ground, in pharmacokinetics, a signal is...be significant, including medicine, bioinformatics, chemistry, astronomy, and econometrics, as well as an extensive review of kernel based methods...demonstrate the effectiveness of our model in simulations motivated by test instances in [32]. We consider an additive measurement model scenario where
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, N.S.V.
The classical Nadaraya-Watson estimator is shown to solve a generic sensor fusion problem where the underlying sensor error densities are not known but a sample is available. By employing Haar kernels this estimator is shown to yield finite sample guarantees and also to be efficiently computable. Two simulation examples, and a robotics example involving the detection of a door using arrays of ultrasonic and infrared sensors, are presented to illustrate the performance.
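For reference, the Nadaraya-Watson estimator is a kernel-weighted average of the observed responses; a Gaussian kernel is shown here for simplicity, whereas the report's finite-sample guarantees rely on Haar kernels.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h=0.3):
    # Kernel-weighted local average of the training responses.
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# Noisy "sensor" readings of an unknown curve; no error density is assumed,
# which is the point of the fusion setting in the report.
rng = np.random.default_rng(2)
x = rng.uniform(0, 2 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

xq = np.linspace(0.5, 5.5, 50)
est = nadaraya_watson(x, y, xq)
err = np.max(np.abs(est - np.sin(xq)))
print(err)   # small pointwise estimation error
```

Nothing about the noise distribution enters the estimator, only the sample itself, which is what makes it a natural fit for fusing sensors with unknown error densities.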
Nowcasting Cloud Fields for U.S. Air Force Special Operations
2017-03-01
application of Bayes’ Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods...reflectance and probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past...nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning 15. NUMBER OF PAGES
Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil
Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz
2012-01-01
After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been seen, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and the positive filariasis cases identified in this survey were georeferenced as points using GPS. A kernel intensity estimator was applied to identify clusters with a greater intensity of cases. We examined 23 673 individuals, and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0–5.6%. The male population was more affected by the infection, accounting for 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with a greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
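The kernel intensity idea can be sketched with a 2D Gaussian KDE over synthetic case coordinates (not the survey data): the estimated intensity is high inside a simulated cluster and low at the periphery.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic georeferenced cases: a dense focus of transmission plus a
# scattered background. Coordinates are arbitrary map units.
rng = np.random.default_rng(3)
hotspot = rng.normal(loc=0.0, scale=0.05, size=(150, 2))
background = rng.uniform(-1, 1, size=(50, 2))
cases = np.vstack([hotspot, background])

kde = gaussian_kde(cases.T)               # Gaussian kernel, automatic bandwidth
dense = kde([[0.0], [0.0]])[0]            # inside the focus
sparse = kde([[0.9], [0.9]])[0]           # periphery
print(dense, sparse)                      # intensity far higher in the focus
```

Evaluating the estimate on a regular grid and contouring it yields exactly the kind of intensity map used to prioritize surveillance areas.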
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-11-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
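The bandwidth is the smoothing parameter the abstract emphasizes; a Silverman-style rule of thumb, sketched below for a d-dimensional Gaussian kernel, is one common default, not necessarily the estimator QVAST selects for a given shapefile.

```python
import numpy as np

def silverman_bandwidth(points):
    # Silverman-type rule of thumb for a d-dimensional Gaussian kernel.
    n, d = points.shape
    sigma = np.mean(np.std(points, axis=0))
    return sigma * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4))

rng = np.random.default_rng(4)
vents = rng.normal(size=(100, 2))         # synthetic vent locations
h = silverman_bandwidth(vents)
print(h)   # more data -> smaller bandwidth -> less smoothing
```

The rule shrinks the bandwidth as the number of past vents grows, so dense historical records produce sharper probability density functions than sparse ones.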
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikell, J; Siman, W; Kappadath, S
2014-06-15
Purpose: 90Y microsphere therapy in liver presents a situation where beta transport is dominant and the tissue is relatively homogenous. We compare voxel-based absorbed doses from a 90Y kernel to Monte Carlo (MC) using quantitative 90Y bremsstrahlung SPECT/CT as the source distribution. Methods: Liver, normal liver, and tumors were delineated by an interventional radiologist using contrast-enhanced CT registered with 90Y SPECT/CT scans for 14 therapies. Right lung was segmented via region growing. The kernel was generated with 1.04 g/cc soft tissue for 4.8 mm voxels matching the SPECT. MC simulation materials included air, lung, soft tissue, and bone with varying densities. We report the percent difference between kernel and MC (%Δ(K,MC)) for mean absorbed dose, D70, and V20Gy in total liver, normal liver, tumors, and right lung. We also report %Δ(K,MC) for heterogeneity metrics: coefficient of variation (COV) and D10/D90. The impact of spatial resolution (0, 10, 20 mm FWHM) and lung shunt fraction (LSF) (1, 5, 10, 20%) on the accuracy of MC and kernel doses near the liver-lung interface was modeled in 1D. We report the distance from the interface where errors become <10% of unblurred MC as d10(side of interface, dose calculation, FWHM blurring, LSF). Results: The %Δ(K,MC) for mean, D70, and V20Gy in tumor and liver was <7%, while right lung differences varied from 60–90%. The %Δ(K,MC) for COV was <4.8% for tumor and liver and <54% for the right lung. The %Δ(K,MC) for D10/D90 was <5% for 22/23 tumors. d10(liver,MC,10,1–20) were <9mm and d10(liver,MC,20,1–20) were <15mm; both agreed within 3mm with the kernel. d10(lung,MC,10,20), d10(lung,MC,10,1), d10(lung,MC,20,20), and d10(lung,MC,20,1) were 6, 25, 15, and 34mm, respectively. Kernel calculations on blurred distributions in lung had errors >10%. Conclusions: Liver and tumor voxel doses with the 90Y kernel and MC agree within 7%. Large differences exist between the two methods in the right lung.
Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA138986. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
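The quasi-periodic covariance used in this kind of analysis is commonly written as a periodic term (the spot pattern repeating with the rotation period) damped by a squared-exponential envelope (spot evolution). A sketch with assumed hyperparameter names; amp, l_evol, gamma, and period are our labels, not the paper's:

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp=1.0, l_evol=30.0, gamma=1.0, period=10.0):
    """Quasi-periodic covariance k(tau) = amp^2 * exp(-tau^2/(2 l_evol^2)
    - gamma sin^2(pi tau / period)): periodic in the rotation period,
    decorrelating over the spot-evolution timescale l_evol."""
    tau = np.subtract.outer(t1, t2)
    return amp**2 * np.exp(
        -tau**2 / (2.0 * l_evol**2)
        - gamma * np.sin(np.pi * tau / period) ** 2
    )

# Covariance matrix for a 90-day toy light curve; a small jitter term
# keeps the Cholesky factorization numerically stable.
t = np.linspace(0.0, 90.0, 200)
K = quasi_periodic_kernel(t, t) + 1e-6 * np.eye(t.size)
kv = quasi_periodic_kernel(np.array([0.0]), np.array([0.0, 5.0, 10.0]))
```

In an MCMC fit, `period` is the parameter of interest and the other hyperparameters are marginalized over, which is what lets the method return a posterior rather than a point estimate.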
THREE-POINT PHASE CORRELATIONS: A NEW MEASURE OF NONLINEAR LARGE-SCALE STRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolstenhulme, Richard; Bonvin, Camille; Obreschkow, Danail
2015-05-10
We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the nonlinear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F_2, which governs the nonlinear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a 1σ agreement for separations r ≳ 30 h^-1 Mpc. Fitting formulae for the power spectrum and the nonlinear coupling kernel at small scales allow us to extend our prediction into the strongly nonlinear regime, where we find a 1σ agreement with the simulations for r ≳ 2 h^-1 Mpc. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the bias, in the regime where the bias is local and linear. Furthermore, the variance of the line correlation is independent of the Gaussian variance on the modulus of the density field. This suggests that the line correlation can probe more precisely the nonlinear regime of gravity, with less contamination from the power spectrum variance.
7 CFR 810.802 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
NASA Astrophysics Data System (ADS)
Chakraborty, Ahana; Sensarma, Rajdeep
2018-03-01
The Born-Markov approximation is widely used to study the dynamics of open quantum systems coupled to external baths. Using Keldysh formalism, we show that the dynamics of a system of bosons (fermions) linearly coupled to a noninteracting bosonic (fermionic) bath falls outside this paradigm if the bath spectral function has nonanalyticities as a function of frequency. In this case, we show that the dissipative and noise kernels governing the dynamics have distinct power-law tails. The Green's functions show a short-time "quasi"-Markovian exponential decay before crossing over to a power-law tail governed by the nonanalyticity of the spectral function. We study a system of bosons (fermions) hopping on a one-dimensional lattice, where each site is coupled linearly to an independent bath of noninteracting bosons (fermions). We obtain exact expressions for the Green's functions of this system, which show power-law decay ~|t - t'|^(-3/2). We use these to calculate the density and current profile, as well as unequal-time current-current correlators. While the density and current profiles show interesting quantitative deviations from Markovian results, the current-current correlators show qualitatively distinct long-time power-law tails |t - t'|^(-3) characteristic of non-Markovian dynamics. We show that the power-law decays survive in the presence of interparticle interaction in the system, but the crossover time scale is shifted to larger values with increasing interaction strength.
GLASS 2.0: An Operational, Multimodal, Bayesian Earthquake Data Association Engine
NASA Astrophysics Data System (ADS)
Benz, H.; Johnson, C. E.; Patton, J. M.; McMahon, N. D.; Earle, P. S.
2015-12-01
The legacy approach to automated detection and determination of hypocenters is arrival time stacking algorithms. Examples of such algorithms are the associator, Binder, which has been in continuous use in many USGS-supported regional seismic networks since the 1980s, and its spherical-earth successor, GLASS 1.0, in service at the USGS National Earthquake Information Center for over 10 years. The principal shortcomings of the legacy approach are 1) it can only use phase arrival times, 2) it does not adequately address the problems of extreme variations in station density worldwide, 3) it cannot incorporate multiple phase models or statistical attributes of phases with distance, and 4) it cannot incorporate noise model attributes of individual stations. Previously we introduced a theoretical framework of a new associator using a Bayesian kernel stacking approach to approximate a joint probability density function for hypocenter localization. More recently we added station- and phase-specific Bayesian constraints to the association process. GLASS 2.0 incorporates a multiplicity of earthquake related data including phase arrival times, back-azimuth and slowness information from array beamforming, arrival times from waveform cross correlation processing, and geographic constraints from real-time social media reports of ground shaking. We demonstrate its application by modeling an aftershock sequence using dozens of stations that recorded tens of thousands of earthquakes over a period of one month. We also demonstrate GLASS 2.0 performance regionally and teleseismically using the globally distributed real-time monitoring system at NEIC.
Quantum dynamics in continuum for proton transport—Generalized correlation
NASA Astrophysics Data System (ADS)
Chen, Duan; Wei, Guo-Wei
2012-04-01
As a key process of many biological reactions such as biological energy transduction or human sensory systems, proton transport has attracted much research attention in biological, biophysical, and mathematical fields. A quantum dynamics in continuum framework has been proposed to study proton permeation through membrane proteins in our earlier work and the present work focuses on the generalized correlation of protons with their environment. Being complementary to electrostatic potentials, generalized correlations consist of proton-proton, proton-ion, proton-protein, and proton-water interactions. In our approach, protons are treated as quantum particles while other components of generalized correlations are described classically at different levels of approximation, depending on simulation feasibility and difficulty. Specifically, the membrane protein is modeled as a group of discrete atoms, while ion densities are approximated by Boltzmann distributions, and water molecules are represented as a dielectric continuum. These proton-environment interactions are formulated as convolutions between number densities of species and their corresponding interaction kernels, in which parameters are obtained from experimental data. In the present formulation, generalized correlations are important components in the total Hamiltonian of protons, and thus are seamlessly embedded in the multiscale/multiphysics total variational model of the system. This accounts for non-electrostatic interactions, including the finite size effect, geometry-confinement-induced channel barriers, dehydration, and hydrogen bond effects. The variational principle or the Euler-Lagrange equation is utilized to minimize the total energy functional, which includes the total Hamiltonian of protons, and obtain a new version of the generalized Laplace-Beltrami equation, generalized Poisson-Boltzmann equation, and generalized Kohn-Sham equation.
A set of numerical algorithms, such as the matched interface and boundary method, the Dirichlet to Neumann mapping, Gummel iteration, and Krylov space techniques, is employed to improve the accuracy, efficiency, and robustness of model simulations. Finally, comparisons between the present model predictions and experimental data of current-voltage curves, as well as current-concentration curves of the Gramicidin A channel, verify our new model.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
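A rough sketch of the weighted-multiple-kernel idea: the composite Gram matrix is a weighted sum of base Gram matrices, plugged into the standard KELM closed form. The weights and kernel parameters here are fixed by hand where the paper tunes them (together with the regularization constant) by QPSO; the two-class toy data are ours:

```python
import numpy as np

def gaussian_k(X, Y, s=1.0):
    # Gaussian (RBF) base kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

def poly_k(X, Y, c=1.0, d=2):
    # Polynomial base kernel
    return (X @ Y.T + c) ** d

def composite_kernel(X, Y, weights, kernels):
    """Weighted sum of base Gram matrices."""
    return sum(w * k(X, Y) for w, k in zip(weights, kernels))

def kelm_fit(X, T, weights, kernels, C=10.0):
    """KELM closed form: solve (I/C + K) beta = T."""
    K = composite_kernel(X, X, weights, kernels)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(Xnew, X, beta, weights, kernels):
    return composite_kernel(Xnew, X, weights, kernels) @ beta

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 3))
T = (X[:, 0] > 0).astype(float)[:, None] * 2 - 1   # toy +/-1 targets
beta = kelm_fit(X, T, [0.7, 0.3], [gaussian_k, poly_k])
pred = np.sign(kelm_predict(X, X, beta, [0.7, 0.3], [gaussian_k, poly_k]))
```

In the paper's scheme, QPSO would search jointly over the weight vector, each base kernel's parameters, and C before this final solve is performed.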
Haviland, David R; Beede, Robert H; Daane, Kent M
2015-12-01
Ferrisia gilli Gullan (Hemiptera: Pseudococcidae) is a new pest in California pistachios, Pistacia vera L. We conducted a 3-yr field study to determine the type and amount of damage caused by F. gilli. Using pesticides, we established gradients of F. gilli densities in a commercial pistachio orchard near Tipton, CA, from 2005 to 2007. Each year, mealybug densities on pistachio clusters were recorded from May through September and cumulative mealybug-days were determined. At harvest time, nut yield per tree (5% dried weight) was determined, and subsamples of nuts were evaluated for market quality. Linear regression analysis of cumulative mealybug-days against fruit yield and nut quality measurements showed no relationships in 2005 and 2006, when mealybug densities were moderate. However, in 2007, when mealybug densities were very high, there was a negative correlation with yield (for every 1,000 mealybug-days, there was a decrease in total dry weight per tree of 0.105 kg) and percentage of split unstained nuts (for every 1,000 mealybug-days, there was a decrease in the percentage of split unstained of 0.560%), and a positive correlation between the percentage of closed kernel and closed blank nuts (for every 1,000 mealybug-days, there was an increase in the percentage of closed kernel and closed blank of 0.176 and 0.283%, respectively). The data were used to determine economic injury levels, showing that for each mealybug per cluster in May there was a 4.73% reduction in crop value associated with quality and a 0.866 kg reduction in yield per tree (4.75%). © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves performance similar to or better than the radial basis function kernel with the parameter tuned by cross validation, making the TL1 kernel a promising nonlinear kernel for classification tasks.
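A truncated distance kernel of this flavour is simple to state: k(x, y) = max(ρ − ||x − y||₁, 0), compactly supported and piecewise linear. A minimal sketch under that assumed form (the truncation parameter ρ is chosen arbitrarily here, whereas the brief uses a pregiven value):

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance kernel: max(rho - ||x - y||_1, 0).
    Piecewise linear and compactly supported; note it is not
    positive semidefinite in general."""
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)
    return np.maximum(rho - d1, 0.0)

X = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [3.0, 4.0]])
K = tl1_kernel(X, X, rho=2.0)   # points farther than rho decouple
```

Because the kernel vanishes beyond distance ρ, each point only interacts with nearby training data, which is what yields the locally linear subregion behaviour described above.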
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
Production of Low Enriched Uranium Nitride Kernels for TRISO Particle Irradiation Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMurray, J. W.; Silva, C. M.; Helmreich, G. W.
2016-06-01
A large batch of UN microspheres to be used as kernels for TRISO particle fuel was produced using carbothermic reduction and nitriding of a sol-gel feedstock bearing tailored amounts of low-enriched uranium (LEU) oxide and carbon. The process parameters, established in a previous study, produced phase-pure NaCl-structure UN with dissolved C on the N sublattice. The composition, calculated by refinement of the lattice parameter from X-ray diffraction, was determined to be UC0.27N0.73. The final accepted product weighed 197.4 g. The microspheres had an average diameter of 797±1.35 μm and a composite mean theoretical density of 89.9±0.5% for a solid solution of UC and UN with the same atomic ratio; both values are reported with their corresponding calculated standard error.
Is there a single best estimator? Selection of home range estimators using area-under-the-curve
Walter, W. David; Onorato, Dave P.; Fischer, Justin W.
2015-01-01
Comparisons of the fit of home range contours with collected locations suggest that VHF technology is not as accurate as GPS technology for estimating home range size in large mammals. Estimators of home range collected with GPS technology performed better than those estimated with VHF technology regardless of the estimator used. Furthermore, estimators that incorporate a temporal component (third-generation estimators) appeared to be the most reliable, regardless of whether kernel-based or Brownian bridge-based algorithms were used, and in comparison to first- and second-generation estimators. We defined third-generation estimators of home range as any estimator that incorporates time, space, animal-specific parameters, and habitat. Such estimators include movement-based kernel density, Brownian bridge movement models, and dynamic Brownian bridge movement models, among others that have yet to be evaluated.
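For contrast with the third-generation estimators discussed above, the classic first-generation fixed-kernel utilization distribution and its 95% contour area can be sketched in a few lines. Grid, bandwidth, and relocation data here are toy choices, not from the study:

```python
import numpy as np

def kde_home_range(points, grid_x, grid_y, h):
    """Fixed-kernel utilization distribution (UD) on a grid:
    a Gaussian kernel of bandwidth h placed on each relocation,
    normalized so the UD sums to 1 over the grid."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    ud = np.zeros_like(gx)
    for px, py in points:
        ud += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * h**2))
    ud /= ud.sum()
    return ud

def contour_area(ud, level, cell_area):
    """Area of the smallest set of cells holding `level` of the UD mass
    (the usual definition of, e.g., the 95% home range contour)."""
    p = np.sort(ud.ravel())[::-1]
    n = np.searchsorted(np.cumsum(p), level) + 1
    return n * cell_area

rng = np.random.default_rng(2)
pts = rng.normal(0.0, 1.0, size=(200, 2))   # toy relocations
g = np.linspace(-5, 5, 101)
ud = kde_home_range(pts, g, g, h=0.5)
a95 = contour_area(ud, 0.95, (g[1] - g[0]) ** 2)
```

Movement-based kernel density and Brownian bridge estimators differ precisely in replacing the independent per-point kernels above with kernels conditioned on the time-ordered path between fixes.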
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.
Subramanian, Sundarraman
2006-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.
NASA Astrophysics Data System (ADS)
Lin, J. Y. Y.; Aczel, A. A.; Abernathy, D. L.; Nagler, S. E.; Buyers, W. J. L.; Granroth, G. E.
2014-04-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nat. Commun. 3, 1124 (2012), 10.1038/ncomms2117]. These modes are well described by three-dimensional isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states, and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximate Q-independent background to the spectrum at the oscillator mode positions. Temperature-dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; Lin, Lin; Shao, Meiyue
We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm only requires half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.
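The symmetric Lanczos step at the core of such methods can be illustrated on a generic symmetric matrix. This is plain Lanczos with full reorthogonalization on a standard inner product, not the product-form TDDFT eigenproblem itself, and the test matrix is arbitrary:

```python
import numpy as np

def lanczos(A, v0, k):
    """k-step symmetric Lanczos: builds a tridiagonal matrix T whose
    eigenvalues (Ritz values) approximate the spectrum of A, using
    only matrix-vector products with A."""
    n = len(v0)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - b * q_prev
        alpha[j] = q @ w
        w -= alpha[j] * q
        # Full reorthogonalization for numerical stability
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < k - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(3)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2                         # symmetric test matrix
ritz = lanczos(A, rng.standard_normal(200), 40)
exact = np.linalg.eigvalsh(A)
```

Only `alpha`, `beta`, and the current Lanczos vectors need be stored, which is the memory advantage the abstract contrasts against the nonsymmetric variant.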
Island size distribution with hindered aggregation
NASA Astrophysics Data System (ADS)
González, Diego Luis; Camargo, Manuel; Sánchez, Julián A.
2018-05-01
We study the effect of hindered aggregation on the island formation processes for a one-dimensional model of epitaxial growth with arbitrary nucleus size i. In the proposed model, the attachment of monomers to islands is hindered by an aggregation barrier, ε_a, which decreases the hopping rate of monomers to the islands. As ε_a increases, the system exhibits a crossover between two different regimes; namely, from diffusion-limited aggregation to attachment-limited aggregation. The island size distribution, P(s), is calculated for different values of ε_a by a self-consistent approach involving the nucleation and aggregation capture kernels. The results given by the analytical model are compared with those from kinetic Monte Carlo simulations, finding a close agreement between both sets of data for all considered values of i and ε_a. As the aggregation barrier increases, the spatial effect of fluctuations on the density of monomers can be neglected and P(s) smoothly approaches the limit distribution P(s) = δ_{s,i+1}. In the crossover regime the system features a complex and rich behavior, which can be explained in terms of the characteristic timescales of different microscopic processes.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
Brabec, Jiri; Lin, Lin; Shao, Meiyue; ...
2015-10-06
We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm only requires half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
Electromagnetics. Volume 1, Number 4, October-December 1981.
1981-01-01
terms. 1.6 Matrix and Operator Theory Integral equations have been cast in approximate numerical form by the moment method (MoM). In this numerical...introduced the eigenmode expansion method to find more properties of the SEM [3.4]. One defines eigenvalues and eigenmodes for the integral operator (kernel...exterior surface of the system. Mechanisms that play a role in the penetration are (1) diffusion through metal skins, (2) field leakage through
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby it is suitable for Kernel function implementation. By varying bias...cost function/constraint variables are generated based on inverse transform on CDF. In Fig. 5, F^(-1)(u) for uniformly distributed random number u ∈ [0, 1...extracts random samples of x varying with CDF of F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse
Paes, Geísa Pinheiro; Viana, José Marcelo Soriano; Silva, Fabyano Fonseca e; Mundim, Gabriel Borges
2016-01-01
The objectives of this study were to assess linkage disequilibrium (LD) and selection-induced changes in single nucleotide polymorphism (SNP) frequency, and to perform association mapping in popcorn chromosome regions containing quantitative trait loci (QTLs) for quality traits. Seven tropical and two temperate popcorn populations were genotyped for 96 SNPs chosen in chromosome regions containing QTLs for quality traits. The populations were phenotyped for expansion volume, 100-kernel weight, kernel sphericity, and kernel density. The LD statistics were the difference between the observed and expected haplotype frequencies (D), the proportion of D relative to the expected maximum value in the population, and the square of the correlation between the values of alleles at two loci. Association mapping was based on least squares and Bayesian approaches. In the tropical populations, D-values greater than 0.10 were observed for SNPs separated by 100-150 Mb, while most of the D-values in the temperate populations were less than 0.05. Selection for expansion volume indirectly led to increase in LD values, population differentiation, and significant changes in SNP frequency. Some associations were observed for expansion volume and the other quality traits. The candidate genes are involved with starch, storage protein, lipid, and cell wall polysaccharides synthesis.
Paes, Geísa Pinheiro; Viana, José Marcelo Soriano; Silva, Fabyano Fonseca E; Mundim, Gabriel Borges
2016-03-01
The objectives of this study were to assess linkage disequilibrium (LD) and selection-induced changes in single nucleotide polymorphism (SNP) frequency, and to perform association mapping in popcorn chromosome regions containing quantitative trait loci (QTLs) for quality traits. Seven tropical and two temperate popcorn populations were genotyped for 96 SNPs chosen in chromosome regions containing QTLs for quality traits. The populations were phenotyped for expansion volume, 100-kernel weight, kernel sphericity, and kernel density. The LD statistics were the difference between the observed and expected haplotype frequencies (D), the proportion of D relative to its expected maximum value in the population (D'), and the square of the correlation between the values of alleles at two loci (r²). Association mapping was based on least squares and Bayesian approaches. In the tropical populations, D-values greater than 0.10 were observed for SNPs separated by 100-150 Mb, while most of the D-values in the temperate populations were less than 0.05. Selection for expansion volume indirectly led to an increase in LD values, to population differentiation, and to significant changes in SNP frequency. Some associations were observed for expansion volume and the other quality traits. The candidate genes are involved in the synthesis of starch, storage proteins, lipids, and cell wall polysaccharides.
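The three LD statistics named above (D, D', r²) can be computed directly from haplotype and allele frequencies. A small illustrative sketch for two biallelic loci (the frequencies are made up, not the study's SNP data):

```python
def ld_statistics(p_ab, p_a, p_b):
    """LD between two biallelic loci, given the AB haplotype frequency
    p_ab and the allele frequencies p_a and p_b."""
    d = p_ab - p_a * p_b                       # D: observed minus expected
    if d >= 0:                                 # D': D relative to its maximum
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max if d_max > 0 else 0.0
    r2 = d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))  # squared correlation
    return d, d_prime, r2

d, d_prime, r2 = ld_statistics(p_ab=0.30, p_a=0.5, p_b=0.4)
print(d, d_prime, r2)   # D = 0.10, D' = 0.50, r² ≈ 0.17
```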
Rapid simulation of spatial epidemics: a spectral method.
Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J
2015-04-07
Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding, especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such it provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane, and the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel.
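The core of the FSR idea, the force of infection as a spatial convolution of the transmission kernel with an infection "image", can be sketched with FFTs on a regular grid (the Gaussian kernel and grid sizes here are illustrative, not the paper's farm data):

```python
import numpy as np

def force_of_infection(infectious, kernel):
    """Convolve an infection 'image' with a transmission kernel via FFT.
    Both arrays share the same 2-D grid; the kernel is centred."""
    f = np.fft.rfft2(infectious) * np.fft.rfft2(np.fft.ifftshift(kernel))
    return np.fft.irfft2(f, s=infectious.shape)

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = np.exp(-(x**2 + y**2) / (2 * 5.0**2))   # isotropic Gaussian kernel
kernel /= kernel.sum()

infectious = np.zeros((n, n))
infectious[32, 32] = 1.0                         # a single infectious site
foi = force_of_infection(infectious, kernel)
print(foi[32, 32])                               # risk peaks at the source
```

Recomputing `foi` after each infection event costs two FFTs instead of an O(N²) pairwise sum, which is what makes large-scale stochastic simulation feasible.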
Unraveling multiple changes in complex climate time series using Bayesian inference
NASA Astrophysics Data System (ADS)
Berner, Nadine; Trauth, Martin H.; Holschneider, Matthias
2016-04-01
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for the understanding of the observed system. The precise detection and basic characterization of the underlying changes is therefore of particular importance in the environmental sciences. We present a kernel-based Bayesian inference approach to investigate direct as well as indirect climate observations for multiple generic transition events. In order to develop a diagnostic approach designed to capture a variety of natural processes, the basic statistical features of central tendency and dispersion are used to locally approximate a complex time series by a generic transition model. A Bayesian inversion approach is developed to robustly infer the location and the generic patterns of such a transition. To systematically investigate time series for multiple changes occurring at different temporal scales, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the kernel inference results are combined into a proxy for the posterior distribution of multiple transitions. Thus, based on a generic transition model, a probability expression is derived that is capable of indicating multiple changes within a complex time series. We discuss the method's performance by investigating direct and indirect climate observations. The approach is applied to an environmental time series of about 100 years from the weather station in Tuscaloosa, Alabama, and confirms documented instrumentation changes. Moreover, the approach is used to investigate a set of complex terrigenous dust records from ODP sites 659, 721/722 and 967, interpreted as climate indicators of the African region during the Plio-Pleistocene (about 5 Ma). The detailed inference unravels multiple transitions underlying the indirect climate observations, coinciding with established global climate events.
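The idea of inferring a transition in central tendency can be illustrated with a minimal Bayesian posterior over the location of a single mean shift; this is a toy stand-in for the paper's generic transition model, and the data are synthetic:

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the location k of a single mean shift in y, assuming
    Gaussian noise with known sigma and the segment means profiled out."""
    n = len(y)
    log_lik = np.full(n, -np.inf)
    for k in range(2, n - 1):                 # split the series at index k
        rss = (np.sum((y[:k] - y[:k].mean())**2)
               + np.sum((y[k:] - y[k:].mean())**2))
        log_lik[k] = -rss / (2.0 * sigma**2)  # flat prior on k
    post = np.exp(log_lik - log_lik.max())
    return post / post.sum()

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 60),    # mean 0 before the change
                    rng.normal(1.5, 1.0, 40)])   # mean 1.5 after it
post = changepoint_posterior(y)
print(int(np.argmax(post)))   # most probable change location, near 60
```

The kernel-based extension in the paper composes many such local inferences at different temporal scales into one proxy posterior for multiple transitions.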
Lossy Wavefield Compression for Full-Waveform Inversion
NASA Astrophysics Data System (ADS)
Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.
2015-12-01
We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool for solving tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements, caused by the opposite directions of the forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral-element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to checkpointing, our approach has only a negligible computational overhead, exploiting the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean demonstrate the high potential of this approach, with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate into both finite-difference and finite-element wave propagation codes.
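Two of the ingredients above, coarse temporal re-interpolation and re-quantization of residuals at reduced floating-point accuracy, can be sketched on a toy 1-D wavefield (the spatial adaptivity and shadow-zone logic of the paper are omitted; all sizes are illustrative):

```python
import numpy as np

def compress(field, stride=8):
    """Keep every stride-th time sample; store residuals at low precision."""
    coarse = field[::stride]
    t = np.arange(field.shape[0])
    recon = np.interp(t, t[::stride], coarse)       # linear re-interpolation
    residual = (field - recon).astype(np.float16)   # re-quantized residual
    return coarse, residual

def decompress(coarse, residual, stride=8):
    t = np.arange(residual.shape[0])
    return np.interp(t, t[::stride], coarse) + residual.astype(np.float64)

t = np.linspace(0, 1, 1024)
field = np.sin(2 * np.pi * 5 * t)                   # synthetic forward wavefield
coarse, residual = compress(field)
recon = decompress(coarse, residual)
print(np.max(np.abs(field - recon)))                # small reconstruction error
```

Storing the coarse samples in full precision and only the (small) residuals in half precision is what trades a controlled kernel error for a large memory saving.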
Digestibility of solvent-treated Jatropha curcas kernel by broiler chickens in Senegal.
Nesseim, Thierry Daniel Tamsir; Dieng, Abdoulaye; Mergeai, Guy; Ndiaye, Saliou; Hornick, Jean-Luc
2015-12-01
Jatropha curcas is a drought-resistant shrub belonging to the Euphorbiaceae family. The kernel contains approximately 60 % lipid in dry matter, and the meal obtained after oil extraction could be an exceptional source of protein for family poultry farming, provided it is free of curcin and, especially, of the partially lipophilic diterpene derivatives known as phorbol esters. The nutrient digestibility of J. curcas kernel meal (JKM), obtained after partial physicochemical deoiling, was thus evaluated in broiler chickens. Twenty broiler chickens, 6 weeks old, were maintained in individual metabolic cages and divided into four groups of five animals, according to a 4 × 4 Latin square design in which deoiled JKM was incorporated into ground corn at 0, 4, 8, and 12 % levels (diets 0, 4, 8, and 12 J), allowing measurement of nutrient digestibility by the differential method. The dry matter (DM) and organic matter (OM) digestibility of the diets was affected to a low extent by JKM (85 and 86 % in 0 J and 81 % in 12 J, respectively), such that the DM and OM digestibility of JKM was estimated to be close to 50 %. The ether extract (EE) digestibility of JKM remained high, at about 90 %, while crude protein (CP) and crude fiber (CF) digestibility were strongly impacted by JKM, with values close to 40 % at the highest levels of incorporation. J. curcas kernel thus shows variable nutrient digestibility and has adverse effects on the CP and CF digestibility of the diet. The effects of an additional heat or biological treatment of JKM remain to be assessed.
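The differential method used above backs out the test-ingredient digestibility from the digestibilities of the mixed and basal diets. A sketch of that arithmetic, with coefficients chosen to mirror the reported dry-matter values rather than taken from the paper's raw data:

```python
def ingredient_digestibility(d_diet, d_basal, inclusion):
    """Differential method: d_diet = (1 - p) * d_basal + p * d_ingredient,
    solved for the ingredient digestibility at inclusion level p."""
    return (d_diet - (1.0 - inclusion) * d_basal) / inclusion

# Dry-matter digestibility: basal diet 85 %, diet with 12 % JKM 81 %.
d_jkm = ingredient_digestibility(d_diet=0.81, d_basal=0.85, inclusion=0.12)
print(round(d_jkm, 3))   # ≈ 0.517, i.e. close to the 50 % reported
```

Note how a small drop in whole-diet digestibility at a low inclusion level implies a much lower digestibility for the ingredient itself.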
Asada, Toshio; Ando, Kanta; Sakurai, Koji; Koseki, Shiro; Nagaoka, Masataka
2015-10-28
An efficient approach to evaluate free energy gradients (FEGs) within the quantum mechanical/molecular mechanical (QM/MM) framework has been proposed to clarify reaction processes on the free energy surface (FES) in molecular assemblies. The method is based on response kernel approximations, denoted the charge and atom dipole response kernel (CDRK) model, that explicitly include induced atom dipoles. The CDRK model was able to reproduce polarization effects both for electrostatic interactions between the QM and MM regions and for internal energies in the QM region obtained by conventional QM/MM methods. In contrast to charge response kernel (CRK) models, CDRK models can be applied to various kinds of molecules, even linear or planar molecules, without using imaginary interaction sites. Use of the CDRK model enabled us to obtain FEGs on QM atoms in significantly reduced computational time. It was also clearly demonstrated that the time development of QM forces of the solvated propylene carbonate radical cation (PC˙(+)) provided reliable results over a 1 ns molecular dynamics (MD) simulation, in good quantitative agreement with expensive QM/MM results. Using FEG and nudged elastic band (NEB) methods, we found two optimized reaction paths on the FES for decomposition reactions that generate CO2 molecules from PC˙(+), a reaction known as one of the degradation mechanisms in the lithium-ion battery. Both reaction paths proceed through an identical intermediate structure whose molecular dipole moment is larger than that of the reactant, so it is stabilized in the solvent, which has a high relative dielectric constant. Thus, in order to prevent decomposition reactions, PC˙(+) should be modified to have a smaller dipole moment along the two reaction paths.
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
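The key numerical point above, that a fractional power polynomial kernel need not yield a positive semidefinite Gram matrix, so only eigenvectors with positive eigenvalues are kept, can be sketched as follows (random nonnegative "pixel" features stand in for the Gabor representation; this is not the paper's face pipeline):

```python
import numpy as np

def frac_poly_kernel_pca(X, power=0.8, n_components=2):
    """Kernel PCA with a fractional power polynomial kernel k(x, y) = (x·y)^p.
    Since such a Gram matrix need not be positive semidefinite, only
    eigenvectors associated with positive eigenvalues are retained."""
    K = (X @ X.T) ** power                      # X is nonnegative, so x·y >= 0
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # centre the Gram matrix
    w, v = np.linalg.eigh(Kc)                   # eigenvalues in ascending order
    keep = w > 1e-10                            # drop non-positive eigenvalues
    w, v = w[keep][::-1], v[:, keep][:, ::-1]   # reorder to descending
    alphas = v[:, :n_components] / np.sqrt(w[:n_components])
    return Kc @ alphas                          # projections of the training set

rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(50, 8)))            # nonnegative feature vectors
Z = frac_poly_kernel_pca(X)
print(Z.shape)                                  # (50, 2)
```

Discarding the non-positive part of the spectrum is exactly the device the abstract describes for obtaining real kernel PCA features from an indefinite kernel.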
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select the appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge database with a multi-label classification method. Finally, the appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select the appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge database with a multi-label classification method. Finally, the appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
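The recommendation pipeline described above (meta-features → applicable-kernel label set → multi-label model) can be sketched with a simple nearest-neighbour multi-label recommender; the meta-features and label sets below are stand-ins, not the paper's actual characterization measures:

```python
import numpy as np

def recommend_kernels(meta_db, label_db, meta_new, k=3, threshold=0.5):
    """Recommend kernels for a new data set: find the k nearest data sets in
    meta-feature space and return kernels applicable to most of them."""
    d = np.linalg.norm(meta_db - meta_new, axis=1)
    nearest = np.argsort(d)[:k]
    votes = label_db[nearest].mean(axis=0)   # per-kernel neighbour vote share
    return votes >= threshold

kernels = ["linear", "poly", "rbf", "sigmoid"]
meta_db = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])  # meta-features
label_db = np.array([[1, 0, 1, 0],                                    # applicable
                     [1, 0, 1, 0],                                    # kernel sets
                     [0, 1, 0, 1],
                     [0, 1, 1, 1]])
mask = recommend_kernels(meta_db, label_db, np.array([0.15, 0.85]))
print([k for k, m in zip(kernels, mask) if m])   # ['linear', 'rbf']
```

Returning a label *set* rather than a single winner is the point of the multi-label formulation: several kernels may be equally acceptable once CPU time and support-vector counts are considered.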
7 CFR 981.7 - Edible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...
D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc
2011-12-01
Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. The method was based on precalculated dose kernels in full patient geometries, representing the dose distribution of a brachytherapy source at a single dwell position using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method to fully account for the heterogeneities in dose optimization, using the MC method. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source. 
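The central data structure described above, precomputed per-dwell-position Monte Carlo dose kernels combined linearly by dwell times during optimization, can be sketched as follows (toy array sizes; in the study the kernels come from Geant4 simulations in the full patient geometry):

```python
import numpy as np

def total_dose(kernels, dwell_times):
    """Total dose as a dwell-time-weighted sum of precomputed MC dose kernels.
    kernels: (n_dwell, nx, ny) dose per unit time for each dwell position."""
    return np.tensordot(dwell_times, kernels, axes=1)

rng = np.random.default_rng(2)
kernels = rng.random((5, 32, 32))        # 5 dwell positions on a 32x32 grid
dwell_times = np.array([1.0, 0.5, 0.0, 2.0, 0.25])
dose = total_dose(kernels, dwell_times)
print(dose.shape)                        # (32, 32)
```

Because only the dwell times change during simulated annealing, each optimization step is a cheap weighted sum, while all the expensive heterogeneity-aware physics lives in the precomputed kernels.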
Locality of correlation in density functional theory.
Burke, Kieron; Cancio, Antonio; Gould, Tim; Pittalis, Stefano
2016-08-07
The Hohenberg-Kohn density functional was long ago shown to reduce to the Thomas-Fermi (TF) approximation in the non-relativistic semiclassical (or large-Z) limit for all matter, i.e., the kinetic energy becomes local. Exchange also becomes local in this limit. Numerical data on the correlation energy of atoms support the conjecture that this is also true for correlation, although the limit is much less quantitatively relevant for atoms. We illustrate how expansions around a large particle number are equivalent to local density approximations and their strong relevance to density functional approximations. Analyzing highly accurate atomic correlation energies, we show that E_C → −A_C Z ln Z + B_C Z as Z → ∞, where Z is the atomic number, A_C is known, and we estimate B_C to be about 37 mhartree. The local density approximation yields A_C exactly but a very incorrect value for B_C, showing that the local approximation is less relevant for the correlation alone. This limit is a benchmark for the non-empirical construction of density functional approximations. We conjecture that, beyond atoms, the leading correction to the local density approximation in the large-Z limit generally takes this form, but with B_C a functional of the TF density for the system. The implications for the construction of approximate density functionals are discussed.
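The asymptotic form E_C ≈ −A_C Z ln Z + B_C Z can be checked numerically: dividing by Z makes E_C/Z linear in ln Z, so both coefficients fall out of a straight-line fit. A sketch with synthetic data (the coefficient values below are made up for illustration, not the paper's atomic values):

```python
import numpy as np

A_C, B_C = 0.0207, 0.0374               # illustrative coefficients (hartree)
Z = np.arange(10, 100, dtype=float)
E_C = -A_C * Z * np.log(Z) + B_C * Z    # synthetic large-Z correlation energies

# E_C / Z = -A_C * ln Z + B_C is linear in ln Z: recover both coefficients.
slope, intercept = np.polyfit(np.log(Z), E_C / Z, 1)
print(round(-slope, 4), round(intercept, 4))   # 0.0207 0.0374
```

With real atomic data the fit would be applied to the high-Z tail, where subleading corrections are smallest.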
Resource-Constrained Spatial Hot Spot Identification
2011-01-01
into three categories (Cameron and Leitner, 2005): Thematic Mapping, in which concentrations of events are color-coded in discrete geographic areas; ...of Boston burglary events in 1999, provided by Cameron and Leitner (2005). The first map reflects burglary rates per 100,000 residents by Census... [Figure: Boston burglary rates, 1999, mapped three ways: thematic mapping, kernel density interpolation, and hierarchical clustering. Source: Cameron and Leitner, 2005.]
Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.
2016-01-01
Risk management stakeholders in densely populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered, since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded, since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures. PMID:27265878
Galindo, I; Romero, M C; Sánchez, N; Morales, J M
2016-06-06
Risk management stakeholders in densely populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered, since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded, since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
NASA Astrophysics Data System (ADS)
Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.
2016-06-01
Risk management stakeholders in densely populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered, since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded, since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
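Kernel density estimation over past vent locations is the core of such a susceptibility analysis. A minimal sketch with a Gaussian kernel (the paper uses a linear-diffusion KDE and also weighs structural elements; the vent coordinates below are synthetic):

```python
import numpy as np

def kde_grid(points, xs, ys, bandwidth):
    """Gaussian kernel density estimate of event locations on a regular grid."""
    xx, yy = np.meshgrid(xs, ys)
    density = np.zeros_like(xx)
    for px, py in points:
        density += np.exp(-((xx - px)**2 + (yy - py)**2) / (2 * bandwidth**2))
    density /= 2 * np.pi * bandwidth**2 * len(points)
    return density

vents = np.array([[2.0, 1.0], [2.5, 1.2], [3.0, 1.1], [7.0, 6.0]])  # synthetic
xs = np.linspace(0, 10, 101)
ys = np.linspace(0, 10, 101)
susceptibility = kde_grid(vents, xs, ys, bandwidth=0.8)
peak = np.unravel_index(np.argmax(susceptibility), susceptibility.shape)
print(xs[peak[1]], ys[peak[0]])   # highest susceptibility near the vent cluster
```

The density surface highlights both where eruptions are most probable (the cluster) and, through the elongation of its contours, the dominant fissure direction.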
Effect of Aspergillus niger xylanase on dough characteristics and bread quality attributes.
Ahmad, Zulfiqar; Butt, Masood Sadiq; Ahmed, Anwaar; Riaz, Muhammad; Sabir, Syed Mubashar; Farooq, Umar; Rehman, Fazal Ur
2014-10-01
The present study was conducted to investigate the impact of xylanase produced by Aspergillus niger, applied at two stages of bread making (during tempering of wheat kernels and during dough mixing), on dough quality characteristics (dryness, stiffness, elasticity, extensibility, and coherency) and bread quality parameters (volume, specific volume, density, moisture retention, and sensory attributes). Different doses (200, 400, 600, 800 and 1,000 IU) of purified enzyme were applied in parallel to 1 kg of wheat grains during tempering and to 1 kg of flour (straight grade flour) during dough mixing. The wheat kernel samples were agitated at intervals to ensure uniform tempering. After milling and dough making, both types of flour (enzyme-treated during tempering or during mixing) showed improved dough characteristics, with the improvement more pronounced in the samples treated during tempering. Xylanase decreased the dryness and stiffness of the dough, increased its elasticity, extensibility, and coherency, and increased loaf volume while decreasing bread density. Xylanase treatments also resulted in higher moisture retention and improved sensory attributes of the bread. From the results, it is concluded that dough characteristics and bread quality improved significantly more when the enzyme was applied during tempering than during mixing.
Food Environment and Weight Change: Does Residential Mobility Matter?
Laraia, Barbara A.; Downing, Janelle M.; Zhang, Y. Tara; Dow, William H.; Kelly, Maggi; Blanchard, Samuel D.; Adler, Nancy; Schillinger, Dean; Moffet, Howard; Warton, E. Margaret; Karter, Andrew J.
2017-01-01
Associations between neighborhood food environment and adult body mass index (BMI; weight (kg)/height (m)2) derived using cross-sectional or longitudinal random-effects models may be biased due to unmeasured confounding and measurement and methodological limitations. In this study, we assessed the within-individual association between change in food environment from 2006 to 2011 and change in BMI among adults with type 2 diabetes, using clinical data from the Kaiser Permanente Diabetes Registry collected from 2007 to 2011. Healthy food environment was measured using the kernel density of healthful food venues. Fixed-effects models with a 1-year-lagged BMI were estimated. Separate models were fitted for persons who moved and those who did not. Sensitivity analyses using different lag times and kernel density bandwidths were conducted to establish the consistency of the findings. On average, patients lost 1 pound (0.45 kg) for each standard-deviation improvement in their food environment. This relationship held for persons who remained in the same location throughout the 5-year study period but not among persons who moved. Proximity to food venues that promote nutritious foods alone may not translate into clinically meaningful diet-related health changes. Community-level policies for improving the food environment need multifaceted strategies to invoke clinically meaningful change in BMI among adult patients with diabetes. PMID:28387785
7 CFR 810.2202 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
7 CFR 51.1415 - Inedible kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...
Geodesic regression for image time-series.
Niethammer, Marc; Huang, Yang; Vialard, François-Xavier
2011-01-01
Registration of image time-series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel-based local averaging, or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities for designing image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.
NASA Astrophysics Data System (ADS)
Rabinskiy, L. N.; Zhavoronok, S. I.
2018-04-01
The transient interaction of acoustic media and elastic shells is considered on the basis of the transition-function approach. The three-dimensional hyperbolic initial boundary-value problem is reduced to a two-dimensional problem of shell theory with integral operators approximating the effect of the acoustic medium on the shell dynamics. The kernels of these integral operators are determined by the elementary solution of the problem of acoustic wave diffraction at a rigid obstacle with the same boundary shape as the wetted shell surface. A closed-form elementary solution for arbitrary convex obstacles can be obtained at the initial interaction stages under the so-called "thin layer hypothesis". Thus the shell-wave interaction model, defined by integro-differential dynamic equations with analytically determined kernels of the integral operators, becomes two-dimensional but nonlocal in time. On the other hand, the initial interaction stage results in localized dynamic loadings and consequently in complex strain and stress states that require higher-order shell theories. Here a modified theory of I. N. Vekua-A. A. Amosov type is formulated in terms of analytical continuum dynamics. The shell model is constructed on a two-dimensional manifold from a set of field variables, a Lagrangian density, and constraint equations following from the boundary conditions "shifted" from the shell faces to its base surface. Such an approach allows one to construct consistent low-order shell models within a unified formal hierarchy. The equations of the Nth-order shell theory are singularly perturbed and contain second-order partial derivatives with respect to time and the surface coordinates, whereas the numerical integration of systems of first-order equations is more efficient. Such systems can be obtained as Hamilton-de Donder-Weyl-type equations for the Lagrangian dynamical system. The Hamiltonian formulation of the elementary Nth-order shell theory is briefly described here.
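The remark above, that first-order (Hamiltonian) systems are more convenient for numerical time integration than the original second-order equations, can be illustrated on a scalar toy problem rather than the shell equations themselves. This sketch rewrites the oscillator q'' + omega^2 q = 0 as the canonical pair q' = p, p' = -omega^2 q and integrates it with symplectic Euler; all names here are illustrative.

```python
def integrate_oscillator(omega, q0, p0, dt, n_steps):
    """Integrate q'' + omega^2 q = 0 rewritten as the first-order
    Hamiltonian system q' = p, p' = -omega^2 q, using the symplectic
    Euler scheme, which keeps the energy bounded over long times."""
    q, p = q0, p0
    traj = [(q, p)]
    for _ in range(n_steps):
        p = p - dt * omega**2 * q   # update momentum first (symplectic Euler)
        q = q + dt * p              # then position, using the new momentum
        traj.append((q, p))
    return traj
```

For the shell equations the state vector is a field on the base surface rather than a scalar pair, but the same first-order structure is what makes standard time-stepping schemes applicable.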
NASA Astrophysics Data System (ADS)
Nielsen, M. B.; Schunker, H.; Gizon, L.; Schou, J.; Ball, W. H.
2017-06-01
Context. Rotational shear in Sun-like stars is thought to be an important ingredient in models of stellar dynamos. Thanks to helioseismology, rotation in the Sun is well characterized, but the interior rotation profiles of other Sun-like stars are not so well constrained. Until recently, measurements of rotation in Sun-like stars have focused on the mean rotation, and little progress has been made on measuring, or even placing limits on, differential rotation. Aims: Using asteroseismic measurements of rotation, we aim to constrain the radial shear in five Sun-like stars observed by the NASA Kepler mission: KIC 004914923, KIC 005184732, KIC 006116048, KIC 006933899, and KIC 010963065. Methods: We used stellar structure models for these five stars from previous works. These models provide the mass density, the mode eigenfunctions, and the convection zone depth, which we used to compute the sensitivity kernels for the rotational frequency splitting of the modes. We used these kernels as weights in a parametric model of the stellar rotation profile of each star, where we allowed different rotation rates for the radiative interior and the convective envelope. This parametric model was incorporated into a fit to the oscillation power spectrum of each of the five Kepler stars. The fit included a prior on the rotation of the envelope, estimated from the rotation of surface magnetic activity measured from the photometric variability. Results: The asteroseismic measurements alone, without the application of priors, are unable to place meaningful limits on the radial shear. Using a prior on the envelope rotation enables us to constrain the interior rotation rate and thus the radial shear. In the five cases that we studied, the interior rotation rate does not differ from the envelope rate by more than approximately ±30%. Uncertainties in the rotational splittings are too large to unambiguously determine the sign of the radial shear.
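The two-zone fit described in the Methods can be sketched as follows. This is a simplified illustration with hypothetical numbers, not the authors' pipeline: each mode's rotational splitting is modeled as a kernel-weighted average of an interior rate and an envelope rate, and the prior on the envelope rotation is appended as an extra pseudo-observation in a weighted least-squares fit.

```python
import numpy as np

def splitting_two_zone(omega_int, omega_env, k_int, k_env):
    """Rotational splitting of a mode as a kernel-weighted average of a
    two-zone rotation profile (radiative interior vs. convective
    envelope); k_int + k_env should be ~1 for a normalized kernel."""
    return k_int * omega_int + k_env * omega_env

def fit_profile(splittings, errs, k_int, k_env, env_prior, env_prior_err):
    """Weighted least-squares fit for (omega_int, omega_env), with a
    Gaussian prior on the envelope rate added as a pseudo-observation."""
    A = np.column_stack([k_int / errs, k_env / errs])
    b = splittings / errs
    A = np.vstack([A, [0.0, 1.0 / env_prior_err]])   # prior row: acts on omega_env only
    b = np.append(b, env_prior / env_prior_err)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # (omega_int, omega_env)
```

Because p-mode kernels concentrate most of their sensitivity in the envelope (k_env close to 1), the splittings alone constrain omega_env far better than omega_int, which is why the external prior on the envelope rate is what unlocks the interior rate.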
Unconventional protein sources: apricot seed kernels.
Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M
1981-09-01
Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter), and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%), and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to assess them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels owing to low food consumption caused by their bitterness; there was no loss in weight in that case. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and the Amar kernels; the Net Protein Ratio values for the last two kernels were nearly equal.
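The two metrics compared above are defined by simple ratios. The sketch below uses the standard textbook definitions (not figures from this study; the numeric example is hypothetical): PER is weight gain per gram of protein consumed, while NPR additionally credits maintenance by adding the weight loss of a protein-free control group.

```python
def protein_efficiency_ratio(weight_gain_g, protein_intake_g):
    """PER: grams of body-weight gain per gram of protein consumed.
    Penalizes any diet that is eaten poorly, even if the protein is good."""
    return weight_gain_g / protein_intake_g

def net_protein_ratio(test_gain_g, protein_free_loss_g, protein_intake_g):
    """NPR: accounts for maintenance as well as growth by adding the
    weight LOST by a protein-free control group to the test group's gain."""
    return (test_gain_g + protein_free_loss_g) / protein_intake_g
```

The distinction explains the abstract's apparently contradictory rankings: a kernel that merely maintains weight scores zero on PER but still earns maintenance credit on NPR.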
An introduction to kernel-based learning algorithms.
Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B
2001-01-01
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis as examples of successful kernel-based learning methods. We first give a short background on Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios, including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
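One of the methods surveyed above, kernel principal component analysis, compactly illustrates the shared mechanics of all kernel-based learners: build a Gram matrix, center it in feature space, and work with its eigendecomposition instead of ever touching the (possibly infinite-dimensional) feature map. A minimal numpy sketch, using the Gaussian (RBF) kernel:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA: eigendecompose the double-centered Gram matrix and
    return the projections of the training points onto the leading
    nonlinear principal components."""
    K = rbf_kernel(X, gamma)
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # center the data in feature space
    vals, vecs = np.linalg.eigh(Kc)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]   # keep the largest components
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.clip(vals, 0.0, None))
```

The same centered Gram matrix Kc is the common currency of the other methods in the paper: kernel Fisher discriminant analysis and SVMs differ only in which optimization problem they solve over it.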