Sample records for kernel equating framework

  1. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  2. A Comparison between Linear IRT Observed-Score Equating and Levine Observed-Score Equating under the Generalized Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen

    2012-01-01

    In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…

  3. New Equating Methods and Their Relationships with Levine Observed Score Linear Equating under the Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen; Holland, Paul

    2010-01-01

    In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, which we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean preserving linear transformation of…

  4. Construction of Chained True Score Equipercentile Equatings under the Kernel Equating (KE) Framework and Their Relationship to Levine True Score Equating. Research Report. ETS RR-09-24

    ERIC Educational Resources Information Center

    Chen, Haiwen; Holland, Paul

    2009-01-01

    In this paper, we develop a new chained equipercentile equating procedure for the nonequivalent groups with anchor test (NEAT) design under the assumptions of the classical test theory model. This new equating is named chained true score equipercentile equating. We also apply the kernel equating framework to this equating design, resulting in a…

  5. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
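
    The moment comparison behind PRE is simple to reproduce. The sketch below follows the usual KE definition, PRE(p) = 100·(μₚ(e(X)) − μₚ(Y))/μₚ(Y), comparing the p-th moments of the equated new-form scores with those of the reference form; it is an illustrative Python sketch, and the function and argument names are ours, not the article's.

      import numpy as np

      def percent_relative_error(e_x, r, y, s, p_max=5):
          """PRE(p) = 100 * (mu_p(e(X)) - mu_p(Y)) / mu_p(Y).

          e_x  : equated values e(x_j) of the new-form score points
          r    : target-population probabilities of the new-form scores
          y, s : old-form score points and their probabilities
          """
          pre = []
          for p in range(1, p_max + 1):
              mu_equated = np.sum(e_x ** p * r)  # p-th moment of equated scores
              mu_ref = np.sum(y ** p * s)        # p-th moment of reference scores
              pre.append(100.0 * (mu_equated - mu_ref) / mu_ref)
          return np.array(pre)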

  6. The Continuized Log-Linear Method: An Alternative to the Kernel Method of Continuization in Test Equating

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2008-01-01

    Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…
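
    For context, the Gaussian kernel continuization that Wang's method replaces can be written down in a few lines: each discrete score point is blurred by a Gaussian after a shrinkage that preserves the first two moments. A minimal Python sketch, with illustrative names:

      import numpy as np
      from scipy.stats import norm

      def ke_continuized_cdf(x, scores, r, h):
          """Gaussian-kernel continuized CDF of a discrete score distribution.

          The shrinkage factor a makes X(h) = a*(X + h*V) + (1 - a)*mu_X
          (V standard normal) keep the mean and variance of X."""
          mu = np.sum(scores * r)
          var = np.sum((scores - mu) ** 2 * r)
          a = np.sqrt(var / (var + h ** 2))
          z = (x - a * scores - (1.0 - a) * mu) / (a * h)
          return np.sum(r * norm.cdf(z))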

  7. Notes on a General Framework for Observed Score Equating. Research Report. ETS RR-08-59

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2008-01-01

    The purpose of this paper is to extend von Davier, Holland, and Thayer's (2004b) framework of kernel equating so that it can incorporate raw data and traditional equipercentile equating methods. One result of this more general framework is that previous equating methodology research can be viewed more comprehensively. Another result is that the…

  8. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model.

    PubMed

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-28

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of a slow bath, weak system-bath coupling, and low temperature. Effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.

  9. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
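
    The weighted eigenfunction expansion at the core of the method is compact: with the first k Laplace-Beltrami eigenpairs of the surface mesh in hand, smoothing amounts to damping each expansion coefficient by exp(−λt). A hedged Python sketch, assuming eigenfunctions orthonormal with respect to the supplied vertex weights; names are illustrative:

      import numpy as np

      def heat_kernel_smooth(f, psi, lam, t, weights):
          """Heat kernel regression as a weighted eigenfunction expansion.

          f       : (n,) scalar data on n mesh vertices
          psi     : (n, k) Laplace-Beltrami eigenfunctions (columns),
                    assumed orthonormal w.r.t. the vertex weights
          lam     : (k,) corresponding eigenvalues
          t       : diffusion time (bandwidth); larger t = more smoothing
          weights : (n,) vertex area weights
          """
          beta = psi.T @ (weights * f)            # expansion coefficients
          return psi @ (np.exp(-lam * t) * beta)  # heat-kernel-damped reconstruction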

  10. The Kernel Levine Equipercentile Observed-Score Equating Function. Research Report. ETS RR-13-38

    ERIC Educational Resources Information Center

    von Davier, Alina A.; Chen, Haiwen

    2013-01-01

    In the framework of the observed-score equating methods for the nonequivalent groups with anchor test design, there are 3 fundamentally different ways of using the information provided by the anchor scores to equate the scores of a new form to those of an old form. One method uses the anchor scores as a conditioning variable, such as the Tucker…

  11. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫_V Km δlnm d³x + ∫_Σ Kd δlnd d²x + ∫_ΣFS K∇d · ∇Σ(δlnd) d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  12. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  13. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  14. Contact interaction of thin-walled elements with an elastic layer and an infinite circular cylinder under torsion

    NASA Astrophysics Data System (ADS)

    Kanetsyan, E. G.; Mkrtchyan, M. S.; Mkhitaryan, S. M.

    2018-04-01

    We consider a class of contact torsion problems on the interaction of thin-walled elements shaped as an elastic thin washer (a flat circular plate of small height) with an elastic layer, in particular, with a half-space, and on the interaction of thin cylindrical shells with a solid elastic cylinder, infinite in both directions. The governing equations of the physical models of elastic thin washers and thin circular cylindrical shells under torsion are derived from the exact equations of the mathematical theory of elasticity using the Hankel and Fourier transforms. Within the framework of the accepted physical models, the solution of the contact problem between an elastic washer and an elastic layer is reduced to solving a Fredholm integral equation of the first kind with a kernel representable as a sum of the Weber–Sonin integral and a regular integral kernel, while solving the contact problem between a cylindrical shell and a solid cylinder is reduced to a singular integral equation (SIE). An effective method for solving the governing integral equations of these problems is specified.

  15. Generalization Analysis of Fredholm Kernel Regularized Classifiers.

    PubMed

    Gong, Tieliang; Xu, Zongben; Chen, Hong

    2017-07-01

    Recently, a new framework, Fredholm learning, was proposed for semisupervised learning problems based on solving a regularized Fredholm integral equation. It allows a natural way to incorporate unlabeled data into learning algorithms to improve their prediction performance. Despite rapid progress on implementable algorithms with theoretical guarantees, the generalization ability of Fredholm kernel learning has not been studied. In this letter, we focus on investigating the generalization performance of a family of classification algorithms, referred to as Fredholm kernel regularized classifiers. We prove that the corresponding learning rate can achieve [Formula: see text] ([Formula: see text] is the number of labeled samples) in a limiting case. In addition, a representer theorem is provided for the proposed regularized scheme, which underlies its applications.

  16. On Hilbert-Schmidt norm convergence of Galerkin approximation for operator Riccati equations

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1988-01-01

    An abstract approximation framework for the solution of operator algebraic Riccati equations is developed. The approach taken is based on a formulation of the Riccati equation as an abstract nonlinear operator equation on the space of Hilbert-Schmidt operators. Hilbert-Schmidt norm convergence of solutions to generic finite dimensional Galerkin approximations to the Riccati equation to the solution of the original infinite dimensional problem is argued. The application of the general theory is illustrated via an operator Riccati equation arising in the linear-quadratic design of an optimal feedback control law for a 1-D heat/diffusion equation. Numerical results demonstrating the convergence of the associated Hilbert-Schmidt kernels are included.

  17. On one solution of Volterra integral equations of second kind

    NASA Astrophysics Data System (ADS)

    Myrhorod, V.; Hvozdeva, I.

    2016-10-01

    A solution of Volterra integral equations of the second kind with separable and difference kernels is suggested, based on solutions of the corresponding equations linking the kernel and the resolvent. On the basis of a discrete functions class, the equations linking the kernel and the resolvent are obtained and methods for their analytical solution are proposed. A mathematical model of gas-turbine engine state modification processes, in the form of a Volterra integral equation of the second kind with a separable kernel, is offered.

  18. Kernel Equating Under the Non-Equivalent Groups With Covariates Design.

    PubMed

    Wiberg, Marie; Bränberg, Kenny

    2015-07-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates, as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The finding that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates is useful in practice, because not all standardized tests have anchor tests.

  19. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    ERIC Educational Resources Information Center

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  20. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    PubMed

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
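
    To make Study II's alternative concrete, here is a hedged sketch of a moment-preserving continuization built on the Epanechnikov kernel, whose compact support keeps smoothed probability mass from leaking far past the boundary score points; the article's actual implementation may differ in details.

      import numpy as np

      def epanechnikov_cdf(u):
          """Integrated Epanechnikov kernel: CDF of density 0.75*(1 - u^2) on [-1, 1]."""
          u = np.clip(u, -1.0, 1.0)
          return 0.25 * (2.0 + 3.0 * u - u ** 3)

      def ke_cdf_epanechnikov(x, scores, r, h):
          """Continuized CDF with an Epanechnikov kernel; the shrinkage factor a
          uses Var(V) = 1/5 for V ~ Epanechnikov on [-1, 1], so the first two
          moments of the discrete distribution are preserved."""
          mu = np.sum(scores * r)
          var = np.sum((scores - mu) ** 2 * r)
          a = np.sqrt(var / (var + h ** 2 / 5.0))
          z = (x - a * scores - (1.0 - a) * mu) / (a * h)
          return np.sum(r * epanechnikov_cdf(z))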

  21. MOOSE: A parallel computational framework for coupled systems of nonlinear equations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derek Gaston; Chris Newman; Glen Hansen

    Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK) solution methods. Utilizing the mathematical structure present in JFNK, physics expressions are modularized into "Kernels," allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.

  22. TMD splitting functions in kT factorization: the real contribution to the gluon-to-gluon splitting.

    PubMed

    Hentschinski, M; Kusina, A; Kutak, K; Serino, M

    2018-01-01

    We calculate the transverse momentum dependent gluon-to-gluon splitting function within kT-factorization, generalizing the framework employed in the calculation of the quark splitting functions in Hautmann et al. (Nucl Phys B 865:54-66, arXiv:1205.1759, 2012), Gituliar et al. (JHEP 01:181, arXiv:1511.08439, 2016), Hentschinski et al. (Phys Rev D 94(11):114013, arXiv:1607.01507, 2016) and demonstrate at the same time the consistency of the extended formalism with previous results. While existing versions of kT-factorized evolution equations already contain a gluon-to-gluon splitting function, i.e. the leading order Balitsky-Fadin-Kuraev-Lipatov (BFKL) kernel or the Ciafaloni-Catani-Fiorani-Marchesini (CCFM) kernel, the obtained splitting function has the important property that it reduces to the leading order BFKL kernel in the high energy limit, to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) gluon-to-gluon splitting function in the collinear limit, and to the CCFM kernel in the soft limit. At the same time we demonstrate that this splitting kernel can be obtained from a direct calculation of the QCD Feynman diagrams, based on a combined implementation of the Curci-Furmanski-Petronzio formalism for the calculation of the collinear splitting functions and the framework of high energy factorization.

  23. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  24. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
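
    Silverman's rule of thumb gives the bandwidth in closed form instead of by numerical penalty minimization. A minimal sketch of the classical rule on raw score data (the KE-specific variant in the article rescales for the discrete, moment-preserving setting, so treat this as illustrative):

      import numpy as np

      def silverman_bandwidth(x):
          """Classical rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
          x = np.asarray(x, dtype=float)
          sd = x.std(ddof=1)
          iqr = np.subtract(*np.percentile(x, [75, 25]))
          return 0.9 * min(sd, iqr / 1.34) * len(x) ** (-0.2)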

  25. Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions

    NASA Astrophysics Data System (ADS)

    El-Kalla, I. L.; Al-Bugami, A. M.

    2010-11-01

    In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE) with a generalized singular kernel are discussed and proved in the space L2(Ω)×C(0,T). The Fredholm integral term (FIT) is considered in position while the Volterra integral term (VIT) is considered in time. Using a numerical technique, the problem is reduced to a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by using two different methods: the Toeplitz matrix method and the product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: Carleman function, logarithmic form, Cauchy kernel, and Hilbert kernel.
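
    For background, the Nyström idea is to replace the integral with a quadrature sum, which turns the integral equation into a linear algebraic system at the nodes; the product Nyström method used in the paper goes further by folding the singular kernel factor into product-integration weights. A hedged sketch for a smooth kernel, with illustrative names:

      import numpy as np

      def nystrom_fredholm(kernel, g, a, b, lam=1.0, n=64):
          """Solve f(x) = g(x) + lam * int_a^b K(x, t) f(t) dt at quadrature nodes.

          Gauss-Legendre quadrature yields the linear system
          (I - lam * K * diag(w)) f = g."""
          t, w = np.polynomial.legendre.leggauss(n)
          t = 0.5 * (b - a) * t + 0.5 * (b + a)  # map nodes from [-1, 1] to [a, b]
          w = 0.5 * (b - a) * w
          K = kernel(t[:, None], t[None, :])     # kernel matrix at the nodes
          A = np.eye(n) - lam * K * w[None, :]
          return t, np.linalg.solve(A, g(t))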

  26. Development of low-frequency kernel-function aerodynamics for comparison with time-dependent finite-difference methods

    NASA Technical Reports Server (NTRS)

    Bland, S. R.

    1982-01-01

    Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time dependent terms are omitted from the governing equations. Kernel functions are derived for two dimensional subsonic flow, and provide accurate solutions of the linearized potential equation with the same time dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.

  27. A numerical solution for two-dimensional Fredholm integral equations of the second kind with kernels of the logarithmic potential form

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.; Uenal, A.

    1981-01-01

    Two dimensional Fredholm integral equations with logarithmic potential kernels are numerically solved. The convergence of these solutions to the true solutions is demonstrated. The results are based on a previous work in which numerical solutions were obtained for Fredholm integral equations of the second kind with continuous kernels.

  28. The spatial sensitivity of Sp converted waves: kernels and their applications

    NASA Astrophysics Data System (ADS)

    Mancinelli, N. J.; Fischer, K. M.

    2017-12-01

    We have developed a framework for improved imaging of strong lateral variations in crust and upper mantle seismic discontinuity structure using teleseismic S-to-P (Sp) scattered waves. In our framework, we rapidly compute scattered wave sensitivities to velocity perturbations in a one-dimensional background model using ray-theoretical methods to account for timing, scattering, and geometrical spreading effects. The kernels accurately describe the amplitude and phase information of a scattered waveform, which we confirm by benchmarking against kernels derived from numerical solutions of the wave equation. The kernels demonstrate that the amplitude of an Sp converted wave at a given time is sensitive to structure along a quasi-hyperbolic curve, such that structure far from the direct ray path can influence the measurements. We use synthetic datasets to explore two potential applications of the scattered wave sensitivity kernels. First, we back-project scattered energy to its origin using the kernel adjoint operator. This approach successfully images mantle interfaces at depths of 120-180 km with up to 20 km of vertical relief over lateral distances of 100 km (i.e., undulations with a maximal 20% grade) when station spacing is 10 km. Adjacent measurements sum coherently at nodes where gradients in seismic properties occur, and destructively interfere at nodes lacking gradients. In cases where the station spacing is greater than 10 km, the destructive interference can be incomplete, and smearing along the isochrons can occur. We demonstrate, however, that model smoothing can dampen these artifacts. This method is relatively fast, and accurately retrieves the positions of the interfaces, but it generally does not retrieve the strength of the velocity perturbations. Therefore, in our second approach, we attempt to invert directly for velocity perturbations from our reference model using an iterative conjugate-directions scheme.

  29. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  30. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  31. Resummed memory kernels in generalized system-bath master equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  32. A solution for two-dimensional Fredholm integral equations of the second kind with periodic, semiperiodic, or nonperiodic kernels. [integral representation of the stationary Navier-Stokes problem]

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.; Uenal, A.

    1981-01-01

    A numerical scheme for solving two dimensional Fredholm integral equations of the second kind is developed. The proof of the convergence of the numerical scheme is shown for three cases: the case of periodic kernels, the case of semiperiodic kernels, and the case of nonperiodic kernels. Applications to the incompressible, stationary Navier-Stokes problem are of primary interest.

  33. An Exploration of Kernel Equating Using SAT® Data: Equating to a Similar Population and to a Distant Population. Research Report. ETS RR-07-17

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2007-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT® data. The KE results were compared to the results obtained from analogous classical equating methods in both scenarios. The results indicate that KE results are…

  34. Kernel and Traditional Equipercentile Equating with Degrees of Presmoothing. Research Report. ETS RR-07-15

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2007-01-01

    The purpose of this study was to empirically evaluate the impact of loglinear presmoothing accuracy on equating bias and variability across chained and post-stratification equating methods, kernel and percentile-rank continuization methods, and sample sizes. The results of evaluating presmoothing on equating accuracy generally agreed with those of…
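
    Log-linear presmoothing fits log m_j = β₀ + Σ_c β_c x_j^c to the observed score frequencies by Poisson maximum likelihood; the fitted distribution then matches the observed moments up to the chosen degree. A hedged Newton-iteration sketch with no convergence safeguards, names ours:

      import numpy as np

      def loglinear_presmooth(scores, freqs, degree=4, iters=50):
          """Polynomial log-linear presmoothing of score frequencies
          via Newton-Raphson for the Poisson log-likelihood."""
          X = np.vander(scores / scores.max(), degree + 1, increasing=True)
          beta = np.zeros(degree + 1)
          for _ in range(iters):
              mu = np.exp(X @ beta)  # fitted frequencies
              beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (freqs - mu))
          m = np.exp(X @ beta)
          return m / m.sum()         # smoothed score probabilities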

  35. Factorization and the synthesis of optimal feedback kernels for differential-delay systems

    NASA Technical Reports Server (NTRS)

    Milman, Mark M.; Scheid, Robert E.

    1987-01-01

    A combination of ideas from the theories of operator Riccati equations and Volterra factorizations leads to the derivation of a novel, relatively simple set of hyperbolic equations which characterize the optimal feedback kernel for the finite-time regulator problem for autonomous differential-delay systems. Analysis of these equations elucidates the underlying structure of the feedback kernel and leads to the development of fast and accurate numerical methods for its computation. Unlike traditional formulations based on the operator Riccati equation, the gain is characterized by means of classical solutions of the derived set of equations. This leads to the development of approximation schemes which are analogous to what has been accomplished for systems of ordinary differential equations with given initial conditions.

  36. An Evaluation of the Kernel Equating Method: A Special Study with Pseudotests Constructed from Real Test Data. Research Report. ETS RR-06-02

    ERIC Educational Resources Information Center

    von Davier, Alina A.; Holland, Paul W.; Livingston, Samuel A.; Casabianca, Jodi; Grant, Mary C.; Martin, Kathleen

    2006-01-01

    This study examines how closely the kernel equating (KE) method (von Davier, Holland, & Thayer, 2004a) approximates the results of other observed-score equating methods--equipercentile and linear equatings. The study used pseudotests constructed of item responses from a real test to simulate three equating designs: an equivalent groups (EG)…

  37. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    PubMed

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix becomes prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called the adaptive margin slack minimization to iteratively improve the classification accuracy by an adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: the memory efficient sequential processing for sequential data processing and the parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compared with the state-of-the-art techniques to verify the validity of the proposed techniques.

  38. Framework for analyzing ecological trait-based models in multidimensional niche spaces

    NASA Astrophysics Data System (ADS)

    Biancalani, Tommaso; DeVille, Lee; Goldenfeld, Nigel

    2015-05-01

    We develop a theoretical framework for analyzing ecological models with a multidimensional niche space. Our approach relies on the fact that ecological niches are described by sequences of symbols, which allows us to include multiple phenotypic traits. Ecological drivers, such as competitive exclusion, are modeled by introducing the Hamming distance between two sequences. We show that a suitable transform diagonalizes the community interaction matrix of these models, making it possible to predict the conditions for niche differentiation and, close to the instability onset, the asymptotically long time population distributions of niches. We exemplify our method using the Lotka-Volterra equations with an exponential competition kernel.
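
    The construction is easy to reproduce numerically: niches are bit strings, competition decays exponentially in Hamming distance, and because the kernel depends only on that distance, the interaction matrix is diagonalized by the Walsh-Hadamard (hypercube Fourier) transform. An illustrative sketch, parameter names ours:

      import numpy as np
      from itertools import product

      def competition_matrix(L, xi):
          """Interaction matrix over all binary niche sequences of length L,
          with an exponential competition kernel in the Hamming distance."""
          seqs = np.array(list(product([0, 1], repeat=L)))
          d = (seqs[:, None, :] != seqs[None, :, :]).sum(-1)  # Hamming distances
          return np.exp(-d / xi)

      A = competition_matrix(L=6, xi=2.0)
      evals = np.sort(np.linalg.eigvalsh(A))[::-1]  # spectrum governs niche differentiation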

  39. Apollo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckingsal, David; Gamblin, Todd

    Modern performance portability frameworks provide application developers with a flexible way to determine how to run application kernels; however, they provide no guidance as to the best configuration for a given kernel. Apollo provides a model-generation framework that, when integrated with the RAJA library, uses lightweight decision tree models to select the fastest execution configuration on a per-kernel basis.

  40. A GPU-based incompressible Navier-Stokes solver on moving overset grids

    NASA Astrophysics Data System (ADS)

    Chandar, Dominic D. J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.

    2013-07-01

    In pursuit of obtaining high fidelity solutions to the fluid flow equations in a short span of time, graphics processing units (GPUs) which were originally intended for gaming applications are currently being used to accelerate computational fluid dynamics (CFD) codes. With a high peak throughput of about 1 TFLOPS on a PC, GPUs seem to be favourable for many high-resolution computations. One such computation that involves a lot of number crunching is computing time accurate flow solutions past moving bodies. The aim of the present paper is thus to discuss the development of a flow solver on unstructured and overset grids and its implementation on GPUs. In its present form, the flow solver solves the incompressible fluid flow equations on unstructured/hybrid/overset grids using a fully implicit projection method. The resulting discretised equations are solved using a matrix-free Krylov solver using several GPU kernels such as gradient, Laplacian and reduction. Some of the simple arithmetic vector calculations are implemented using the CU++ approach ("CU++: An Object Oriented Framework for Computational Fluid Dynamics Applications using Graphics Processing Units", Journal of Supercomputing, 2013, doi:10.1007/s11227-013-0985-9), where GPU kernels are automatically generated at compile time. Results are presented for two- and three-dimensional computations on static and moving grids.

  41. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.

  42. The Swift-Hohenberg equation with a nonlocal nonlinearity

    NASA Astrophysics Data System (ADS)

    Morgan, David; Dawes, Jonathan H. P.

    2014-03-01

    It is well known that aspects of the formation of localised states in a one-dimensional Swift-Hohenberg equation can be described by Ginzburg-Landau-type envelope equations. This paper extends these multiple scales analyses to cases where an additional nonlinear integral term, in the form of a convolution, is present. The presence of a kernel function introduces a new lengthscale into the problem, and this results in additional complexity in both the derivation of envelope equations and in the bifurcation structure. When the kernel is short-range, weakly nonlinear analysis results in envelope equations of standard type but whose coefficients are modified in complicated ways by the nonlinear nonlocal term. Nevertheless, these computations can be formulated quite generally in terms of properties of the Fourier transform of the kernel function. When the lengthscale associated with the kernel is longer, our method leads naturally to the derivation of two different, novel, envelope equations that describe aspects of the dynamics in these new regimes. The first of these contains additional bifurcations, and unexpected loops in the bifurcation diagram. The second of these captures the stretched-out nature of the homoclinic snaking curves that arises due to the nonlocal term.

  43. A Kernel-Based Low-Rank (KLR) Model for Low-Dimensional Manifold Recovery in Highly Accelerated Dynamic MRI.

    PubMed

    Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie

    2017-11-01

    While many low rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low rankness or sparsity in input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low rank formulation. The algorithm consists of manifold learning using kernel, low rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experiment results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.

  44. Uniqueness of Mass-Conserving Self-similar Solutions to Smoluchowski's Coagulation Equation with Inverse Power Law Kernels

    NASA Astrophysics Data System (ADS)

    Laurençot, Philippe

    2018-03-01

    Uniqueness of mass-conserving self-similar solutions to Smoluchowski's coagulation equation is shown when the coagulation kernel K is given by K(x,x∗) = 2(x x∗)^−α, (x,x∗) ∈ (0,∞)², for some α > 0.

  45. Boundary conditions for gas flow problems from anisotropic scattering kernels

    NASA Astrophysics Data System (ADS)

    To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline

    2015-10-01

    The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity, temperature, and discontinuities including velocity slip and temperature jump at the wall are obtained. Two scattering kernels, the Dadzie and Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression depending on the two tangential accommodation coefficients is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction dependent slip behavior of anisotropic interfaces.

  46. Stochastic quantization of topological field theory: Generalized Langevin equation with memory kernel

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.

    2006-07-01

    We use the method of stochastic quantization in a topological field theory defined in a Euclidean space, assuming a Langevin equation with a memory kernel. We show that our procedure for the Abelian Chern-Simons theory converges regardless of the nature of the Chern-Simons coefficient.

  47. On the solution of integral equations with a generalized Cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    In this paper a certain class of singular integral equations that may arise from the mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement, and consequently the kernel has strong singularities of the form (t−x)⁻² and xⁿ⁻²(t+x)⁻ⁿ (n ≥ 2; 0 ≤ x, t ≤ b). The complex function theory is used to determine the fundamental function of the problem for the general case, and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.

  48. Analysis of nonlocal neural fields for both general and gamma-distributed connectivities

    NASA Astrophysics Data System (ADS)

    Hutt, Axel; Atay, Fatihcan M.

    2005-04-01

    This work studies the stability of equilibria in spatially extended neuronal ensembles. We first derive the model equation from statistical properties of the neuron population. The obtained integro-differential equation includes synaptic and space-dependent transmission delay for both general and gamma-distributed synaptic connectivities. The latter connectivity type reveals infinite, finite, and vanishing self-connectivities. The work derives conditions for stationary and nonstationary instabilities for both kernel types. In addition, a nonlinear analysis for general kernels yields the order parameter equation of the Turing instability. To compare the results to findings for partial differential equations (PDEs), two typical PDE-types are derived from the examined model equation, namely the general reaction-diffusion equation and the Swift-Hohenberg equation. Hence, the discussed integro-differential equation generalizes these PDEs. In the case of the gamma-distributed kernels, the stability conditions are formulated in terms of the mean excitatory and inhibitory interaction ranges. As a novel finding, we obtain Turing instabilities in fields with local inhibition and lateral excitation, while wave instabilities occur in fields with local excitation and lateral inhibition. Numerical simulations support the analytical results.

  49. Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code

    NASA Astrophysics Data System (ADS)

    Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. It is based on a meshless formulation in configuration space with adapted cloud of computational points (CCP) capability and using the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution to the full wave equations in the cold-plasma model in tokamak geometry for ECRH and ICRH range of frequencies, and 6) the solution to the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on adaptive CCP to solve the wave equations and on implementing the non-local hot plasma dielectric response to the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.

  50. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings.
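
    The trace-ratio core of the method can be sketched independently of the kernel-learning step: for fixed symmetric matrices A and B (e.g., between-class and regularized within-class scatter in feature space), the standard iteration alternates between updating the ratio λ and taking the top eigenvectors of A − λB. A hedged sketch, names ours:

      import numpy as np

      def trace_ratio(A, B, d, iters=30, seed=0):
          """Maximize tr(V'AV) / tr(V'BV) over orthonormal V (n x d)."""
          rng = np.random.default_rng(seed)
          V = np.linalg.qr(rng.standard_normal((A.shape[0], d)))[0]
          for _ in range(iters):
              lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
              w, U = np.linalg.eigh(A - lam * B)
              V = U[:, np.argsort(w)[-d:]]  # eigenvectors of the d largest eigenvalues
          return V, lam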

  51. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    PubMed

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method possesses significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivities to singularity. Therefore, study on the application of reproducing kernels would be advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other methods at present. A two-dimensional reproducing kernel function is constructed and applied in computing the solution of the two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.

  52. Using Kernel Equating to Assess Item Order Effects on Test Scores

    ERIC Educational Resources Information Center

    Moses, Tim; Yang, Wen-Ling; Wilson, Christine

    2007-01-01

    This study explored the use of kernel equating for integrating and extending two procedures proposed for assessing item order effects in test forms that have been administered to randomly equivalent groups. When these procedures are used together, they can provide complementary information about the extent to which item order effects impact test…

  53. The derivation and approximation of coarse-grained dynamics from Langevin dynamics

    NASA Astrophysics Data System (ADS)

    Ma, Lina; Li, Xiantao; Liu, Chun

    2016-11-01

    We present a derivation of a coarse-grained description, in the form of a generalized Langevin equation, from the Langevin dynamics model that describes the dynamics of bio-molecules. The focus is placed on the form of the memory kernel function, the colored noise, and the second fluctuation-dissipation theorem that connects them. Also presented is a hierarchy of approximations for the memory and random noise terms, using rational approximations in the Laplace domain. These approximations offer increasing accuracy. More importantly, they eliminate the need to evaluate the integral associated with the memory term at each time step. Direct sampling of the colored noise can also be avoided within this framework. Therefore, the numerical implementation of the generalized Langevin equation is much more efficient.
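
    The payoff of the rational (Laplace-domain) approximation is concrete: an exponential memory kernel makes the generalized Langevin equation exactly Markovian in one auxiliary variable, so no history integral is evaluated at any step. A hedged free-particle sketch for θ(t) = (γ/τ)e^(−t/τ), with the colored noise fixed by the second fluctuation-dissipation theorem; parameter names ours:

      import numpy as np

      def gle_exponential(steps=100_000, dt=1e-3, m=1.0, gamma=1.0, tau=0.5, kT=1.0):
          """Euler-Maruyama integration of m*dv/dt = z, where the auxiliary
          variable z carries both the memory force and its conjugate noise:
          dz = (-z/tau - (gamma/tau)*v) dt + (sqrt(2*kT*gamma)/tau) dW."""
          rng = np.random.default_rng(0)
          v, z = 0.0, 0.0
          amp = np.sqrt(2.0 * kT * gamma) / tau
          v2 = 0.0
          for _ in range(steps):
              v += dt * z / m
              z += dt * (-z / tau - (gamma / tau) * v) + amp * np.sqrt(dt) * rng.standard_normal()
              v2 += v * v
          return v2 / steps  # equilibrium check: should approach kT/m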

  54. On the Asymptotic Behavior of the Kernel Function in the Generalized Langevin Equation: A One-Dimensional Lattice Model

    NASA Astrophysics Data System (ADS)

    Chu, Weiqi; Li, Xiantao

    2018-01-01

    We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which the particles interact through nearest and second-nearest neighbors. The kernel function can be explicitly expressed in a matrix form. The analysis focuses on the decay properties, both spatially and temporally, revealing a power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.

  55. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.

  18. Non-parametric wall model and methods of identifying boundary conditions for moments in gas flow equations

    NASA Astrophysics Data System (ADS)

    Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent

    2018-03-01

    In this paper, we use the molecular dynamics simulation method to study gas-wall boundary conditions. Discrete scattering information of gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients for parametric wall models such as the Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are based on a limited number of accommodation coefficients, they may not capture the full scattering behavior; to overcome this limitation, we adopt non-parametric statistical methods to construct the kernel. Unlike parametric kernels, non-parametric kernels require no parameters (i.e., accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly from the non-parametric kernels the Navier friction and Kapitza thermal resistance coefficients, as well as other interface coefficients associated with moment equations. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.

  19. Equation for the Nakanishi Weight Function Using the Inverse Stieltjes Transform

    NASA Astrophysics Data System (ADS)

    Karmanov, V. A.; Carbonell, J.; Frederico, T.

    2018-05-01

    The bound-state Bethe-Salpeter amplitude was expressed by Nakanishi in terms of a smooth weight function $g$. By using the generalized Stieltjes transform, we derive an integral equation for the Nakanishi function $g$ for the bound-state case. It has the standard form $g = \hat{V} g$, where $\hat{V}$ is a two-dimensional integral operator. The prescription for obtaining the kernel $V$ starting from the kernel $K$ of the Bethe-Salpeter equation is given.

  20. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.

  1. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between the kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. Simulations with a kernel width given by least-squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least-squares fixed kernel size.
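
    The least-squares bandwidth selection can be sketched schematically: pick the fixed Gaussian width that minimizes the squared mismatch between a kernel-density particle representation and an analytic reference solution, summed over several times. The point-source diffusion setup below is a stand-in, not the paper's bimolecular-reaction problem, and all parameter values are invented.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(1)
      D, n_particles = 0.1, 200
      x = np.linspace(-4, 4, 401)
      times = [0.5, 1.0, 2.0]
      # Fixed Brownian particle clouds, one per observation time.
      clouds = {t: rng.normal(0.0, np.sqrt(2 * D * t), n_particles) for t in times}

      def analytic(t):
          # Point-source solution of the 1-D diffusion equation.
          return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

      def kde(parts, h):
          # Gaussian-kernel particle representation with bandwidth h.
          w = np.exp(-(x[:, None] - parts[None, :])**2 / (2 * h * h))
          return w.sum(axis=1) / (len(parts) * np.sqrt(2 * np.pi) * h)

      def objective(h):
          return sum(((kde(clouds[t], h) - analytic(t)) ** 2).sum() for t in times)

      best = minimize_scalar(objective, bounds=(0.01, 1.0), method="bounded")
      print("least-squares kernel width:", round(best.x, 4))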

  2. On the Kernel function of the integral equation relating lift and downwash distributions of oscillating wings in supersonic flow

    NASA Technical Reports Server (NTRS)

    Watkins, Charles E; Berman, Julian H

    1956-01-01

    This report treats the Kernel function of the integral equation that relates a known or prescribed downwash distribution to an unknown lift distribution for harmonically oscillating wings in supersonic flow. The treatment is essentially an extension to supersonic flow of the treatment given in NACA report 1234 for subsonic flow. For the supersonic case the Kernel function is derived by use of a suitable form of acoustic doublet potential which employs a cutoff or Heaviside unit function. The Kernel functions are reduced to forms that can be accurately evaluated by considering the functions in two parts: a part in which the singularities are isolated and analytically expressed, and a nonsingular part which can be tabulated.

  3. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
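
    The composite-kernel idea is simple to state in code: given candidate Gram matrices, one valid composite is their (weighted) average, which can then be used wherever a single kernel was expected. The genotype data and kernel choices below are invented, and the paper's perturbation-based inference is not reproduced.

      import numpy as np

      rng = np.random.default_rng(9)
      G = rng.integers(0, 3, size=(100, 30)).astype(float)   # subjects x SNPs

      def linear_kernel(G):
          return G @ G.T

      def ibs_kernel(G):
          # identity-by-state similarity: average allele sharing per SNP
          diff = np.abs(G[:, None, :] - G[None, :, :])
          return (2.0 - diff).mean(axis=2) / 2.0

      candidates = [linear_kernel(G), ibs_kernel(G)]
      composite = sum(candidates) / len(candidates)   # unweighted composite
      print(composite.shape)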

  4. Error and Complexity Analysis for a Collocation-Grid-Projection Plus Precorrected-FFT Algorithm for Solving Potential Integral Equations with Laplace or Helmholtz Kernels

    NASA Technical Reports Server (NTRS)

    Phillips, J. R.

    1996-01-01

    In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with $1/r$ and $e^{ikr}/r$ kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is order $n \log n$, nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to order $n^{4/3}$. Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and $1/r$ kernels, but can be used over a range of spatial frequencies with only a small performance penalty.

  5. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    NASA Astrophysics Data System (ADS)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even for highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this sensitivity, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and use it to predict the time to clinical onset of subjects carrying the genetic mutation.
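
    A stripped-down version of the kernel-family scoring (using plain BIC only, not the paper's combined energy with the explained variance score) might look like the following sketch; the kernel candidates and longitudinal data are invented.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, WhiteKernel

      rng = np.random.default_rng(2)
      t = np.sort(rng.uniform(0, 10, 60))[:, None]       # toy progression times
      y = np.sin(t).ravel() + 0.1 * rng.normal(size=60)  # toy longitudinal marker

      candidates = {
          "RBF":    RBF() + WhiteKernel(),
          "RBF+RQ": RBF() + RationalQuadratic() + WhiteKernel(),
          "RBF*RQ": RBF() * RationalQuadratic() + WhiteKernel(),
      }

      def bic(gp, n):
          p = len(gp.kernel_.theta)   # number of kernel hyperparameters
          return -2 * gp.log_marginal_likelihood_value_ + p * np.log(n)

      for name, kern in candidates.items():
          gp = GaussianProcessRegressor(kernel=kern, normalize_y=True).fit(t, y)
          print(name, "BIC =", round(bic(gp, len(t)), 2))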

  6. A new iterative scheme for solving the discrete Smoluchowski equation

    NASA Astrophysics Data System (ADS)

    Smith, Alastair J.; Wells, Clive G.; Kraft, Markus

    2018-01-01

    This paper introduces a new iterative scheme for solving the discrete Smoluchowski equation and explores the numerical convergence properties of the method for a range of kernels admitting analytical solutions, in addition to some more physically realistic kernels typically used in kinetics applications. The solver is extended to spatially dependent problems with non-uniform velocities and its performance investigated in detail.
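
    For orientation, the constant-kernel case admits a classical closed-form solution against which any scheme can be checked. The naive explicit time-stepper below (not the authors' iterative scheme) shows the gain/loss structure of the discrete Smoluchowski equation.

      import numpy as np

      # Explicit Euler stepper for the discrete Smoluchowski equation
      #   dn_k/dt = 0.5 * sum_{i+j=k} K(i,j) n_i n_j - n_k * sum_j K(k,j) n_j
      # truncated at k <= kmax, for the constant kernel K = 1 with monodisperse
      # initial data, where n_k(t) = 4/(t+2)^2 * (t/(t+2))^(k-1) exactly.
      kmax, dt, T = 200, 1e-3, 2.0
      n = np.zeros(kmax + 1); n[1] = 1.0          # index 0 unused

      for _ in range(int(T / dt)):
          gain = np.zeros_like(n)
          for k in range(2, kmax + 1):
              i = np.arange(1, k)
              gain[k] = 0.5 * (n[i] * n[k - i]).sum()
          loss = n * n[1:].sum()
          n += dt * (gain - loss)

      k = np.arange(1, 6)
      exact = 4 / (T + 2) ** 2 * (T / (T + 2)) ** (k - 1)
      print(np.round(n[1:6], 5), np.round(exact, 5))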

  7. Semiclassical analysis for pseudo-relativistic Hartree equations

    NASA Astrophysics Data System (ADS)

    Cingolani, Silvia; Secchi, Simone

    2015-06-01

    In this paper we study the semiclassical limit for the pseudo-relativistic Hartree equation $\sqrt{-\varepsilon^2 \Delta + m^2}\, u + V u = (I_\alpha * |u|^{p}) |u|^{p-2}u$ in $\mathbb{R}^N$, where $m>0$, $2 \leq p < \frac{2N}{N-1}$, $V \colon \mathbb{R}^N \to \mathbb{R}$ is an external scalar potential, $I_\alpha (x) = \frac{c_{N,\alpha}}{|x|^{N-\alpha}}$ is a convolution kernel, $c_{N,\alpha}$ is a positive constant, and $(N-1)p-N<\alpha$…

  8. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it to analyze the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
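
    The Fisher kernel construction referenced here can be sketched for the simplest possible probabilistic model, a unit-variance Gaussian with unknown mean: the kernel is an inner product of per-example score vectors (the Fisher information is taken as the identity for brevity). The TOP kernel itself requires posterior log-odds and is not reproduced here.

      import numpy as np

      # Fisher kernel from a toy model p(x|mu) = N(x; mu, 1):
      #   score(x) = d/dmu log p(x|mu) = (x - mu)
      #   K(x, y)  = score(x)^T F^{-1} score(y), with F set to identity.
      rng = np.random.default_rng(3)
      X = rng.normal(2.0, 1.0, size=20)
      mu_hat = X.mean()                   # fitted model parameter

      score = X - mu_hat                  # per-example Fisher scores
      K = np.outer(score, score)          # Fisher kernel Gram matrix

      # K is positive semidefinite by construction, so it can feed any
      # kernel classifier (e.g., an SVM).
      print(np.linalg.eigvalsh(K).min() >= -1e-10)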

  9. Tail Behaviour of Self-Similar Profiles with Infinite Mass for Smoluchowski's Coagulation Equation

    NASA Astrophysics Data System (ADS)

    Throm, Sebastian

    2018-03-01

    In this article, we consider self-similar profiles to Smoluchowski's coagulation equation for which we derive the precise asymptotic behaviour at infinity. More precisely, we look at so-called fat-tailed profiles which decay algebraically and as a consequence have infinite total mass. The results only require mild assumptions on the coagulation kernel and thus cover a large class of rate kernels.

  10. Stochastic Gravity: Theory and Applications.

    PubMed

    Hu, Bei Lok; Verdaguer, Enric

    2004-01-01

    Whereas semiclassical gravity is based on the semiclassical Einstein equation with sources given by the expectation value of the stress-energy tensor of quantum fields, stochastic semiclassical gravity is based on the Einstein-Langevin equation, which has in addition sources due to the noise kernel. The noise kernel is the vacuum expectation value of the (operator-valued) stress-energy bi-tensor which describes the fluctuations of quantum matter fields in curved spacetimes. In the first part, we describe the fundamentals of this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the stress-energy tensor to their correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods which are convenient for computations. It also brings out the open systems concepts and the statistical and stochastic contents of the theory such as dissipation, fluctuations, noise, and decoherence. We then focus on the properties of the stress-energy bi-tensor. We obtain a general expression for the noise kernel of a quantum field defined at two distinct points in an arbitrary curved spacetime as products of covariant derivatives of the quantum field's Green function. In the second part, we describe three applications of stochastic gravity theory. First, we consider metric perturbations in a Minkowski spacetime. We offer an analytical solution of the Einstein-Langevin equation and compute the two-point correlation functions for the linearized Einstein tensor and for the metric perturbations. Second, we discuss structure formation from the stochastic gravity viewpoint, which can go beyond the standard treatment by incorporating the full quantum effect of the inflaton fluctuations. Third, we discuss the backreaction of Hawking radiation in the gravitational background of a quasi-static black hole (enclosed in a box). We derive a fluctuation-dissipation relation between the fluctuations in the radiation and the dissipative dynamics of metric fluctuations.

  11. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating the non-adaptive data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, confirming gains both in computation time and in performance in large feature spaces. PMID:22736969

  12. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    PubMed Central

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569

  13. Poroelastic Modeling as a Proof of Concept for Modular Representation of Coupled Geophysical Processes

    NASA Astrophysics Data System (ADS)

    Walker, R. L., II; Knepley, M.; Aminzadeh, F.

    2017-12-01

    We seek to use the tools provided by the Portable, Extensible Toolkit for Scientific Computation (PETSc) to represent a multiphysics problem in a form that decouples the element definition from the fully coupled equation through the use of pointwise functions that imitate the strong form of the governing equation. This allows individual physical processes to be expressed as independent kernels that may then be coupled with the existing finite element framework, PyLith, and capitalizes on the flexibility of the solver, data management, and time-stepping algorithms offered by PETSc. To demonstrate a characteristic example of coupled geophysical simulation devised in this manner, we present a model of a synthetic poroelastic environment, with and without the consideration of inertial effects, with the fluid initially represented as a single phase. Matrix displacement and fluid pressure serve as the desired unknowns, with the option for various model parameters to be represented as dependent variables of the central unknowns. While independent of PyLith, this model also serves to showcase the adaptability of physics kernels for synthetic forward modeling. In addition, we seek to expand the base case to demonstrate the impact of modeling the fluid as a single compressible phase versus a single incompressible phase. As a further goal, we also seek to include multiphase fluid modeling, as well as capillary effects.

  14. KINETIC-J: A computational kernel for solving the linearized Vlasov equation applied to calculations of the kinetic, configuration space plasma current for time harmonic wave electric fields

    NASA Astrophysics Data System (ADS)

    Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.

    2018-04-01

    We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time-harmonic wave electric field. This code addresses the need for a configuration-space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard $k$-space forms of the hot plasma conductivity tensor.

  15. A method for computing the kernel of the downwash integral equation for arbitrary complex frequencies

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.; Rowe, W. S.

    1984-01-01

    For the design of active controls to stabilize flight vehicles, which requires the use of unsteady aerodynamics that are valid for arbitrary complex frequencies, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite limit integral that is evaluated using Bessel and Struve functions and into a finite limit integral that is expanded in series and integrated termwise in closed form. The developed series expansions gave reliable answers for all complex reduced frequencies and executed faster than exponential approximations for many pressure stations.

  16. On the solution of integral equations with a generalized Cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    A numerical technique is developed analytically to solve a class of singular integral equations occurring in mixed boundary-value problems for nonhomogeneous elastic media with discontinuities. The approach of Kaya and Erdogan (1987) is extended to treat equations with generalized Cauchy kernels, reformulating the boundary-value problems in terms of potentials as the unknown functions. The numerical implementation of the solution is discussed, and results for an epoxy-Al plate with a crack terminating at the interface and loading normal to the crack are presented in tables.

  17. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity-preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  18. Analysis of the cable equation with non-local and non-singular kernel fractional derivative

    NASA Astrophysics Data System (ADS)

    Karaagac, Berat

    2018-02-01

    Recently a new concept of differentiation was introduced in the literature in which the kernel was converted from non-local and singular to non-local and non-singular. One of the great advantages of this new kernel is its ability to portray fading memory as well as well-defined memory of the system under investigation. In this paper the cable equation, which is used to develop mathematical models of signal decay in submarine or underwater telegraphic cables, is analysed using the Atangana-Baleanu fractional derivative, owing to the ability of this new fractional derivative to describe non-local fading memory. The existence and uniqueness of the more generalized model is presented in detail via the fixed point theorem. A new numerical scheme is used to solve the new equation. In addition, stability, convergence, and numerical simulations are presented.

  19. Alternative Derivations for the Poisson Integral Formula

    ERIC Educational Resources Information Center

    Chen, J. T.; Wu, C. S.

    2006-01-01

    Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…

  20. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and the usefulness of our approach.

  2. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    NASA Technical Reports Server (NTRS)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected-kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators $T_{\pm}^{ab}(A)$ are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism $A$. These operators are uniquely determined by a connected-kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected-kernel equations relating $T_{\pm}^{ab}(A)$ to the full $T_{\pm}^{ab}$ allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  3. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form $(t-x)^{-m}$, $m \geq 1$. Interpreting the integrals with strong singularities in Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term $(t-x)^{-m}$, terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.

  4. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1985-01-01

    In this paper some useful formulas are developed to evaluate integrals having a singularity of the form $(t-x)^{-m}$, $m \geq 1$. Interpreting the integrals with strong singularities in Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term $(t-x)^{-m}$, terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.

  5. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form $(t-x)^{-m}$, $m \geq 1$. Interpreting the integrals with strong singularities in Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term $(t-x)^{-m}$, terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.

  6. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    PubMed

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  7. Implicit kernel sparse shape representation: a sparse-neighbors-based object segmentation framework.

    PubMed

    Yao, Jincao; Yu, Huimin; Hu, Roland

    2017-01-01

    This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.

  8. Characterization of non-diffusive transport in plasma turbulence by means of flux-gradient integro-differential kernels

    NASA Astrophysics Data System (ADS)

    Alcuson, J. A.; Reynolds-Barredo, J. M.; Mier, J. A.; Sanchez, Raul; Del-Castillo-Negrete, Diego; Newman, David E.; Tribaldos, V.

    2015-11-01

    A method to determine fractional transport exponents in systems dominated by fluid or plasma turbulence is proposed. The method is based on the estimation of the integro-differential kernel that relates values of the fluxes and gradients of the transported field, and its comparison with the family of analytical kernels of the linear fractional transport equation. Although the use of this type of kernel has been explored before in this context, the methodology proposed here is rather unique since the connection with specific fractional equations is exploited from the start. The procedure has been designed to be particularly well-suited for application in experimental setups, taking advantage of the fact that kernel determination only requires temporal data of the transported field measured on an Eulerian grid. The simplicity and robustness of the method is tested first by using fabricated data from continuous-time random walk models built with prescribed transport characteristics. Its strengths are then illustrated on numerical Eulerian data gathered from simulations of a magnetically confined turbulent plasma in a near-critical regime that is known to exhibit superdiffusive radial transport.

  9. Higher-order kinetic expansion of quantum dissipative dynamics: mapping quantum networks to kinetic networks.

    PubMed

    Wu, Jianlan; Cao, Jianshu

    2013-07-28

    We apply a new formalism to derive the higher-order quantum kinetic expansion (QKE) for studying dissipative dynamics in a general quantum network coupled with an arbitrary thermal bath. The dynamics of the system population is described by a time-convoluted kinetic equation, where the time-nonlocal rate kernel is systematically expanded in orders of the off-diagonal elements of the system Hamiltonian. At second order, the rate kernel recovers the expression of the noninteracting-blip approximation method. The higher-order corrections in the rate kernel account for the effects of multi-site quantum coherence and bath relaxation. In a quantum harmonic bath, the rate kernels of different orders are derived analytically. As demonstrated by four examples, the higher-order QKE can reliably predict quantum dissipative dynamics, comparing well with the hierarchical equation approach. More importantly, the higher-order rate kernels can distinguish and quantify distinct nontrivial quantum coherent effects, such as long-range energy transfer from quantum tunneling and quantum interference arising from the phase accumulation of interactions.

  10. Generalized time-dependent Schrödinger equation in two dimensions under constraints

    NASA Astrophysics Data System (ADS)

    Sandev, Trifce; Petreska, Irina; Lenzi, Ervin K.

    2018-01-01

    We investigate a generalized two-dimensional time-dependent Schrödinger equation on a comb with a memory kernel. A Dirac delta term is introduced in the Schrödinger equation so that the quantum motion along the x-direction is constrained at y = 0. The wave function is analyzed by using Green's function approach for several forms of the memory kernel, which are of particular interest. Closed form solutions for the cases of Dirac delta and power-law memory kernels in terms of Fox H-function, as well as for a distributed order memory kernel, are obtained. Further, a nonlocal term is also introduced and investigated analytically. It is shown that the solution for such a case can be represented in terms of infinite series in Fox H-functions. Green's functions for each of the considered cases are analyzed and plotted for the most representative ones. Anomalous diffusion signatures are evident from the presence of the power-law tails. The normalized Green's functions obtained in this work are of broader interest, as they are an important ingredient for further calculations and analyses of some interesting effects in the transport properties in low-dimensional heterogeneous media.

  11. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.

    PubMed

    Poon, Art F Y

    2015-09-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
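
    Rejection-sampling ABC is easy to sketch. In the study above the distance is a tree-shape kernel; here it is replaced by a Euclidean distance on summary statistics, so the code shows only the ABC skeleton, not the phylodynamic model. The simulator, prior, and tolerance are all invented.

      import numpy as np

      rng = np.random.default_rng(4)

      def simulate(theta, n=100):
          """Hypothetical simulator standing in for an epidemic/tree model."""
          return rng.exponential(theta, n)

      def summary(data):
          return np.array([data.mean(), data.std()])

      observed = simulate(2.0)
      s_obs = summary(observed)

      # ABC rejection: draw theta from the prior, simulate, keep draws whose
      # simulated summaries land within eps of the observed summaries.
      accepted = []
      for _ in range(20000):
          theta = rng.uniform(0.1, 10.0)              # prior
          dist = np.linalg.norm(summary(simulate(theta)) - s_obs)
          if dist < 0.3:                              # tolerance eps
              accepted.append(theta)

      print("posterior mean ~", np.mean(accepted))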

  12. On the non-stationary generalized Langevin equation

    NASA Astrophysics Data System (ADS)

    Meyer, Hugues; Voigtmann, Thomas; Schilling, Tanja

    2017-12-01

    In molecular dynamics simulations and single molecule experiments, observables are usually measured along dynamic trajectories and then averaged over an ensemble ("bundle") of trajectories. Under stationary conditions, the time-evolution of such averages is described by the generalized Langevin equation. By contrast, if the dynamics is not stationary, it is not a priori clear which form the equation of motion for an averaged observable has. We employ the formalism of time-dependent projection operator techniques to derive the equation of motion for a non-equilibrium trajectory-averaged observable as well as for its non-stationary auto-correlation function. The equation is similar in structure to the generalized Langevin equation but exhibits a time-dependent memory kernel as well as a fluctuating force that implicitly depends on the initial conditions of the process. We also derive a relation between this memory kernel and the autocorrelation function of the fluctuating force that has a structure similar to a fluctuation-dissipation relation. In addition, we show how the choice of the projection operator allows us to relate the Taylor expansion of the memory kernel to data that are accessible in MD simulations and experiments, thus allowing us to construct the equation of motion. As a numerical example, the procedure is applied to Brownian motion initialized in non-equilibrium conditions and is shown to be consistent with direct measurements from simulations.

  13. Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo Lijun; Wang Yongjin; Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn

    2013-08-01

    We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.

  14. THERMOS. 30-Group ENDF/B Scattered Kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrosson, F.J.; Finch, D.R.

    1973-12-01

    These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from S(α,β) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code. To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library.

  15. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
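
    The Nyström construction that the paper builds on approximates an n-by-n Gram matrix from an n-by-m slice through m representative points. The sketch below uses ordinary k-means centers in input space as a stand-in for the paper's kernel k-means centers; data and parameters are invented.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(5)
      X = rng.normal(size=(1000, 5))

      m = 40
      centers = KMeans(n_clusters=m, n_init=5, random_state=0).fit(X).cluster_centers_

      C = rbf_kernel(X, centers)            # n x m cross-kernel
      W = rbf_kernel(centers, centers)      # m x m landmark kernel
      K_approx = C @ np.linalg.pinv(W) @ C.T

      K_exact = rbf_kernel(X, X)
      err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
      print("relative Frobenius error:", round(err, 4))

    With m much smaller than n, only the n-by-m slice is ever needed downstream, which is what makes the choice of the m representative points the critical design decision.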

  16. Individualism in plant populations: using stochastic differential equations to model individual neighbourhood-dependent plant growth.

    PubMed

    Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W

    2008-08-01

    We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
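
    A bare-bones Euler-Maruyama simulation of a Gompertz-type SDE on log-weight conveys the model class, with a noise amplitude that decreases with log-weight, loosely echoing the fitted model; the spatial competition kernel is omitted and all parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(8)

      # Euler-Maruyama for a Gompertz-type growth SDE on log-weight x = ln W:
      #   dx = (a - b*x) dt + sigma(x) dB
      a, b, dt, T = 1.0, 0.3, 0.01, 20.0
      sigma = lambda x: 0.3 / (1.0 + max(x, 0.0))   # noise shrinks with log-weight

      n_plants = 200
      x = np.full(n_plants, -2.0)                   # small initial log-weights
      for _ in range(int(T / dt)):
          noise = rng.normal(size=n_plants) * np.sqrt(dt)
          x += (a - b * x) * dt + np.array([sigma(v) for v in x]) * noise

      print("mean final weight:", np.exp(x).mean())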

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikeda, Y.; Sato, T.

    Three-body resonances in the K̄NN system have been studied within the framework of the K̄NN-πYN coupled-channel Faddeev equation. By solving the three-body equation, the energy dependence of the resonant K̄N amplitude is fully taken into account. The S-matrix pole has been investigated from the eigenvalue of the kernel with the analytic continuation of the scattering amplitude on the unphysical Riemann sheet. The K̄N interaction is constructed from the leading-order term of the chiral Lagrangian using relativistic kinematics. The Λ(1405) resonance is dynamically generated in this model, where the K̄N interaction parameters are fitted to the scattering-length data. As a result we find a three-body resonance of the strange dibaryon system with binding energy B ≈ 79 MeV and width Γ ≈ 74 MeV. The energy of the three-body resonance is found to be sensitive to the model of the I=0 K̄N interaction.

  18. Standard Errors of Equating for the Percentile Rank-Based Equipercentile Equating with Log-Linear Presmoothing

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2009-01-01

    Holland and colleagues derived a formula for the analytical standard error of equating using the delta method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…

  19. Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2011-01-01

    The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…

  20. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
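
    The positive-semidefiniteness requirement stated here is easy to check numerically. A minimal sketch with a made-up genotype matrix: build a linear genomic-similarity kernel and confirm its smallest eigenvalue is non-negative before passing it to a kernel-based model.

      import numpy as np

      rng = np.random.default_rng(6)
      G = rng.integers(0, 3, size=(50, 500)).astype(float)  # subjects x SNPs, coded 0/1/2

      Gc = G - G.mean(axis=0)            # center each SNP
      K = Gc @ Gc.T / G.shape[1]         # linear genomic-similarity kernel

      # A valid kernel must yield a positive semidefinite matrix on any
      # subject set (up to floating-point round-off).
      print("min eigenvalue:", np.linalg.eigvalsh(K).min())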

  1. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  2. Note: A simple picture of subdiffusive polymer motion from stochastic simulations

    NASA Astrophysics Data System (ADS)

    Gniewek, Pawel; Kolinski, Andrzej

    2011-02-01

    Entangled polymer solutions and melts exhibit unusual frictional properties. In the entanglement limit the self-diffusion coefficient of long flexible polymers decays with the second power of chain length and viscosity increases with the 3-3.5 power of chain length [1]. It is very difficult to provide a detailed molecular-level explanation of the entanglement effect [2]. Perhaps the problem of many entangled polymer chains is the most complex multibody issue of classical physics. There are different approaches to polymer melt dynamics. Some of these recognize hydrodynamic interactions as a dominant term, while topological constraints for polymer chains are assumed to be a secondary factor. Other theories consider the topological constraints as the most important factors controlling polymer dynamics. Herman and co-workers describe polymer dynamics in melts as a lateral sliding of a chain along other chains until complete mutual disentanglement. Despite the success in explaining the power laws for viscosity, the model has some limitations. First of all, memory effects are ignored, that is, polymer segments are treated independently. Also, each entanglement/obstacle is treated as a separate entity, which is certainly a simplification of the memory-effect problem. In addition, correlated motions of segments are addressed within the framework of renormalized Rouse-chain theory [7], without invoking any topological entanglements in advance. This approach leads to the generalized Langevin equation characterized by distinct memory kernels describing local and nonlocal segment correlations, or to the Smoluchowski equation in which the segments' mobility is treated as a stochastic variable [11]. Both models describe the polymer segment motion at a microscopic level. An interesting alternative is to solve the integrodifferential equation for chain relaxation with a sophisticated kernel function [12]. The design of the kernel function is based on a mesoscopic description of the polymer melt. These theories explain some experimental data, although the description of the crossover between the Rouse and non-Rouse behavior is not satisfactory. Obviously, within the scope of a short note we cannot review all theoretical concepts of polymer melt dynamics. Here we focus just on the interpretation of the observed single-segment autocorrelation function.

  3. Steady/unsteady aerodynamic analysis of wings at subsonic, sonic and supersonic Mach numbers using a 3D panel method

    NASA Astrophysics Data System (ADS)

    Cho, Jeonghyun; Han, Cheolheui; Cho, Leesang; Cho, Jinsoo

    2003-08-01

    This paper treats the kernel function of an integral equation that relates a known or prescribed upwash distribution to an unknown lift distribution for a finite wing. The pressure kernel functions of the singular integral equation are summarized for the full speed range in the Laplace transform domain. The sonic kernel function has been reduced to a form that can be conveniently evaluated as a finite limit from both the subsonic and supersonic sides as the Mach number tends to one. Several examples are solved, including rectangular wings, swept wings, a supersonic transport wing, and a harmonically oscillating wing. Present results are given alongside other numerical data, showing continuity through the unit Mach number. Computed results are in good agreement with other numerical results.

  4. A spectral boundary integral equation method for the 2-D Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    In this paper, we present a new numerical formulation of solving the boundary integral equations reformulated from the Helmholtz equation. The boundaries of the problems are assumed to be smooth closed contours. The solution on the boundary is treated as a periodic function, which is in turn approximated by a truncated Fourier series. A Fourier collocation method is followed in which the boundary integral equation is transformed into a system of algebraic equations. It is shown that in order to achieve spectral accuracy for the numerical formulation, the nonsmoothness of the integral kernels, associated with the Helmholtz equation, must be carefully removed. The emphasis of the paper is on investigating the essential elements of removing the nonsmoothness of the integral kernels in the spectral implementation. The present method is robust for a general boundary contour. Aspects of efficient implementation of the method using FFT are also discussed. A numerical example of wave scattering is given in which the exponential accuracy of the present numerical method is demonstrated.

  5. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel-based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient for selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
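
    To make the objects concrete: once a kernel matrix is fixed, training an LS-SVM reduces to solving a single linear system (shown below in its common function-estimation form), so any multiple-kernel scheme can be plugged in by supplying the combined Gram matrix. The fixed equal weights here merely stand in for the SDP-optimized weights of the paper; data and parameters are invented.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

      rng = np.random.default_rng(7)
      X = rng.normal(size=(80, 4))
      y = np.sign(X[:, 0] + X[:, 1])            # toy labels in {-1, +1}

      # Combined kernel: fixed equal weights stand in for learned MKL weights.
      weights = [0.5, 0.5]
      K = weights[0] * rbf_kernel(X) + weights[1] * polynomial_kernel(X, degree=2)

      # LS-SVM training = solving one linear system:
      #   [ 0   1^T         ] [b]   [0]
      #   [ 1   K + I/gamma ] [a] = [y]
      gamma, n = 10.0, len(y)
      A = np.zeros((n + 1, n + 1))
      A[0, 1:] = 1.0
      A[1:, 0] = 1.0
      A[1:, 1:] = K + np.eye(n) / gamma
      b_alpha = np.linalg.solve(A, np.concatenate([[0.0], y]))
      b, alpha = b_alpha[0], b_alpha[1:]

      pred = np.sign(K @ alpha + b)
      print("training accuracy:", (pred == y).mean())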

  6. Registering Cortical Surfaces Based on Whole-Brain Structural Connectivity and Continuous Connectivity Analysis

    PubMed Central

    Gutman, Boris; Leonardo, Cassandra; Jahanshad, Neda; Hibar, Derrek; Eschenburg, Kristian; Nir, Talia; Villalon, Julio; Thompson, Paul

    2014-01-01

    We present a framework for registering cortical surfaces based on tractography-informed structural connectivity. We define connectivity as a continuous kernel on the product space of the cortex, and develop a method for estimating this kernel from tractography fiber models. Next, we formulate the kernel registration problem, and present a means to non-linearly register two brains’ continuous connectivity profiles. We apply theoretical results from operator theory to develop an algorithm for decomposing the connectome into its shared and individual components. Lastly, we extend two discrete connectivity measures to the continuous case, and apply our framework to 98 Alzheimer’s patients and controls. Our measures show significant differences between the two groups. PMID:25320795

  7. A new treatment of nonlocality in scattering process

    NASA Astrophysics Data System (ADS)

    Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.

    2018-01-01

    Nonlocality in the scattering potential leads to an integro-differential equation, in which nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the (r, r′)-dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term to a homogeneous term. The effective local potential in this equation turns out to be energy independent but dependent on the relative angular momentum. The method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off 12C, 56Fe and 100Mo nuclei are calculated with this method in the low energy region (up to 10 MeV) and are found to be in reasonable accord with experiments.

  8. Application of fractional derivative with exponential law to bi-fractional-order wave equation with frictional memory kernel

    NASA Astrophysics Data System (ADS)

    Cuahutenango-Barro, B.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.

    2017-12-01

    Analytical solutions of the wave equation with bi-fractional order and a frictional memory kernel of Mittag-Leffler type are obtained via the Caputo-Fabrizio fractional derivative in the Liouville-Caputo sense. Through the method of separation of variables and the Laplace transform, we derive closed-form solutions and establish fundamental solutions. Special cases with homogeneous Dirichlet boundary conditions, nonhomogeneous initial conditions, and external forcing are considered. Numerical simulations of the special solutions were performed, and novel behaviors are observed.

  9. Triple collinear emissions in parton showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Höche, Stefan; Prestel, Stefan

    2017-10-01

    A framework to include triple collinear splitting functions into parton showers is presented, and the implementation of flavor-changing NLO splitting kernels is discussed as a first application. The correspondence between the Monte-Carlo integration and the analytic computation of NLO DGLAP evolution kernels is made explicit for both timelike and spacelike parton evolution. Numerical simulation results are obtained with two independent implementations of the new algorithm, using the two independent event generation frameworks Pythia and Sherpa.

  10. Using the Kernel Method of Test Equating for Estimating the Standard Errors of Population Invariance Measures

    ERIC Educational Resources Information Center

    Moses, Tim

    2008-01-01

    Equating functions are supposed to be population invariant, meaning that the choice of subpopulation used to compute the equating function should not matter. The extent to which equating functions are population invariant is typically assessed in terms of practical difference criteria that do not account for equating functions' sampling…

  11. Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2010-01-01

    In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…

  12. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels on the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database, and the Banana dataset validate the effectiveness of the proposed method.
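
    A minimal sketch of one such recursive construction, under the assumption of an elementwise-exponential activation (chosen here because the entrywise exponential of a nonnegatively weighted combination of positive semi-definite Gram matrices is again positive semi-definite; the paper's actual activations and training procedure may differ):

      import numpy as np

      def deep_kernel_layer(Ks, W):
          # One layer: each unit applies a nonlinear activation (elementwise exp)
          # to a nonnegatively weighted combination of the previous layer's Grams.
          return [np.exp(sum(w[m] * Ks[m] for m in range(len(Ks)))) for w in W]

      rng = np.random.default_rng(0)
      X = rng.standard_normal((20, 3))
      d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
      elementary = [np.exp(-d2), np.exp(-0.1 * d2)]      # two elementary RBF Grams

      hidden = deep_kernel_layer(elementary, W=[[0.5, 0.5], [1.0, 0.0]])
      K_deep = deep_kernel_layer(hidden, W=[[0.7, 0.3]])[0]   # final deep Gram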

  13. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the amount of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels as well as other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
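
    A minimal sketch of the linear combination construction, assuming a simple Laplacian-type base kernel over spike times (both the base kernel and the unconstrained coefficient matrix C are illustrative; the paper's subclasses constrain C, e.g., to a diagonal, to reduce the number of parameters):

      import numpy as np

      def base_spike_kernel(s, t, tau=0.01):
          # Single-neuron spike-train kernel: sum of exp(-|t_i - t_j| / tau)
          if len(s) == 0 or len(t) == 0:
              return 0.0
          gaps = np.abs(np.subtract.outer(np.asarray(s), np.asarray(t)))
          return np.exp(-gaps / tau).sum()

      def lc_kernel(X, Y, C, k=base_spike_kernel):
          # R-convolution linear combination over neuron pairs:
          # K(X, Y) = sum_ij C[i, j] * k(X[i], Y[j])
          return sum(C[i, j] * k(X[i], Y[j])
                     for i in range(len(X)) for j in range(len(Y)))

      X = [[0.010, 0.052], [0.023]]    # spike times (s) for two neurons, trial 1
      Y = [[0.012, 0.048], [0.030]]    # trial 2
      C = np.eye(2)                    # diagonal C recovers the "sum kernel"
      value = lc_kernel(X, Y, C)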

  14. Color-suppression of non-planar diagrams in bosonic bound states

    NASA Astrophysics Data System (ADS)

    Alvarenga Nogueira, J. H.; Ji, Chueng-Ryong; Ydrefors, E.; Frederico, T.

    2018-02-01

    We study the suppression of non-planar diagrams in a scalar QCD model of a meson system in 3 + 1 space-time dimensions due to the inclusion of the color degrees of freedom. As a prototype of the color-singlet meson, we consider a flavor-nonsinglet system consisting of a scalar-quark and a scalar-antiquark with equal masses exchanging a scalar-gluon of a different mass, which is investigated within the framework of the homogeneous Bethe-Salpeter equation. The equation is solved by using the Nakanishi representation for the manifestly covariant bound-state amplitude and its light-front projection. The resulting non-singular integral equation is solved numerically. The damping of the impact of the cross-ladder kernel on the binding energies is studied in detail. The color-suppression of the cross-ladder effects on the light-front wave function and the elastic electromagnetic form factor is also discussed. As our results show, the suppression is significantly large for Nc = 3, which supports the use of rainbow-ladder truncations in practical non-perturbative calculations within QCD.

  15. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh, and the projection-mesh methods. The possibilities of using these algorithms for practically important problems are investigated in detail. A disadvantage of the mesh algorithms is identified: they require evaluating the kernels of the integral equations at fixed points. In practice, these kernels have integrable singularities, and evaluating them at such points is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
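
    A minimal sketch of one such randomized functional (collision-type) estimator for u(x) = f(x) + λ ∫_a^b k(x, y) u(y) dy, valid when the Neumann series converges (the kernel, sampling density, and truncation depth below are illustrative):

      import numpy as np

      def fredholm_mc(f, k, lam, x0, a, b, n_walks=20000, n_terms=20, seed=0):
          # Estimate u(x0) by sampling the Neumann series with uniform transitions
          rng = np.random.default_rng(seed)
          total = 0.0
          for _ in range(n_walks):
              x, w, acc = x0, 1.0, f(x0)
              for _ in range(n_terms):
                  y = rng.uniform(a, b)
                  w *= lam * k(x, y) * (b - a)   # importance weight, p(y) = 1/(b-a)
                  acc += w * f(y)
                  x = y
              total += acc
          return total / n_walks

      u0 = fredholm_mc(f=np.cos, k=lambda x, y: 0.3 * np.exp(-abs(x - y)),
                       lam=0.5, x0=0.0, a=0.0, b=1.0)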

  16. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    NASA Astrophysics Data System (ADS)

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

    This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel that satisfies nonlinear boundary conditions, standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method that combines the reproducing kernel Hilbert space method with a shooting-like technique to solve such problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, possibly for the first time. Some numerical results are given to demonstrate the applicability of the method.

  17. Data-driven parameterization of the generalized Langevin equation

    DOE PAGES

    Lei, Huan; Baker, Nathan A.; Li, Xiantao

    2016-11-29

    We present a data-driven approach to determining the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grained variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
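
    A minimal sketch of the memory-free embedding for the lowest-order (single-pole) rational approximation, i.e., an exponential kernel K(t) = (γ/τ) e^{-t/τ} acting on a free particle (the higher orders and the data-driven fitting step of the paper are not shown; all parameters are illustrative):

      import numpy as np

      def gle_exponential_kernel(m=1.0, gamma=1.0, tau=0.5, kT=1.0,
                                 dt=1e-3, n_steps=100_000, seed=0):
          # Extended Markovian system: m * dv/dt = u,
          # du = (-u/tau - (gamma/tau) * v) dt + (sqrt(2 gamma kT)/tau) dW.
          # Eliminating u reproduces the GLE with kernel (gamma/tau) e^{-t/tau}
          # and colored noise satisfying the fluctuation-dissipation theorem.
          rng = np.random.default_rng(seed)
          v, u = 0.0, 0.0
          amp = np.sqrt(2.0 * gamma * kT) / tau
          vs = np.empty(n_steps)
          for i in range(n_steps):
              v += (u / m) * dt
              u += (-(u / tau) - (gamma / tau) * v) * dt \
                   + amp * np.sqrt(dt) * rng.standard_normal()
              vs[i] = v
          return vs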

  18. Generalized Langevin equation with tempered memory kernel

    NASA Astrophysics Data System (ADS)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

    We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that, in the presence of truncation, the particle crosses over from subdiffusive behavior in the short-time limit to normal diffusion in the long-time limit. The case of a harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By considering an external time-dependent periodic force, we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle's movement. Additionally, a double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter is found to strongly influence the behavior of these quantities, and it is shown how it changes the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All the results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. Such truncated Langevin dynamics can be highly relevant for describing the lateral diffusion of lipids and proteins in cell membranes.

  19. Triple collinear emissions in parton showers

    DOE PAGES

    Hoche, Stefan; Prestel, Stefan

    2017-10-17

    A framework to include triple collinear splitting functions into parton showers is presented, and the implementation of flavor-changing next-to-leading-order (NLO) splitting kernels is discussed as a first application. The correspondence between the Monte Carlo integration and the analytic computation of NLO DGLAP evolution kernels is made explicit for both timelike and spacelike parton evolution. Finally, numerical simulation results are obtained with two independent implementations of the new algorithm, using the two independent event generation frameworks PYTHIA and SHERPA.

  20. A robust nonparametric framework for reconstruction of stochastic differential equation models

    NASA Astrophysics Data System (ADS)

    Rajabzadeh, Yalda; Rezaie, Amir Hossein; Amindavar, Hamidreza

    2016-05-01

    In this paper, we employ a nonparametric framework to robustly estimate the functional forms of the drift and diffusion terms from discrete stationary time series. The proposed method significantly improves the accuracy of the parameter estimation. In this framework, the drift and diffusion coefficients are modeled through orthogonal Legendre polynomials. We employ a least-squares regression approach along with the Euler-Maruyama approximation method to learn the coefficients of the stochastic model. Next, a numerical discrete construction of the mean squared prediction error (MSPE) is established to select the order of the Legendre polynomials in the drift and diffusion terms. We show numerically that the new method is robust against variations in sample size and sampling rate. The performance of our method in comparison with the kernel-based regression (KBR) method is demonstrated through simulation and real data. For the real dataset, we test our method on discriminating healthy electroencephalogram (EEG) signals from epileptic ones. We also demonstrate the efficiency of the method through prediction on financial data. In both simulation and real data, our algorithm outperforms the KBR method.
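
    A minimal sketch of the regression step under the Euler-Maruyama approximation dx ≈ a(x) dt + b(x) dW: the drift is fit to the scaled increments and the squared diffusion to the squared residuals, both in a Legendre basis (the polynomial orders and scaling below are illustrative; the paper selects the orders via the MSPE criterion):

      import numpy as np
      from numpy.polynomial.legendre import legvander

      def fit_sde(x, dt, deg_a=3, deg_b=3):
          # Map states to [-1, 1] so the Legendre basis covers the data range
          lo, hi = x.min(), x.max()
          s = 2.0 * (x[:-1] - lo) / (hi - lo) - 1.0
          dx = np.diff(x)
          Va = legvander(s, deg_a)
          ca, *_ = np.linalg.lstsq(Va, dx / dt, rcond=None)        # drift a(x)
          resid = dx - (Va @ ca) * dt
          Vb = legvander(s, deg_b)
          cb, *_ = np.linalg.lstsq(Vb, resid**2 / dt, rcond=None)  # diffusion b(x)^2
          return ca, cb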

  1. GPU-Accelerated Forward and Back-Projections with Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction.

    PubMed

    Ha, S; Matej, S; Ispiryan, M; Mueller, K

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit GPU instruction-level parallelism to efficiently hide the long latencies of the memory operations. Our experiments indicate that the time performance of our GPU implementation of the projection operators is slightly faster than, or approximately comparable to, that of FFT-based approaches using state-of-the-art FFTW routines. Most importantly, however, our GPU framework can also efficiently handle any generic system response kernel, whether spatially symmetric and shift-variant or spatially asymmetric and shift-variant, neither of which an FFT-based approach can cope with.

  2. GPU-Accelerated Forward and Back-Projections With Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction

    NASA Astrophysics Data System (ADS)

    Ha, S.; Matej, S.; Ispiryan, M.; Mueller, K.

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit GPU instruction-level parallelism to efficiently hide the long latencies of the memory operations. Our experiments indicate that the time performance of our GPU implementation of the projection operators is slightly faster than, or approximately comparable to, that of FFT-based approaches using state-of-the-art FFTW routines. Most importantly, however, our GPU framework can also efficiently handle any generic system response kernel, whether spatially symmetric and shift-variant or spatially asymmetric and shift-variant, neither of which an FFT-based approach can cope with.

  3. Using the Kernel Method of Test Equating for Estimating the Standard Errors of Population Invariance Measures. Research Report. ETS RR-06-20

    ERIC Educational Resources Information Center

    Moses, Tim

    2006-01-01

    Population invariance is an important requirement of test equating. An equating function is said to be population invariant when the choice of (sub)population used to compute the equating function does not matter. In recent studies, the extent to which equating functions are population invariant is typically addressed in terms of practical…

  4. The role of fractional time-derivative operators on anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Tateishi, Angel A.; Ribeiro, Haroldo V.; Lenzi, Ervin K.

    2017-10-01

    Generalized diffusion equations with fractional-order derivatives have been shown to be quite efficient in describing diffusion in complex systems, with the advantage of producing exact expressions for the underlying diffusive properties. Recently, researchers have proposed different fractional-time operators (namely, the Caputo-Fabrizio and Atangana-Baleanu operators) which, differently from the well-known Riemann-Liouville operator, are defined by non-singular memory kernels. Here we propose to use these new operators to generalize the usual diffusion equation. By analyzing the corresponding fractional diffusion equations within the continuous-time random walk framework, we obtain waiting time distributions characterized by exponential, stretched exponential, and power-law functions, as well as a crossover between two behaviors. For the mean square displacement, we find crossovers between usual and confined diffusion, and between usual and sub-diffusion. We obtain exact expressions for the probability distributions, in which non-Gaussian and stationary distributions emerge. The latter feature is remarkable because the fractional diffusion equation is solved without external forces and subject to free-diffusion boundary conditions. We further show that these new fractional diffusion equations are related to diffusive processes with stochastic resetting and to fractional diffusion equations with derivatives of distributed order. Thus, our results suggest that these new operators may be a simple and efficient way of incorporating different structural aspects into the system, opening new possibilities for modeling and investigating anomalous diffusive processes.
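
    For concreteness, the two non-singular-kernel operators referred to above are usually written (for 0 < α < 1, with normalization functions M(α) and B(α) satisfying M(0) = M(1) = B(0) = B(1) = 1; conventions vary slightly across the literature) as

      {}^{CF}D_t^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha} \int_0^t f'(s)\, \exp\!\Big(-\frac{\alpha (t-s)}{1-\alpha}\Big)\, ds,

      {}^{ABC}D_t^{\alpha} f(t) = \frac{B(\alpha)}{1-\alpha} \int_0^t f'(s)\, E_{\alpha}\!\Big(-\frac{\alpha (t-s)^{\alpha}}{1-\alpha}\Big)\, ds,

    where E_α is the one-parameter Mittag-Leffler function; both kernels remain bounded at s = t, in contrast to the (t-s)^{-α} kernel of the Riemann-Liouville and Caputo operators.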

  5. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.

    PubMed

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many settings, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel-based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework is not only capable of detecting and identifying faults in the robot driving system, but also performs better in stability and diagnosis accuracy than traditional methods.

  6. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System

    PubMed Central

    Yuan, Xianfeng; Song, Mumin; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many settings, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel-based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework is not only capable of detecting and identifying faults in the robot driving system, but also performs better in stability and diagnosis accuracy than traditional methods. PMID:26229526

  7. A new analysis of the Fornberg-Whitham equation pertaining to a fractional derivative with Mittag-Leffler-type kernel

    NASA Astrophysics Data System (ADS)

    Kumar, Devendra; Singh, Jagdev; Baleanu, Dumitru

    2018-02-01

    The mathematical model of the breaking of non-linear dispersive water waves with memory effect is very important in mathematical physics. In the present article, we examine a novel fractional extension of the non-linear Fornberg-Whitham equation occurring in wave breaking. We consider the most recent theory of differentiation, involving the non-singular kernel based on the extended Mittag-Leffler-type function, to modify the Fornberg-Whitham equation. We examine the existence of the solution of the non-linear Fornberg-Whitham equation of fractional order and show its uniqueness. We obtain the numerical solution of the new arbitrary-order model of the non-linear Fornberg-Whitham equation with the aid of the Laplace decomposition technique. The numerical outcomes are displayed in the form of graphs and tables. The results indicate that the Laplace decomposition algorithm is a very user-friendly and reliable scheme for handling such non-linear problems of fractional order.

  8. Modeling electro-magneto-hydrodynamic thermo-fluidic transport of biofluids with new trend of fractional derivative without singular kernel

    NASA Astrophysics Data System (ADS)

    Abdulhameed, M.; Vieru, D.; Roslan, R.

    2017-10-01

    This paper investigates the electro-magneto-hydrodynamic flow of non-Newtonian biofluids, with heat transfer, through a cylindrical microchannel. The fluid is driven by an arbitrary time-dependent pressure gradient, an external electric field, and an external magnetic field. The governing equations are formulated as fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivative without singular kernel. The usefulness of fractional calculus in studying fluid flows and heat and mass transfer phenomena is well established, and several experimental measurements have led to the conclusion that, for such problems, models described by fractional differential equations are more suitable. The most common time-fractional derivative used in continuum mechanics is the Caputo derivative. However, two disadvantages appear when this derivative is used: first, its defining kernel is a singular function; second, the analytical solutions are expressed through generalized functions (Mittag-Leffler, Lorenzo-Hartley, Robotnov, etc.) which, in general, are not well suited to numerical calculation. The new Caputo-Fabrizio time-fractional derivative, without singular kernel, is more suitable for solving various theoretical and practical problems involving fractional differential equations: the calculations are simpler, and the solutions obtained are expressed in terms of elementary functions. Analytical solutions for the biofluid velocity and thermal transport are obtained by means of the Laplace and finite Hankel transforms. The influence of the fractional parameter, the Eckert number, and the Joule heating parameter on the biofluid velocity and thermal transport is analyzed numerically and presented graphically. This can be important in biochip technology, making this analysis technique extremely effective for controlling nanovolume bioliquid samples in the microfluidic devices used for biological analysis and medical diagnosis.

  9. Quantum kinetic expansion in the spin-boson model: Matrix formulation and system-bath factorized initial state.

    PubMed

    Gong, Zhihao; Tang, Zhoufei; Wang, Haobin; Wu, Jianlan

    2017-12-28

    Within the framework of the hierarchy equation of motion (HEOM), the quantum kinetic expansion (QKE) method of the spin-boson model is reformulated in the matrix representation. The equivalence between the two formulations (HEOM matrices and quantum operators) is numerically verified from the calculation of the time-integrated QKE rates. The matrix formulation of the QKE is extended to the system-bath factorized initial state. Following a one-to-one mapping between HEOM matrices and quantum operators, a quantum kinetic equation is rederived. The rate kernel is modified by an extra term following a systematic expansion over the site-site coupling. This modified QKE is numerically tested for its reliability by calculating the time-integrated rate and non-Markovian population kinetics. For an intermediate-to-strong dissipation strength and a large site-site coupling, the population transfer is found to be significantly different when the initial condition is changed from the local equilibrium to system-bath factorized state.

  10. Optical properties of body-centered tetragonal C4: Insights from many-body perturbation and time-dependent density functional theories

    NASA Astrophysics Data System (ADS)

    Tarighi Ahmadpour, Mahdi; Rostamnejadi, Ali; Hashemifar, S. Javad

    2018-04-01

    We study the electronic structure and optical properties of a body-centered tetragonal phase of carbon (bct-C4) within the framework of time-dependent density functional theory and the Bethe-Salpeter equation. The results indicate that the optical properties of bct-C4 are strongly affected by the electron-hole interaction. It is demonstrated that long-range corrected exchange-correlation kernels can fairly reproduce the Bethe-Salpeter equation results. The effective carrier number reveals that, at energies above 30 eV, excitonic effects are no longer dominant and the optical transitions originate mainly from electronic excitations. The peaks in the calculated electron energy loss spectra are discussed in terms of plasmon excitations and interband transitions. The results indicate that bct-C4 is an indirect wide-band-gap semiconductor, transparent in the visible region and opaque in the ultraviolet spectral range.

  11. Dielectric relaxation measurement and analysis of restricted water structure in rice kernels

    NASA Astrophysics Data System (ADS)

    Yagihara, Shin; Oyama, Mikio; Inoue, Akio; Asano, Megumi; Sudo, Seiichi; Shinyashiki, Naoki

    2007-04-01

    Dielectric relaxation measurements were performed on rice kernels by time domain reflectometry (TDR) with flat-end coaxial electrodes. Difficulties in achieving good contact between the electrode surfaces and the kernels are eliminated by a TDR set-up with a sample holder for a single kernel, and the water content could be evaluated from the relaxation curves. Dielectric measurements were performed on rice kernels, rice flour, and boiled rice with various water contents, and the water amount and the dynamic behaviour of the water molecules are explained in terms of the restricted dynamics of water molecules and the τ-β diagram (relaxation time versus the relaxation-time distribution parameter of the Cole-Cole equation). In comparison with other aqueous systems, the dynamic structure of water in moist rice is more similar to that of aqueous dispersion systems than to that of aqueous solutions.

  12. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de

    2015-06-28

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel is in turn parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.

  13. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.

  14. Symmetry preserving truncations of the gap and Bethe-Salpeter equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binosi, Daniele; Chang, Lei; Papavassiliou, Joannis

    2016-05-01

    Ward-Green-Takahashi (WGT) identities play a crucial role in hadron physics, e.g. imposing stringent relationships between the kernels of the one- and two-body problems, which must be preserved in any veracious treatment of mesons as bound states. In this connection, one may view the dressed gluon-quark vertex, Γ^α_μ, as fundamental. We use a novel representation of Γ^α_μ, in terms of the gluon-quark scattering matrix, to develop a method capable of elucidating the unique quark-antiquark Bethe-Salpeter kernel, K, that is symmetry consistent with a given quark gap equation. A strength of the scheme is its ability to expose and capitalize on graphic symmetries within the kernels. This is displayed in an analysis that reveals the origin of H-diagrams in K, which are two-particle-irreducible contributions, generated as two-loop diagrams involving the three-gluon vertex, that cannot be absorbed as a dressing of Γ^α_μ in a Bethe-Salpeter kernel nor expressed as a member of the class of crossed-box diagrams. Thus, there are no general circumstances under which the WGT identities essential for a valid description of mesons can be preserved by a Bethe-Salpeter kernel obtained simply by dressing both gluon-quark vertices in a ladderlike truncation; and, moreover, adding any number of similarly dressed crossed-box diagrams cannot improve the situation.

  15. Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2017-10-01

    FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an RF field, the hot plasma dielectric response is limited to a distance of a few particle Larmor radii near the magnetic field line passing through the test point. This localization of the hot plasma dielectric response results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present initial results of modeling RF waves using the FullWave code, including the calculation of the nonlocal conductivity kernel in 2D tokamak geometry; the interpolation of the conductivity kernel from test points to the adaptive cloud of computational points; and the results of self-consistent simulations of 2D RF fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.

  16. Diffuse correlation tomography in the transport regime: A theoretical study of the sensitivity to Brownian motion.

    PubMed

    Tricoli, Ugo; Macdonald, Callum M; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A

    2018-02-01

    Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.

  17. Diffuse correlation tomography in the transport regime: A theoretical study of the sensitivity to Brownian motion

    NASA Astrophysics Data System (ADS)

    Tricoli, Ugo; Macdonald, Callum M.; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A.

    2018-02-01

    Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.

  18. Stochastic quantization of (λϕ4)d scalar theory: Generalized Langevin equation with memory kernel

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.

    2007-02-01

    The method of stochastic quantization for a scalar field theory is reviewed. A brief survey of the case of a self-interacting scalar field, implementing stochastic perturbation theory up to the one-loop level, is presented. Then a colored random noise is introduced into the Einstein relations, a common prescription employed by one of the stochastic regularizations to control the ultraviolet divergences of the theory. This formalism is extended to the case where a Langevin equation with a memory kernel is used. It is shown that, maintaining the Einstein relations with a colored noise, there is convergence to a non-regularized theory.

  19. Enriched reproducing kernel particle method for fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam

    2018-06-01

    The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients, and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate the method within a moving least-squares approach. By enriching the traditional integer-order basis for RKPM with fractional-order power functions, the leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation of the boundary layer. Numerical tests are performed to verify the proposed approach.

  20. Development of FullWave : Hot Plasma RF Simulation Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei

    2017-10-01

    A full-wave simulation tool modeling RF fields in hot inhomogeneous magnetized plasmas is being developed. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space, without limiting approximations, by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures, and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of the finite differences for approximating derivatives on an adaptive cloud of computational points; a model and results of the nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full-wave simulations with the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of the hot plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.

  1. Brownian motion of a nano-colloidal particle: the role of the solvent.

    PubMed

    Torres-Carbajal, Alexis; Herrera-Velarde, Salvador; Castañeda-Priego, Ramón

    2015-07-15

    Brownian motion is a feature of colloidal particles immersed in a liquid-like environment. Usually, it can be described by means of the generalised Langevin equation (GLE) within the framework of the Mori theory. In principle, all quantities that appear in the GLE can be calculated from the molecular information of the whole system, i.e., colloids and solvent molecules. In this work, by means of extensive Molecular Dynamics simulations, we study the effects of the microscopic details and the thermodynamic state of the solvent on the movement of a single nano-colloid. In particular, we consider a two-dimensional model system in which the mass and size of the colloid are, respectively, two and one orders of magnitude larger than those of the solvent molecules. The latter interact via a Lennard-Jones-type potential that tunes the nature of the solvent, i.e., it can be either repulsive or attractive. We choose the linear momentum of the Brownian particle as the observable of interest in order to fully describe the Brownian motion within the Mori framework. We particularly focus on the colloid diffusion at different solvent densities and two temperature regimes: high and low (near the critical point) temperatures. To reach our goal, we have rewritten the GLE as a Volterra integral equation of the second kind in order to compute the memory kernel in real space. With this kernel, we evaluate the momentum-fluctuating-force correlation function, which is of particular relevance since it allows us to establish when the stationarity condition has been reached. Our findings show that, even at high temperatures, the details of the attractive interaction potential among solvent molecules induce important changes in the colloid dynamics. Additionally, near the critical point the dynamical scenario becomes more complex: all the correlation functions decay slowly over an extended time window, yet the memory kernel seems to be a function of the solvent density only. Thus, the explicit inclusion of the solvent in the description of Brownian motion allows us to better understand the behaviour of the memory kernel at thermodynamic states near the critical region without any further approximation. This information is useful for developing more realistic descriptions of Brownian motion that take into account the particular details of the host medium.
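
    A minimal sketch of the kernel extraction from such a Volterra equation of the second kind: with C(t) = ⟨p(t)p(0)⟩, the Mori equation dC/dt = -∫_0^t K(t-s) C(s) ds can be inverted step by step on a uniform grid (trapezoidal discretization; the initial value K(0) = -C''(0)/C(0) is taken from a finite difference, and the simulation details of the paper are not reproduced):

      import numpy as np

      def memory_kernel(C, dt):
          # Solve dC/dt = -int_0^t K(t-s) C(s) ds for K on a uniform time grid
          n = len(C)
          dC = np.gradient(C, dt)
          K = np.zeros(n)
          K[0] = -(C[2] - 2.0 * C[1] + C[0]) / (dt * dt) / C[0]
          for i in range(1, n):
              conv = dt * (0.5 * K[0] * C[i] + np.dot(K[1:i], C[i-1:0:-1]))
              K[i] = (-dC[i] - conv) / (0.5 * dt * C[0])
          return K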

  2. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810

  3. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  4. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
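
    A minimal sketch of a kernelized EM update on the coefficient image (schematic only: A is a toy system matrix, K a kernel matrix built from the anatomical prior, and normalization, attenuation, randoms, and ordered subsets are all omitted):

      import numpy as np

      def kernel_mlem(A, K, y, n_iter=50):
          # Image is represented as x = K @ alpha; the EM update acts on alpha
          alpha = np.ones(K.shape[1])
          sens = K.T @ (A.T @ np.ones(A.shape[0]))      # sensitivity in alpha space
          for _ in range(n_iter):
              proj = A @ (K @ alpha)                    # forward projection
              ratio = y / np.maximum(proj, 1e-12)
              alpha *= (K.T @ (A.T @ ratio)) / np.maximum(sens, 1e-12)
          return K @ alpha                              # reconstructed image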

  5. Multiscale Support Vector Learning With Projection Operator Wavelet Kernel for Nonlinear Dynamical System Identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2016-02-03

    A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.

  6. Lévy processes on a generalized fractal comb

    NASA Astrophysics Data System (ADS)

    Sandev, Trifce; Iomin, Alexander; Méndez, Vicenç

    2016-09-01

    Comb geometry, constituted of a backbone and fingers, is one of the simplest paradigms of a two-dimensional structure on which anomalous diffusion can be realized in the framework of Markov processes. However, the intrinsic properties of the structure can destroy this Markovian transport. These effects can be described by memory and spatial kernels. In particular, the fractal structure of the fingers, which is controlled by the spatial kernel in both real and Fourier space, leads to Lévy processes (Lévy flights) and superdiffusion. This generalization of fractional diffusion is described by the Riesz space-fractional derivative. In the framework of this generalized fractal comb model, Lévy processes are considered, exact solutions for the probability distribution functions are obtained in terms of the Fox H-function for a variety of memory kernels, and the rate of superdiffusive spreading is studied by calculating the fractional moments. For a special form of the memory kernels, we also observe a competition between long rests and long jumps. Finally, we consider a fractal structure of the fingers controlled by a Weierstrass function, which leads to a power-law kernel in Fourier space. This is a special case in which the second moment exists for superdiffusion in this competition between long rests and long jumps.

  7. A robust multi-kernel change detection framework for detecting leaf beetle defoliation using Landsat 7 ETM+ data

    NASA Astrophysics Data System (ADS)

    Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim

    2016-12-01

    A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used approach is to compute change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g., the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers that assume parametric models, e.g., Gaussian functions, for the class-conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e., its non-parametric and probabilistic nature, to model the class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning the model parameters automatically, instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with the Support Vector Machine (SVM) and NB for detecting defoliation caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations in two test areas in Tasmania, Australia, using raw bands and band-combination indices of Landsat 7 ETM+. Owing to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with its Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and it is more robust to changing data distributions. Its performance is comparable to the SVM, with the added advantages of being probabilistic and of handling multi-class problems naturally in its original formulation.

  8. The resolvent of singular integral equations. [of kernel functions in mixed boundary value problems

    NASA Technical Reports Server (NTRS)

    Williams, M. H.

    1977-01-01

    The investigation is concerned with the construction of the resolvent for any given kernel function. In problems with ill-behaved inhomogeneous terms, as, for instance, in the aerodynamic problem of flow over a flapped airfoil, direct numerical methods become very difficult. A solution method based on the resolvent, which can be employed in such problems, is described.

  9. Method of mechanical quadratures for solving singular integral equations of various types

    NASA Astrophysics Data System (ADS)

    Sahakyan, A. V.; Amirjanyan, H. A.

    2018-04-01

    The method of mechanical quadratures is proposed as a unified approach for solving integral equations defined on finite intervals and containing Cauchy-type singular integrals. The method can be used to solve singular integral equations of the first and second kind, equations with generalized kernels, weakly singular equations, and integro-differential equations. Quadrature rules for several different integrals, represented through the same coefficients, are presented; this allows one to reduce integral equations containing integrals of different types to a system of linear algebraic equations.
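
    The classical instance of such a rule, for the dominant Cauchy part with the first-kind Chebyshev weight, evaluates the singular integral with the same nodes and coefficients as the regular part (stated here for reference; the paper's rules cover several further integral types):

      \frac{1}{\pi} \int_{-1}^{1} \frac{\varphi(t)}{\sqrt{1-t^{2}}\,(t - x_j)}\, dt \approx \frac{1}{n} \sum_{k=1}^{n} \frac{\varphi(t_k)}{t_k - x_j},

      t_k = \cos\frac{(2k-1)\pi}{2n}, \qquad x_j = \cos\frac{j\pi}{n}, \quad j = 1, \dots, n-1,

    which reduces a dominant singular integral equation to n-1 linear algebraic equations in the n unknowns φ(t_k), to be closed by an additional condition fixed by the index of the equation.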

  10. An accurate and efficient method for evaluating the kernel of the integral equation relating pressure to normalwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
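
    The construction described above can be imitated in a few lines: fix a geometric sequence of exponents, build the design matrix of exponentials, and compute the coefficients by linear least squares. The target function below stands in for the algebraic part of the kernel, and the grid, number of terms, and spacing ratio are illustrative assumptions (the paper also optimizes the exponent multiplier itself, which is omitted here).

    ```python
    import numpy as np

    u = np.linspace(0.0, 20.0, 2000)               # sample grid (assumption)
    h = 1.0 - u / np.sqrt(1.0 + u ** 2)            # stand-in algebraic kernel part

    m = 12
    c = 0.1 * 2.0 ** np.arange(m)                  # geometrically spaced exponents
    E = np.exp(-np.outer(u, c))                    # design matrix, one column per term

    a, *_ = np.linalg.lstsq(E, h, rcond=None)      # least-squares coefficients
    max_err = np.abs(E @ a - h).max()              # accuracy of the m-term approximation
    ```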

  11. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
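
    The sparsification step mentioned above admits a compact illustration. The sketch below implements approximate-linear-dependence (ALD) analysis: a sample joins the kernel dictionary only if its feature-space image is not well approximated by the span of the current dictionary. The RBF kernel and the threshold nu are illustrative assumptions, not the exact settings of the paper.

    ```python
    import numpy as np

    def rbf(x, y, sigma=1.0):
        return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

    def ald_dictionary(samples, nu=1e-3, sigma=1.0):
        dic = [samples[0]]
        for x in samples[1:]:
            K = np.array([[rbf(a, b, sigma) for b in dic] for a in dic])
            kx = np.array([rbf(d, x, sigma) for d in dic])
            # delta = k(x, x) - kx^T K^{-1} kx: squared feature-space distance
            # of phi(x) from the span of the current dictionary.
            coef = np.linalg.solve(K + 1e-10 * np.eye(len(dic)), kx)
            delta = rbf(x, x, sigma) - kx @ coef
            if delta > nu:                          # poorly approximated: keep the sample
                dic.append(x)
        return np.array(dic)
    ```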

  12. Solution of two-body relativistic bound state equations with confining plus Coulomb interactions

    NASA Technical Reports Server (NTRS)

    Maung, Khin Maung; Kahana, David E.; Norbury, John W.

    1992-01-01

    Studies of meson spectroscopy have often employed a nonrelativistic Coulomb plus Linear Confining potential in position space. However, because the quarks in mesons move at an appreciable fraction of the speed of light, it is necessary to use a relativistic treatment of the bound state problem. Such a treatment is most easily carried out in momentum space. However, the position space Linear and Coulomb potentials lead to singular kernels in momentum space. Using a subtraction procedure we show how to remove these singularities exactly and thereby solve the Schroedinger equation in momentum space for all partial waves. Furthermore, we generalize the Linear and Coulomb potentials to relativistic kernels in four dimensional momentum space. Again we use a subtraction procedure to remove the relativistic singularities exactly for all partial waves. This enables us to solve three dimensional reductions of the Bethe-Salpeter equation. We solve six such equations for Coulomb plus Confining interactions for all partial waves.

  13. Next generation extended Lagrangian first principles molecular dynamics

    NASA Astrophysics Data System (ADS)

    Niklasson, Anders M. N.

    2017-08-01

    Extended Lagrangian Born-Oppenheimer molecular dynamics [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] is formulated for general Hohenberg-Kohn density-functional theory and compared with the extended Lagrangian framework of first principles molecular dynamics by Car and Parrinello [Phys. Rev. Lett. 55, 2471 (1985)]. It is shown how extended Lagrangian Born-Oppenheimer molecular dynamics overcomes several shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while improving or maintaining important features of Car-Parrinello simulations. The accuracy of the electronic degrees of freedom in extended Lagrangian Born-Oppenheimer molecular dynamics, with respect to the exact Born-Oppenheimer solution, is of second-order in the size of the integration time step and of fourth order in the potential energy surface. Improved stability over recent formulations of extended Lagrangian Born-Oppenheimer molecular dynamics is achieved by generalizing the theory to finite temperature ensembles, using fractional occupation numbers in the calculation of the inner-product kernel of the extended harmonic oscillator that appears as a preconditioner in the electronic equations of motion. Material systems that normally exhibit slow self-consistent field convergence can be simulated using integration time steps of the same order as in direct Born-Oppenheimer molecular dynamics, but without the requirement of an iterative, non-linear electronic ground-state optimization prior to the force evaluations and without a systematic drift in the total energy. In combination with proposed low-rank and on the fly updates of the kernel, this formulation provides an efficient and general framework for quantum-based Born-Oppenheimer molecular dynamics simulations.

  14. Next generation extended Lagrangian first principles molecular dynamics.

    PubMed

    Niklasson, Anders M N

    2017-08-07

    Extended Lagrangian Born-Oppenheimer molecular dynamics [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] is formulated for general Hohenberg-Kohn density-functional theory and compared with the extended Lagrangian framework of first principles molecular dynamics by Car and Parrinello [Phys. Rev. Lett. 55, 2471 (1985)]. It is shown how extended Lagrangian Born-Oppenheimer molecular dynamics overcomes several shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while improving or maintaining important features of Car-Parrinello simulations. The accuracy of the electronic degrees of freedom in extended Lagrangian Born-Oppenheimer molecular dynamics, with respect to the exact Born-Oppenheimer solution, is of second-order in the size of the integration time step and of fourth order in the potential energy surface. Improved stability over recent formulations of extended Lagrangian Born-Oppenheimer molecular dynamics is achieved by generalizing the theory to finite temperature ensembles, using fractional occupation numbers in the calculation of the inner-product kernel of the extended harmonic oscillator that appears as a preconditioner in the electronic equations of motion. Material systems that normally exhibit slow self-consistent field convergence can be simulated using integration time steps of the same order as in direct Born-Oppenheimer molecular dynamics, but without the requirement of an iterative, non-linear electronic ground-state optimization prior to the force evaluations and without a systematic drift in the total energy. In combination with proposed low-rank and on the fly updates of the kernel, this formulation provides an efficient and general framework for quantum-based Born-Oppenheimer molecular dynamics simulations.

  15. A Reduced Order Model of the Linearized Incompressible Navier-Stokes Equations for the Sensor/Actuator Placement Problem

    NASA Technical Reports Server (NTRS)

    Allan, Brian G.

    2000-01-01

    A reduced order modeling approach of the Navier-Stokes equations is presented for the design of a distributed optimal feedback kernel. This approach is based on a Krylov subspace method where significant modes of the flow are captured in the model. This model is then used in an optimal feedback control design where sensing and actuation is performed on the entire flow field. This control design approach yields an optimal feedback kernel which provides insight into the placement of sensors and actuators in the flow field. As an evaluation of this approach, a two-dimensional shear layer and driven cavity flow are investigated.

  16. A systematic approach to sketch Bethe-Salpeter equation

    NASA Astrophysics Data System (ADS)

    Qin, Si-xue

    2016-03-01

    To study meson properties, one needs to solve the gap equation for the quark propagator and the Bethe-Salpeter (BS) equation for the meson wavefunction, self-consistently. The gluon propagator, the quark-gluon vertex, and the quark-anti-quark scattering kernel are key pieces to solve those equations. Predicted by lattice-QCD and Dyson-Schwinger analyses of QCD's gauge sector, gluons are non-perturbatively massive. In the matter sector, the modeled gluon propagator which can produce a veracious description of meson properties needs to possess a mass scale, accordingly. Solving the well-known longitudinal Ward-Green-Takahashi identities (WGTIs) and the less-known transverse counterparts together, one obtains a nontrivial solution which can shed light on the structure of the quark-gluon vertex. It is highlighted that the phenomenologically proposed anomalous chromomagnetic moment (ACM) vertex originates from the QCD Lagrangian symmetries and its strength is proportional to the magnitude of dynamical chiral symmetry breaking (DCSB). The color-singlet vector and axial-vector WGTIs can relate the BS kernel and the dressed quark-gluon vertex to each other. Using the relation, one can truncate the gap equation and the BS equation, systematically, without violating crucial symmetries, e.g., gauge symmetry and chiral symmetry.

  17. Semiclassical dynamics of spin density waves

    NASA Astrophysics Data System (ADS)

    Chern, Gia-Wei; Barros, Kipton; Wang, Zhentao; Suwa, Hidemaro; Batista, Cristian D.

    2018-01-01

    We present a theoretical framework for equilibrium and nonequilibrium dynamical simulation of quantum states with spin-density-wave (SDW) order. Within a semiclassical adiabatic approximation that retains electron degrees of freedom, we demonstrate that the SDW order parameter obeys a generalized Landau-Lifshitz equation. With the aid of an enhanced kernel polynomial method, our linear-scaling quantum Landau-Lifshitz dynamics (QLLD) method enables dynamical SDW simulations with N ≃ 10^5 lattice sites. Our real-space formulation can be used to compute dynamical responses, such as the dynamical structure factor, of complex and even inhomogeneous SDW configurations at zero or finite temperatures. Applying the QLLD to study the relaxation of a noncoplanar topological SDW under the excitation of a short pulse, we further demonstrate the crucial role of spatial correlations and fluctuations in the SDW dynamics.

  18. Complete description of all self-similar models driven by Lévy stable noise

    NASA Astrophysics Data System (ADS)

    Weron, Aleksander; Burnecki, Krzysztof; Mercik, Szymon; Weron, Karina

    2005-01-01

    A canonical decomposition of H-self-similar Lévy symmetric α-stable processes is presented. The resulting components, completely described by deterministic kernels and the corresponding stochastic integrals with respect to Lévy symmetric α-stable motion, are shown to be related to the dissipative and conservative parts of the dynamics. This result provides stochastic analysis tools for studying anomalous diffusion phenomena in the Langevin equation framework. For example, a simple computer test of the origins of self-similarity is implemented for four real empirical time series recorded from different physical systems: ionic current flow through a single channel in a biological membrane, the energy of solar flares, a seismic electric signal recorded during seismic Earth activity, and foreign exchange rate daily returns.

  19. Automatically detect and track infrared small targets with kernel Fukunaga-Koontz transform and Kalman prediction.

    PubMed

    Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan

    2007-11-01

    Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical properties of natural images, so its detection performance is unsatisfactory. PCA has been extended into kernel PCA in order to capture higher-order statistics. However, thus far no one has explicitly proposed kernel FKT (KFKT) or studied its detection performance. For accurately detecting potential small targets from infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Experimental results show that KFKT outperforms FKT and that the proposed framework is competent to automatically detect and track infrared point targets.

  20. Automatically detect and track infrared small targets with kernel Fukunaga-Koontz transform and Kalman prediction

    NASA Astrophysics Data System (ADS)

    Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan

    2007-11-01

    Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical properties of natural images, so its detection performance is unsatisfactory. PCA has been extended into kernel PCA in order to capture higher-order statistics. However, thus far no one has explicitly proposed kernel FKT (KFKT) or studied its detection performance. For accurately detecting potential small targets from infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Experimental results show that KFKT outperforms FKT and that the proposed framework is competent to automatically detect and track infrared point targets.

  1. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as those for Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in a Field Programmable Gate Array (FPGA), as opposed to using a software implementation running on a typical general purpose processor.

  2. Propagation phenomena in monostable integro-differential equations: Acceleration or not?

    NASA Astrophysics Data System (ADS)

    Alfaro, Matthieu; Coville, Jérôme

    2017-11-01

    We consider the homogeneous integro-differential equation ∂_t u = J * u − u + f(u) with a monostable nonlinearity f. Our interest is twofold: we investigate the existence/nonexistence of travelling waves, and the propagation properties of the Cauchy problem. When the dispersion kernel J is exponentially bounded, travelling waves are known to exist and solutions of the Cauchy problem typically propagate at a constant speed [7,10,11,22,26,27]. On the other hand, when the dispersion kernel J has heavy tails and the nonlinearity f is nondegenerate, i.e. f′(0) > 0, travelling waves do not exist and solutions of the Cauchy problem propagate by accelerating [14,20,27]. For a general monostable nonlinearity, a dichotomy between these two types of propagation behaviour is still not known. The originality of our work is to provide such a dichotomy by studying the interplay between the tails of the dispersion kernel and the Allee effect induced by the degeneracy of f, i.e. f′(0) = 0. First, for algebraically decaying kernels, we prove the exact separation between existence and nonexistence of travelling waves. This in turn provides the exact separation between nonacceleration and acceleration in the Cauchy problem. In the latter case, we provide a first estimate of the position of the level sets of the solution.

  3. The crack problem in a reinforced cylindrical shell

    NASA Technical Reports Server (NTRS)

    Yahsi, O. S.; Erdogan, F.

    1986-01-01

    In this paper a partially reinforced cylinder containing an axial through crack is considered. The reinforcement is assumed to be fully bonded to the main cylinder. The composite cylinder is thus modelled by a nonhomogeneous shell having a step change in the elastic properties at the z=0 plane, z being the axial coordinate. Using a Reissner type transverse shear theory the problem is reduced to a pair of singular integral equations. In the special case of a crack tip touching the bimaterial interface it is shown that the dominant parts of the kernels of the integral equations associated with both membrane loading and bending of the shell reduce to the generalized Cauchy kernel obtained for the corresponding plane stress case. The integral equations are solved and the stress intensity factors are given for various crack and shell dimensions. A bonded fiberglass reinforcement which may serve as a crack arrestor is used as an example.

  4. The crack problem in a reinforced cylindrical shell

    NASA Technical Reports Server (NTRS)

    Yahsi, O. S.; Erdogan, F.

    1986-01-01

    A partially reinforced cylinder containing an axial through crack is considered. The reinforcement is assumed to be fully bonded to the main cylinder. The composite cylinder is thus modelled by a nonhomogeneous shell having a step change in the elastic properties at the z = 0 plane, z being the axial coordinate. Using a Reissner type transverse shear theory the problem is reduced to a pair of singular integral equations. In the special case of a crack tip touching the bimaterial interface it is shown that the dominant parts of the kernels of the integral equations associated with both membrane loading and bending of the shell reduce to the generalized Cauchy kernel obtained for the corresponding plane stress case. The integral equations are solved and the stress intensity factors are given for various crack and shell dimensions. A bonded fiberglass reinforcement which may serve as a crack arrestor is used as an example.

  5. Unsupervised multiple kernel learning for heterogeneous data integration.

    PubMed

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has allowed important insights to be gained in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, and generic methods are needed to take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA, as well as to provide a new image of the sample structures when a larger number of datasets is included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with regard to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method in improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package, and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
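
    The authors' implementation is the R package mixKernel; the Python sketch below only illustrates the consensus meta-kernel idea on which it rests: cosine-normalize each input kernel, average them, and run a kernel PCA on the result. The function names here are hypothetical and the sketch is not the package's API.

    ```python
    import numpy as np

    def cosine_normalize(K):
        d = np.sqrt(np.diag(K))
        return K / np.outer(d, d)

    def consensus_meta_kernel(kernels):
        # Average of the normalized input kernels (one per omics dataset).
        return np.mean([cosine_normalize(K) for K in kernels], axis=0)

    def kernel_pca(K, n_components=2):
        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n         # double-centering matrix
        vals, vecs = np.linalg.eigh(J @ K @ J)
        idx = np.argsort(vals)[::-1][:n_components]
        return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
    ```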

  6. Inference of Spatio-Temporal Functions Over Graphs via Multikernel Kriged Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Ioannidis, Vassilis N.; Romero, Daniel; Giannakis, Georgios B.

    2018-06-01

    Inference of space-time varying signals on graphs emerges naturally in a plethora of network science related applications. A frequently encountered challenge pertains to reconstructing such dynamic processes, given their values over a subset of vertices and time instants. The present paper develops a graph-aware kernel-based kriged Kalman filter that accounts for the spatio-temporal variations, and offers efficient online reconstruction, even for dynamically evolving network topologies. The kernel-based learning framework bypasses the need for statistical information by capitalizing on the smoothness that graph signals exhibit with respect to the underlying graph. To address the challenge of selecting the appropriate kernel, the proposed filter is combined with a multi-kernel selection module. Such a data-driven method selects a kernel attuned to the signal dynamics on-the-fly within the linear span of a pre-selected dictionary. The novel multi-kernel learning algorithm exploits the eigenstructure of Laplacian kernel matrices to reduce computational complexity. Numerical tests with synthetic and real data demonstrate the superior reconstruction performance of the novel approach relative to state-of-the-art alternatives.
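
    As a small illustration of the kernel dictionaries such a module selects among, the sketch below builds a diffusion kernel from the eigenstructure of a graph Laplacian; the spectral map exp(-σλ) is one common choice and is an assumption here, not the paper's specific dictionary.

    ```python
    import numpy as np

    def diffusion_kernel(W, sigma=1.0):
        # W: symmetric adjacency matrix; L: combinatorial graph Laplacian.
        L = np.diag(W.sum(axis=1)) - W
        lam, U = np.linalg.eigh(L)
        return (U * np.exp(-sigma * lam)) @ U.T     # K = U exp(-sigma * Lambda) U^T
    ```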

  7. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    NASA Astrophysics Data System (ADS)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with the RBF kernel for three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to the linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
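
    A minimal version of this comparison protocol can be expressed with scikit-learn, as sketched below; the stand-in random arrays replace the polarimetric features and crop labels, and the hyperparameter values are assumptions.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                   # stand-in polarimetric features
    y = rng.integers(0, 3, size=200)                # stand-in crop labels

    for kernel, params in [("linear", {}),
                           ("poly", {"degree": 3}),
                           ("rbf", {"gamma": "scale"})]:
        clf = SVC(kernel=kernel, C=10.0, **params)
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{kernel:6s} overall accuracy: {acc:.3f}")
    ```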

  8. Multiscale asymmetric orthogonal wavelet kernel for linear programming support vector learning and nonlinear dynamic systems identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2014-05-01

    Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.

  9. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    PubMed

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  10. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power when compared to the best kernel for a particular scenario, but has much greater power than poor choices.
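
    The kernel view described above is easy to make concrete: the sketch below computes a weighted-linear genotype kernel and the SKAT score-type statistic Q. The weights, the null-model residuals, and the omitted p-value machinery are simplifications, and MK-SKAT additionally perturbs across a family of such kernels.

    ```python
    import numpy as np

    def weighted_linear_kernel(G, w):
        # G: n x m rare-variant genotype matrix (0/1/2); w: m variant weights.
        Gw = G * w[None, :]
        return Gw @ Gw.T

    def skat_Q(y, mu, K):
        # Score-type statistic: similarity in traits vs. similarity in genotypes.
        r = y - mu                                  # residuals under the null model
        return float(r @ K @ r)
    ```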

  11. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    PubMed Central

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300

  12. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    PubMed

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.

  13. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the kernel eigenvectors by importance in terms of entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. Both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
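
    The selection rule that distinguishes KECA from kernel PCA, and that OKECA refines, is brief enough to sketch: eigenpairs of the kernel matrix are ranked by their contribution to the Renyi entropy estimate V = (1/n^2) 1^T K 1 = Σ_i λ_i (e_i^T 1)^2 / n^2 rather than by variance. The RBF kernel and its width below are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def keca_components(X, n_components=2, gamma=0.5):
        K = rbf_kernel(X, X, gamma=gamma)
        lam, E = np.linalg.eigh(K)
        ones = np.ones(len(X))
        contrib = lam * (E.T @ ones) ** 2           # entropy contribution per eigenpair
        idx = np.argsort(contrib)[::-1][:n_components]
        return E[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))
    ```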

  14. Mathematical theory of exchange-driven growth

    NASA Astrophysics Data System (ADS)

    Esenturk, Emre

    2018-07-01

    Exchange-driven growth is a process in which pairs of clusters interact by exchanging a single unit of mass at a time. The rate of exchange is given by an interaction kernel which depends on the masses of the two interacting clusters. In this paper we establish the fundamental mathematical properties of the mean field rate equations of this process for the first time. We find two different classes of behavior depending on whether the kernel is symmetric or not. For the non-symmetric case, we prove global existence and uniqueness of solutions for kernels satisfying a suitable growth bound. This result is optimal in the sense that we show, for a large class of initial conditions and kernels growing faster than that bound, that solutions cannot exist. On the other hand, for symmetric kernels, we prove global existence of solutions under a weaker growth condition, while existence is lost for faster-growing kernels. In the intermediate regime we can only show local existence. We conjecture that the intermediate regime exhibits finite-time gelation, in accordance with the heuristic results obtained for particular kernels.

  15. An algorithm of improving speech emotional perception for hearing aid

    NASA Astrophysics Data System (ADS)

    Xi, Ji; Liang, Ruiyu; Fei, Xianju

    2017-07-01

    In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm utilizes multiple kernel technology to overcome a drawback of the SVM: slow training speed. Firstly, in order to improve the adaptive performance of the Gaussian Radial Basis Function (RBF) kernel, the parameter determining the nonlinear mapping was optimized on the basis of kernel target alignment. Then, the obtained kernel function was used as the basis kernel of Multiple Kernel Learning (MKL) with a slack variable that can solve the over-fitting problem. However, the slack variable also introduces error into the result. Therefore, a soft-margin MKL was proposed to balance the margin against the error. Moreover, an iterative algorithm was used to solve for the combination coefficients and hyper-plane equations. Experimental results show that the proposed algorithm can achieve an accuracy of 90% for five kinds of emotions, including happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
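
    Kernel target alignment, used above to tune the RBF parameter, can be sketched in a few lines: the kernel matrix is compared with the ideal label kernel through a normalized Frobenius inner product. The candidate width grid is an assumption.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def target_alignment(K, y):
        Y = np.where(y[:, None] == y[None, :], 1.0, -1.0)   # ideal label kernel
        return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

    def tune_gamma(X, y, grid=(0.01, 0.1, 1.0, 10.0)):
        # Pick the RBF width whose kernel best aligns with the labels.
        return max(grid, key=lambda g: target_alignment(rbf_kernel(X, X, gamma=g), y))
    ```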

  16. Adaptive learning in complex reproducing kernel Hilbert spaces employing Wirtinger's subgradients.

    PubMed

    Bouboulis, Pantelis; Slavakis, Konstantinos; Theodoridis, Sergios

    2012-03-01

    This paper presents a wide framework for non-linear online supervised learning tasks in the context of complex-valued signal processing. The (complex) input data are mapped into a complex reproducing kernel Hilbert space (RKHS), where the learning phase takes place. Both pure complex kernels and real kernels (via the complexification trick) can be employed. Moreover, any convex, continuous and not necessarily differentiable function can be used to measure the loss between the output of the specific system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in an analytic form. In order to derive the subgradients analytically, the principles of the (recently developed) Wirtinger's calculus in complex RKHS are exploited. Furthermore, both linear and widely linear (in RKHS) estimation filters are considered. To cope with the problem of increasing memory requirements, which is present in almost all online schemes in RKHS, a sparsification scheme based on projection onto closed balls has been adopted. We demonstrate the effectiveness of the proposed framework in a non-linear channel identification task, a non-linear channel equalization problem and a quadrature phase shift keying equalization scheme, using both circular and non-circular synthetic signal sources.

  17. Eigenfunctions and heat kernels of super Maass Laplacians on the super Poincaré upper half-plane

    NASA Astrophysics Data System (ADS)

    Oshima, Kazuto

    1992-03-01

    Heat kernels of ``super Maass Laplacians'' are explicitly constructed on super Poincaré upper half-plane by a serious treatment of a complete set of eigenfunctions. By component decomposition an explicit treatment can be done for arbitrary weight and a knowledge of classical Maass Laplacians becomes helpful. The result coincides with that of Aoki [Commun. Math. Phys. 117, 405 (1988)] which was obtained by solving differential equations.

  18. Generalized multiple kernel learning with data-dependent priors.

    PubMed

    Mao, Qi; Tsang, Ivor W; Gao, Shenghua; Wang, Li

    2015-06-01

    Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.

  19. Mathematical inference in one point microrheology

    NASA Astrophysics Data System (ADS)

    Hohenegger, Christel; McKinley, Scott

    2016-11-01

    Pioneered by the work of Mason and Weitz, one point passive microrheology has been successfully applied to obtaining estimates of the loss and storage moduli of viscoelastic fluids when the mean-square displacement obeys a local power law. Using numerical simulations of a fluctuating viscoelastic fluid model, we study the problem of recovering the mechanical parameters of the fluid's memory kernel using statistics such as mean-square displacements and increment auto-correlation functions. Seeking a better understanding of the influence of the assumptions made in the inversion process, we mathematically quantify the uncertainty in traditional one point microrheology for simulated data and demonstrate that a large family of memory kernels yields the same statistical signature. We consider simulated data obtained both from a full viscoelastic fluid simulation of the unsteady Stokes equations with fluctuations and from a Generalized Langevin Equation for the particle's motion described by the same memory kernel. From the theory of inverse problems, we propose an alternative method that can be used to recover information about the loss and storage moduli, and we discuss its limitations and uncertainties. NSF-DMS 1412998.

  20. A new discrete dipole kernel for quantitative susceptibility mapping.

    PubMed

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel has a straightforward implementation in existing QSM routines.
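
    For orientation, the sketch below builds the standard continuous-Fourier dipole kernel that the proposed discrete formulation is designed to replace; the grid, the voxel size, and the main-field direction (z) are assumptions, and the discrete-operator variant itself is not reproduced here.

    ```python
    import numpy as np

    def dipole_kernel_continuous(shape, voxel=(1.0, 1.0, 1.0)):
        k = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel)]
        kx, ky, kz = np.meshgrid(*k, indexing="ij")
        k2 = kx ** 2 + ky ** 2 + kz ** 2
        with np.errstate(divide="ignore", invalid="ignore"):
            D = 1.0 / 3.0 - kz ** 2 / k2            # D(k) = 1/3 - k_z^2 / |k|^2
        D[0, 0, 0] = 0.0                            # convention at the k = 0 singularity
        return D
    ```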

  1. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations.

    PubMed

    Liu, Zhengbin; Garcia, Arturo; McMullen, Michael D; Flint-Garcia, Sherry A

    2016-08-09

    Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight, as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits.

  2. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    PubMed Central

    Liu, Zhengbin; Garcia, Arturo; McMullen, Michael D.; Flint-Garcia, Sherry A.

    2016-01-01

    Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight, as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits. PMID:27317774

  3. Searching Remote Homology with Spectral Clustering with Symmetry in Neighborhood Cluster Kernels

    PubMed Central

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles, and the Markov clustering algorithms, are currently the most popular methods for protein family recognition. The deviation from random walks caused by inflation, and the dependency on a hard threshold in the similarity measure, mean that those methods require an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity to enhance sensitivity in detecting homology independent of “recent” paralogs. The spectral clustering approach with the new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm that deemphasizes outliers, the technique proposed in this article outperforms the other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. A similar performance improvement is also found on an existing large dataset. Therefore, the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request. Contact: sarkar@labri.fr. PMID:23457439

  4. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    PubMed

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles, and the Markov clustering algorithms, are currently the most popular methods for protein family recognition. The deviation from random walks caused by inflation, and the dependency on a hard threshold in the similarity measure, mean that those methods require an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity to enhance sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with the new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm that deemphasizes outliers, the technique proposed in this article outperforms the other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. A similar performance improvement is also found on an existing large dataset. Therefore, the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request. sarkar@labri.fr.

  5. Exact RG flow equations and quantum gravity

    NASA Astrophysics Data System (ADS)

    de Alwis, S. P.

    2018-03-01

    We discuss the different forms of the functional RG equation and their relation to each other. In particular we suggest a generalized background field version that is close in spirit to the Polchinski equation as an alternative to the Wetterich equation to study Weinberg's asymptotic safety program for defining quantum gravity, and argue that the former is better suited for this purpose. Using the heat kernel expansion and proper time regularization we find evidence in support of this program in agreement with previous work.

  6. Four-electron model for singlet and triplet excitation energy transfers with inclusion of coherence memory, inelastic tunneling and nuclear quantum effects

    NASA Astrophysics Data System (ADS)

    Suzuki, Yosuke; Ebina, Kuniyoshi; Tanaka, Shigenori

    2016-08-01

    A computational scheme to describe the coherent dynamics of excitation energy transfer (EET) in molecular systems is proposed on the basis of generalized master equations with memory kernels. This formalism takes into account those physical effects in electron-bath coupling system such as the spin symmetry of excitons, the inelastic electron tunneling and the quantum features of nuclear motions, thus providing a theoretical framework to perform an ab initio description of EET through molecular simulations for evaluating the spectral density and the temporal correlation function of electronic coupling. Some test calculations have then been carried out to investigate the dependence of exciton population dynamics on coherence memory, inelastic tunneling correlation time, magnitude of electronic coupling, quantum correction to temporal correlation function, reorganization energy and energy gap.

  7. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  8. Improved Online Support Vector Machines Spam Filtering Using String Kernels

    NASA Astrophysics Data System (ADS)

    Amayri, Ola; Bouguila, Nizar

    A major bottleneck in electronic communications is the enormous dissemination of spam emails. Developing suitable filters that can adequately capture those emails and achieve a high performance rate has become a main concern. Support vector machines (SVMs) have made a large contribution to the development of spam email filtering. For SVM-based approaches, the crucial problems in email classification are the feature mapping of input emails and the choice of the kernels. In this paper, we present a thorough investigation of several distance-based kernels, propose the use of string kernels, and demonstrate their efficiency in blocking spam emails. We detail feature mapping variants in text classification (TC) that yield improved performance for standard SVMs in the filtering task. Furthermore, to cope with real-time scenarios, we propose an online active framework for spam filtering.
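
    As an illustration of the string kernels investigated above, the sketch below computes a simple p-spectrum kernel, which compares two texts through counts of shared length-p substrings; the toy strings are assumptions, and a production filter would combine such kernel evaluations with an SVM.

    ```python
    from collections import Counter

    def spectrum_kernel(s, t, p=3):
        # Inner product of the length-p substring count vectors of s and t.
        cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
        ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
        return sum(cs[g] * ct[g] for g in cs.keys() & ct.keys())

    print(spectrum_kernel("free money now", "get free money"))  # shared 3-grams
    ```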

  9. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space

    PubMed Central

    Li, Kan; Príncipe, José C.

    2018-01-01

    This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS), using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural networks (SNN), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using a Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For the spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes. PMID:29666568

  11. Frozen Gaussian approximation for 3D seismic tomography

    NASA Astrophysics Data System (ADS)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenient application of FGA; with this reformulation, one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method based on local fast Fourier transforms is proposed, which greatly improves the speed of reconstruction in the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.
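
    The wavefield-synthesis step mentioned above (convolving a Green's function with the source time function) can be sketched with an FFT-based convolution; the Green's function trace and Ricker source below are toy placeholders, and this is not the FGA propagator itself.

    ```python
    # Convolving a (precomputed) Green's function trace with a source time
    # function via FFT. Minimal sketch; the Green's function is a toy stand-in.
    import numpy as np

    dt, n = 1e-3, 2048
    t = np.arange(n) * dt
    # Toy Green's function: a smooth arrival near t = 0.3 s (placeholder).
    green = np.exp(-200 * (t - 0.3) ** 2)
    # Ricker wavelet as the source time function, peak frequency 25 Hz.
    f0, t0 = 25.0, 0.08
    arg = (np.pi * f0 * (t - t0)) ** 2
    source = (1 - 2 * arg) * np.exp(-arg)

    # Frequency-domain multiplication == zero-padded time-domain convolution.
    N = 2 * n
    wavefield = np.fft.irfft(np.fft.rfft(green, N) * np.fft.rfft(source, N), N)[:n] * dt
    print(wavefield.shape)
    ```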

  12. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To speed up the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
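
    For readers who want to experiment with the finite-volume stochastic picture, the sketch below runs a direct Gillespie simulation of the Marcus-Lushnikov coalescence process with a constant kernel and monodisperse initial condition. It is a minimal illustration, not the authors' master-equation solver, and all parameter values are arbitrary.

    ```python
    # Direct Gillespie simulation of stochastic coalescence with a constant
    # kernel (Marcus-Lushnikov process). Minimal sketch with toy parameters.
    import numpy as np

    rng = np.random.default_rng(0)

    def coalesce_constant_kernel(n0=1000, K0=1.0, V=1.0, t_end=1.0):
        masses = [1] * n0          # monodisperse initial condition
        t = 0.0
        while len(masses) > 1:
            n = len(masses)
            rate = K0 * n * (n - 1) / (2 * V)   # total coalescence rate
            t += rng.exponential(1.0 / rate)    # waiting time to next event
            if t > t_end:
                break
            i, j = rng.choice(n, size=2, replace=False)
            masses[i] += masses[j]              # merge the chosen pair
            masses.pop(j)
        return masses

    spectrum = coalesce_constant_kernel()
    print(len(spectrum), "particles remain; largest mass:", max(spectrum))
    ```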

  13. Semi-analytical solution for the generalized absorbing boundary condition in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Shuo; Chen, Yan-Yu; Yu, Chi-Hua; Hsu, Yu-Chuan; Chen, Chuin-Shan

    2017-07-01

    We present a semi-analytical solution of a time-history kernel for the generalized absorbing boundary condition in molecular dynamics (MD) simulations. To facilitate the kernel derivation, the concept of virtual atoms in real space that can conform with an arbitrary boundary in an arbitrary lattice is adopted. The generalized Langevin equation is regularized using eigenvalue decomposition and, consequently, an analytical expression of an inverse Laplace transform is obtained. With construction of dynamical matrices in the virtual domain, a semi-analytical form of the time-history kernel functions for an arbitrary boundary in an arbitrary lattice can be found. The time-history kernel functions for different crystal lattices are derived to show the generality of the proposed method. Non-equilibrium MD simulations in a triangular lattice with and without the absorbing boundary condition are conducted to demonstrate the validity of the solution.

  14. A Linear Kernel for Co-Path/Cycle Packing

    NASA Astrophysics Data System (ADS)

    Chen, Zhi-Zhong; Fellows, Michael; Fu, Bin; Jiang, Haitao; Liu, Yang; Wang, Lusheng; Zhu, Binhai

    Bounded-Degree Vertex Deletion is a fundamental problem in graph theory that has new applications in computational biology. In this paper, we address a special case of Bounded-Degree Vertex Deletion, the Co-Path/Cycle Packing problem, which asks to delete as few vertices as possible such that the graph of the remaining (residual) vertices is composed of disjoint paths and simple cycles. The problem falls into the well-known class of 'node-deletion problems with hereditary properties', is hence NP-complete and unlikely to admit a polynomial time approximation algorithm with approximation factor smaller than 2. In the framework of parameterized complexity, we present a kernelization algorithm that produces a kernel with at most 37k vertices, improving on the super-linear kernel of Fellows et al.'s general theorem for Bounded-Degree Vertex Deletion. Using this kernel, and the method of bounded search trees, we devise an FPT algorithm that runs in time O*(3.24^k). On the negative side, we show that the problem is APX-hard and unlikely to have a kernel smaller than 2k by a reduction from Vertex Cover.

  15. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional of the second-order derivative of the pdf, ∫ f''(x)² dx. As we intend to introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithms are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
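
    To make the bandwidth formula concrete, the sketch below evaluates the AMISE-optimal bandwidth with the Gaussian-reference value of ∫ f''(x)² dx (Silverman's rule of thumb), which is the usual starting point of the plug-in recursion; the full plug-in method would replace that reference value with a kernel estimate.

    ```python
    # Normal-reference start of the plug-in bandwidth recursion: h minimizes
    # the AMISE, which depends on R(f'') = integral of f''(x)^2. Here R(f'')
    # is evaluated under a Gaussian reference (Silverman's rule). Sketch only.
    import numpy as np

    def kde_gaussian(x_grid, data, h):
        """Gaussian kernel density estimate on x_grid."""
        u = (x_grid[:, None] - data[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(1)
    data = rng.normal(0, 1, 500)
    n, sigma = len(data), data.std(ddof=1)

    # AMISE-optimal h with R(K) = 1/(2*sqrt(pi)), mu2(K) = 1 (Gaussian kernel)
    # and R(f'') = 3/(8*sqrt(pi)*sigma^5) (Gaussian reference):
    R_f2 = 3 / (8 * np.sqrt(np.pi) * sigma**5)
    h = (1 / (2 * np.sqrt(np.pi) * R_f2 * n)) ** 0.2   # = 1.06*sigma*n**(-1/5)
    x = np.linspace(-4, 4, 200)
    print("h =", round(h, 4), "; density near 0 =", round(kde_gaussian(x, data, h)[100], 3))
    ```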

  16. Dynamic extension of the Simulation Problem Analysis Kernel (SPANK)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, E.F.; Buhl, W.F.

    1988-07-15

    The Simulation Problem Analysis Kernel (SPANK) is an object-oriented simulation environment for general simulation purposes. Among its unique features is the use of the directed graph, rather than the matrix, as the primary data structure. This allows straightforward use of graph algorithms for matching variables and equations and for reducing the problem graph for efficient numerical solution. The original prototype implementation demonstrated the principles for systems of algebraic equations, allowing simulation of steady-state, nonlinear systems (Sowell 1986). This paper describes how the same principles can be extended to include dynamic objects, allowing simulation of general dynamic systems. The theory is developed and an implementation is described. An example is taken from the field of building energy system simulation. 2 refs., 9 figs.
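
    The equation-variable matching step can be illustrated with a toy bipartite graph; the three equations below are hypothetical, and networkx's Hopcroft-Karp matching stands in for whatever graph algorithm SPANK actually uses.

    ```python
    # Matching equations to the variables they can solve for: the graph step
    # performed before numerical solution. Toy system, networkx Hopcroft-Karp.
    import networkx as nx

    # Incidence: which variables appear in which equation (hypothetical).
    incidence = {
        "eq1: x + y = 3": ["x", "y"],
        "eq2: y - z = 1": ["y", "z"],
        "eq3: z = 2":     ["z"],
    }

    G = nx.Graph()
    G.add_nodes_from(incidence, bipartite=0)      # equation nodes
    for eq, variables in incidence.items():
        for v in variables:
            G.add_edge(eq, v)                      # variable nodes

    matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=incidence)
    for eq in incidence:
        print(f"{eq!r} is matched to variable {matching[eq]!r}")
    ```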

  17. Conformable derivative approach to anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Yang, S.; Zhang, S. Q.

    2018-02-01

    By using a new derivative of fractional order, referred to as the conformable derivative, an alternative representation of the diffusion equation is proposed to improve the modeling of anomalous diffusion. The analytical solutions of the conformable derivative model in terms of the Gauss kernel and the error function are presented. The power law of the mean square displacement for the conformable diffusion model is studied by invoking the time-dependent Gauss kernel. The parameters of the conformable derivative model are determined by the Levenberg-Marquardt method on the basis of experimental data on chloride ion transport in reinforced concrete. The data-fitting results show that the conformable derivative model agrees with the experimental data better than the normal diffusion equation. Furthermore, the potential application of the proposed conformable derivative model to water flow in low-permeability media is discussed.
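
    A minimal version of such a calibration might look as follows, assuming an erfc-type solution in which the conformable derivative rescales time as t^α/α; the data are synthetic stand-ins for the chloride measurements, and scipy's curve_fit uses Levenberg-Marquardt for this unconstrained problem.

    ```python
    # Levenberg-Marquardt fit of an erfc-type concentration profile.
    # Assumption: the conformable-derivative solution rescales time as
    # t**alpha / alpha. Data below are synthetic, not the paper's.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    t_obs = 1.0  # exposure time (arbitrary units)

    def profile(x, c0, D, alpha):
        tau = t_obs**alpha / alpha            # conformable time scale
        return c0 * erfc(x / (2 * np.sqrt(D * tau)))

    x = np.linspace(0, 5, 40)
    rng = np.random.default_rng(2)
    data = profile(x, 1.0, 0.8, 0.7) + 0.01 * rng.normal(size=x.size)

    popt, _ = curve_fit(profile, x, data, p0=[1.0, 1.0, 1.0])  # LM by default
    print("c0, D, alpha =", np.round(popt, 3))
    ```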

  18. Efficient solution of the Wigner-Liouville equation using a spectral decomposition of the force field

    NASA Astrophysics Data System (ADS)

    Van de Put, Maarten L.; Sorée, Bart; Magnus, Wim

    2017-12-01

    The Wigner-Liouville equation is reformulated using a spectral decomposition of the classical force field instead of the potential energy. This reformulation is shown to simplify the Wigner-Liouville kernel both conceptually and numerically, as the spectral force Wigner-Liouville equation avoids the numerical evaluation of the highly oscillatory Wigner kernel, which is nonlocal in both position and momentum. The quantum mechanical evolution is instead governed by a term local in space and nonlocal in momentum, where the nonlocality in momentum has only a limited range. An interpretation of the time evolution in terms of two processes is presented: a classical evolution under the influence of the averaged driving field, and a probability-preserving quantum-mechanical generation and annihilation term. Using the inherent stability and reduced complexity, a direct deterministic numerical implementation using Chebyshev and Fourier pseudo-spectral methods is detailed. For the purpose of illustration, we present results for the time evolution of a one-dimensional resonant tunneling diode driven out of equilibrium.
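
    As a taste of the pseudo-spectral machinery used in the solver, the sketch below differentiates a smooth periodic function by multiplying its Fourier coefficients by ik; the test function is arbitrary.

    ```python
    # Fourier pseudo-spectral differentiation on a periodic grid (sketch).
    import numpy as np

    n, L = 128, 2 * np.pi
    x = np.arange(n) * L / n
    f = np.exp(np.sin(x))                        # smooth periodic test function

    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    df = np.fft.ifft(1j * k * np.fft.fft(f)).real

    exact = np.cos(x) * f                        # d/dx exp(sin x)
    print("max error:", np.abs(df - exact).max())   # spectrally small
    ```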

  19. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations targeting multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
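
    A minimal single-field analogue of the statistical-fusion step, using a Gaussian process (simple kriging) rather than full coKriging to reconstruct a gappy 1-D field; the synthetic field, RBF kernel, and fault pattern are invented for illustration.

    ```python
    # Gaussian-process reconstruction of a gappy field: a kriging analogue of
    # the coKriging step, e.g. for inferring patch boundary conditions where
    # samples are missing. Synthetic 1-D sketch.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)
    x_full = np.linspace(0, 1, 200)[:, None]
    field = np.sin(6 * np.pi * x_full).ravel()        # "true" simulated field

    keep = rng.random(200) > 0.7                       # ~30% survive the "fault"
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1))
    gp.fit(x_full[keep], field[keep])

    recon, std = gp.predict(x_full, return_std=True)
    print("max reconstruction error:", np.abs(recon - field).max())
    ```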

  20. Adaptive Multilevel Middleware for Object Systems

    DTIC Science & Technology

    2006-12-01

    the system at the system-call level or using the CORBA-standard Extensible Transport Framework (ETF). Transparent insertion is highly desirable from an...often as it needs to. This is remedied by using the real-time scheduling class in a stock Linux kernel. We used the sched_setscheduler system call (with...real-time scheduling class (SCHED_FIFO) for all the ML-NFD programs; later experiments with CPU load indicate that a stock Linux kernel is not

  1. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  2. Anytime query-tuned kernel machine classifiers via Cholesky factorization

    NASA Technical Reports Server (NTRS)

    DeCoste, D.

    2002-01-01

    We recently demonstrated 2 to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste, 2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
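
    The Cholesky-based linear algebra underlying such speedups can be sketched as follows: factor the (regularized) kernel Gram matrix once, then reuse the factor for cheap solves. The RBF kernel, jitter term, and toy data are assumptions; the anytime output-bound logic of the paper is not reproduced here.

    ```python
    # Solving a regularized kernel system via Cholesky factorization, the
    # linear-algebra backbone of the speedups described above. Sketch only.
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(4)
    X, y = rng.normal(size=(50, 3)), rng.choice([-1.0, 1.0], size=50)

    def rbf_gram(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K = rbf_gram(X, X) + 1e-6 * np.eye(50)   # jitter keeps K positive definite
    c, low = cho_factor(K)                   # one O(n^3) factorization...
    alpha = cho_solve((c, low), y)           # ...then cheap O(n^2) solves

    x_query = rng.normal(size=(1, 3))
    print("f(query) =", rbf_gram(x_query, X) @ alpha)
    ```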

  3. Kinetic Rate Kernels via Hierarchical Liouville-Space Projection Operator Approach.

    PubMed

    Zhang, Hou-Dao; Yan, YiJing

    2016-05-19

    Kinetic rate kernels in general multisite systems are formulated on the basis of a nonperturbative quantum dissipation theory, the hierarchical equations of motion (HEOM) formalism, together with the Nakajima-Zwanzig projection operator technique. The present approach exploits the HEOM-space linear algebra. The quantum non-Markovian site-to-site transfer rate can be faithfully evaluated via projected HEOM dynamics. The developed method is exact, as evidenced by comparison with direct HEOM evaluation results for the population evolution.

  4. Calculation of plasma dielectric response in inhomogeneous magnetic field near electron cyclotron resonance

    NASA Astrophysics Data System (ADS)

    Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei

    2014-10-01

    Full wave 3-D modeling of RF fields in a hot magnetized nonuniform plasma requires calculation of the nonlocal conductivity kernel describing the dielectric response of such a plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point, which significantly simplifies the numerical solution of the full wave 3-D problem. Preliminary results of a feasibility analysis of the numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to the modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along unperturbed particle orbits. Particle orbits in the nonuniform equilibrium magnetic field are calculated numerically by one of the Runge-Kutta methods. The RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals over the particle's initial velocity and time are then calculated numerically. Different optimization approaches to the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.
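
    The orbit-integration step can be sketched with scipy's RK45 applied to the Newton-Lorentz equation in a toy nonuniform field; the field model, normalized units, and initial conditions below are placeholders.

    ```python
    # Integrating a charged-particle orbit in a nonuniform magnetic field with
    # a Runge-Kutta method (scipy RK45). Toy field and normalized units.
    import numpy as np
    from scipy.integrate import solve_ivp

    q_m = -1.0  # charge-to-mass ratio in normalized units

    def B(r):
        """Toy nonuniform field: straight field with a linear gradient."""
        return np.array([0.0, 0.0, 1.0 + 0.1 * r[0]])

    def rhs(t, s):
        r, v = s[:3], s[3:]
        return np.concatenate([v, q_m * np.cross(v, B(r))])

    s0 = np.array([0.0, 0.0, 0.0, 0.1, 0.0, 0.01])   # initial position, velocity
    sol = solve_ivp(rhs, (0, 50), s0, method="RK45", rtol=1e-8, max_step=0.05)
    print("final position:", sol.y[:3, -1])          # grad-B drift shows along y
    ```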

  5. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H; Williams, Samuel; Datta, Kaushik

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
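
    The core auto-tuning loop, generating functionally identical kernel variants, timing each on the target machine, and keeping the fastest, can be caricatured in a few lines; the three stencil variants below are illustrative, not the generated SpMV/Stencil/LBMHD code of the study.

    ```python
    # The essence of auto-tuning: time several equivalent kernel variants on
    # the target machine and keep the fastest. Toy 1-D 3-point stencil.
    import time
    import numpy as np

    def stencil_loop(u):                    # naive Python loop
        out = np.empty_like(u)
        out[0], out[-1] = u[0], u[-1]
        for i in range(1, len(u) - 1):
            out[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1]
        return out

    def stencil_slices(u):                  # vectorized slicing
        out = u.copy()
        out[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
        return out

    def stencil_convolve(u):                # library convolution
        out = u.copy()
        out[1:-1] = np.convolve(u, [0.25, 0.5, 0.25], mode="valid")
        return out

    u = np.random.default_rng(5).normal(size=100_000)
    timings = {}
    for f in (stencil_loop, stencil_slices, stencil_convolve):
        t0 = time.perf_counter()
        f(u)
        timings[f.__name__] = time.perf_counter() - t0
    print("selected variant:", min(timings, key=timings.get), timings)
    ```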

  6. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under the variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) makes tumor boundaries in PET images blurred, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve the segmentation accuracy. Conversely, a correct localization of the object boundaries is helpful to estimate the blur kernel, and thus assists in the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to be the isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternating minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the Dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by the active contours without edges (DSI=0.81, VE=0.25), while other methods including the Graph Cut and the Mumford-Shah (MS) method have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to a high performance for tumor segmentation in PET. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
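
    To convey the interplay between the two subproblems, the sketch below alternates Richardson-Lucy deblurring with a crude threshold segmentation in place of the authors' variational energy; the image, PSF width, and threshold are synthetic assumptions.

    ```python
    # Alternating deconvolution and segmentation on a blurred object, using
    # Richardson-Lucy deblurring and a simple threshold instead of the paper's
    # energy functional. Pure numpy/scipy; image and PSF width are synthetic.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(6)
    truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0   # "tumor"
    blurred = gaussian_filter(truth, 3.0) + 0.01 * rng.random((64, 64))

    def richardson_lucy(img, sigma, n_iter=30):
        est = np.full_like(img, img.mean())
        for _ in range(n_iter):
            ratio = img / (gaussian_filter(est, sigma) + 1e-12)
            est *= gaussian_filter(ratio, sigma)   # Gaussian PSF is symmetric
        return est

    sigma = 2.0
    for _ in range(3):                             # alternate the two subproblems
        deblurred = richardson_lucy(blurred, sigma)
        mask = deblurred > 0.5 * deblurred.max()   # crude segmentation step
        # a full method would re-estimate sigma from the recovered edges here
    print("segmented area:", mask.sum(), "true area:", int(truth.sum()))
    ```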

  7. Gene function prediction with gene interaction networks: a context graph kernel approach.

    PubMed

    Li, Xin; Chen, Hsinchun; Li, Jiexun; Zhang, Zhu

    2010-01-01

    Predicting gene functions is a challenge for biologists in the postgenomic era. Interactions among genes and their products compose networks that can be used to infer gene functions. Most previous studies adopt a linkage assumption, i.e., they assume that gene interactions indicate functional similarities between connected genes. In this study, we propose to use a gene's context graph, i.e., the gene interaction network associated with the focal gene, to infer its functions. In a kernel-based machine-learning framework, we design a context graph kernel to capture the information in context graphs. Our experimental study on a testbed of p53-related genes demonstrates the advantage of using indirect gene interactions and shows the empirical superiority of the proposed approach over linkage-assumption-based methods, such as the algorithm to minimize inconsistent connected genes and diffusion kernels.
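
    One of the baselines named above, the diffusion kernel, is easy to compute for a small graph as the matrix exponential of the negative graph Laplacian; the karate-club graph below merely stands in for a gene-interaction network.

    ```python
    # Diffusion kernel on a graph: K = expm(-beta * L). Toy stand-in graph.
    import networkx as nx
    import numpy as np
    from scipy.linalg import expm

    G = nx.karate_club_graph()                 # stand-in interaction network
    L = nx.laplacian_matrix(G).toarray().astype(float)
    beta = 0.5
    K = expm(-beta * L)                        # positive semi-definite kernel

    # similarity between nodes 0 and 33 vs. node 0 with itself
    print(round(K[0, 33], 4), round(K[0, 0], 4))
    ```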

  8. Spectral-Element Simulations of Wave Propagation in Porous Media: Finite-Frequency Sensitivity Kernels Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Morency, C.; Tromp, J.

    2008-12-01

    The mathematical formulation of wave propagation in porous media developed by Biot is based upon the principle of virtual work, ignores processes at the microscopic level, and does not explicitly incorporate gradients in porosity. Based on recent studies focusing on averaging techniques, we derive the macroscopic porous medium equations from the microscale, with a particular emphasis on the effects of gradients in porosity. In doing so, we are able to naturally determine two key terms in the momentum equations and constitutive relationships that directly translate the coupling between the solid and fluid phases, namely a drag force and an interfacial strain tensor. In both terms, gradients in porosity arise. One remarkable result is that when we rewrite this set of equations in terms of the well-known Biot variables (u_s, w), terms involving gradients in porosity are naturally accommodated by gradients involving w, the fluid motion relative to the solid, and Biot's formulation is recovered, i.e., it remains valid in the presence of porosity gradients. We have developed a numerical implementation of the Biot equations for two-dimensional problems based upon the spectral-element method (SEM) in the time domain. The SEM is a high-order variational method, which has the advantage of accommodating complex geometries like a finite-element method, while keeping the exponential convergence rate of (pseudo)spectral methods. As in the elastic and acoustic cases, poroelastic wave propagation based upon the SEM involves a diagonal mass matrix, which leads to explicit time integration schemes that are well-suited to simulations on parallel computers. Effects associated with physical dispersion & attenuation and frequency-dependent viscous resistance are addressed by using a memory variable approach. Various benchmarks involving poroelastic wave propagation in the high- and low-frequency regimes, and acoustic-poroelastic and poroelastic-poroelastic discontinuities, have been successfully performed. We present finite-frequency sensitivity kernels for wave propagation in porous media based upon adjoint methods. We first show that the adjoint equations in porous media are similar to the regular Biot equations upon defining an appropriate adjoint source. Then we present finite-frequency kernels for seismic phases in porous media (e.g., fast P, slow P, and S). These kernels illustrate the sensitivity of seismic observables to structural parameters and form the basis of tomographic inversions. Finally, we show an application of this imaging technique related to the detection of buried landmines and unexploded ordnance (UXO) in porous environments.

  9. Prioritizing individual genetic variants after kernel machine testing using variable selection.

    PubMed

    He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C

    2016-12-01

    Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level and does not directly indicate which SNP(s) within an identified set actually drive the association. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs, and fills the gap between SNP set analysis and biological functional studies. Both simulation studies and a real data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.
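
    The two kernels mentioned above can be computed directly from a genotype matrix coded 0/1/2, as in the sketch below with simulated data.

    ```python
    # Linear and identity-by-state (IBS) kernels from a 0/1/2 genotype matrix.
    import numpy as np

    rng = np.random.default_rng(7)
    G = rng.integers(0, 3, size=(10, 50)).astype(float)   # 10 subjects, 50 SNPs

    K_linear = G @ G.T                                     # linear kernel

    # IBS: average allele sharing, 2 - |g_i - g_j| per SNP, scaled to [0, 1]
    diff = np.abs(G[:, None, :] - G[None, :, :])
    K_ibs = (2 - diff).sum(axis=2) / (2 * G.shape[1])

    print(K_ibs.diagonal())   # IBS of a subject with itself is 1
    ```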

  10. An improved numerical method for the kernel density functional estimation of disperse flow

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Ranjan, Reetesh; Pantano, Carlos

    2014-11-01

    We present an improved numerical method to solve the transport equation for the one-point particle density function (pdf), which can be used to model disperse flows. The transport equation, a hyperbolic partial differential equation (PDE) with a source term, is derived from the Lagrangian equations for a dilute particle system by treating position and velocity as state-space variables. The method approximates the pdf by a discrete mixture of kernel density functions (KDFs) with space- and time-varying parameters and performs a global Rayleigh-Ritz-like least-squares minimization on the state space of velocity. Such an approximation leads to a hyperbolic system of PDEs for the KDF parameters that cannot be written completely in conservation form. This system is solved using a numerical method that is path-consistent, according to the theory of non-conservative hyperbolic equations. The resulting formulation is a Roe-like update that utilizes the local eigensystem information of the linearized system of PDEs. We will present the formulation of the base method, its higher-order extension and further regularization to demonstrate that the method can predict statistics of disperse flows in an accurate, consistent and efficient manner. This project was funded by NSF Project NSF-DMS 1318161.

  11. Rate kernel theory for pseudo-first-order kinetics of diffusion-influenced reactions and application to fluorescence quenching kinetics.

    PubMed

    Yang, Mino

    2007-06-07

    The theoretical foundation of rate kernel equation approaches for diffusion-influenced chemical reactions is presented and applied to explain the kinetics of fluorescence quenching reactions. A many-body master equation is constructed by introducing stochastic terms, which characterize the rates of chemical reactions, into the many-body Smoluchowski equation. A Langevin-type memory equation for the density fields of reactants evolving under the influence of a time-independent perturbation is derived. This equation should be useful in predicting the time evolution of reactant concentrations approaching the steady state attained by the perturbation, as well as the steady-state concentrations themselves. The dynamics of fluctuations occurring in the equilibrium state can be predicted by the memory equation with the perturbation turned off, and the equation may consequently be useful in obtaining the linear response to a time-dependent perturbation. It is found that unimolecular decay processes including the time-independent perturbation can be incorporated into bimolecular reaction kinetics as a Laplace transform variable. As a result, a theory for bimolecular reactions with the unimolecular processes turned off is sufficient to predict the overall reaction kinetics, including the effects of unimolecular reactions and the perturbation. When the present formulation is applied to the steady-state kinetics of fluorescence quenching reactions, the exact relation between fluorophore concentrations and the intensity of the excitation light is derived.

  12. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.

    PubMed

    Eikenberry, Steffen E; Marmarelis, Vasilis Z

    2013-02-01

    We propose a new variant of the Volterra-type model with a nonlinear auto-regressive (NAR) component that is a suitable framework for describing the process of AP generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation of most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous action potential (AP) is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever the membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.

  13. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

    develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of...automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool

  14. Forced Ignition Study Based On Wavelet Method

    NASA Astrophysics Data System (ADS)

    Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.

    2011-05-01

    The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets, is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.

  15. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel.

    PubMed

    Baczewski, Andrew D; Bond, Stephen D

    2013-07-28

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
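
    A minimal extended-variable scheme of this flavor, for a one-term Prony kernel K(t) = (c/τ)e^(-t/τ), replaces the memory integral by an auxiliary variable so that no velocity history is stored. The Euler-Maruyama discretization and parameter values below are illustrative; the LAMMPS integrator derived in the paper is more careful about stability and moment conservation.

    ```python
    # Extended-variable GLE with a one-term Prony (exponential) memory kernel
    # K(t) = (c/tau)*exp(-t/tau): the auxiliary variable z carries the memory
    # force and its fluctuation-dissipation-consistent noise. Sketch only.
    import numpy as np

    kT, m, c, tau = 1.0, 1.0, 1.0, 0.5
    dt, nsteps = 1e-3, 200_000
    rng = np.random.default_rng(8)

    v, z = 0.0, 0.0
    vs = np.empty(nsteps)
    for i in range(nsteps):
        # m*dv/dt = z, where z encodes the convolution with K(t)
        v += dt * z / m
        z += dt * (-z / tau - c * v / tau) + np.sqrt(2 * kT * c * dt) / tau * rng.normal()
        vs[i] = v

    # fluctuation-dissipation check: <v^2> should approach kT/m = 1
    print("<v^2> =", vs[nsteps // 2:].var())
    ```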

  16. Nonconservative Lagrangian Mechanics: Purely Causal Equations of Motion

    NASA Astrophysics Data System (ADS)

    Dreisigmeyer, David W.; Young, Peter M.

    2015-06-01

    This work builds on the Volterra series formalism presented in Dreisigmeyer and Young (J Phys A 36: 8297, 2003) to model nonconservative systems. Here we treat Lagrangians and actions as `time dependent' Volterra series. We present a new family of kernels to be used in these Volterra series that allow us to derive a single retarded equation of motion using a variational principle.

  17. Influence of Initial Correlations on Evolution of a Subsystem in a Heat Bath and Polaron Mobility

    NASA Astrophysics Data System (ADS)

    Los, Victor F.

    2017-08-01

    A regular approach to accounting for initial correlations, which allows one to go beyond the unrealistic random-phase (initial product state) approximation in deriving the evolution equations, is suggested. Exact homogeneous (time-convolution and time-convolutionless) equations for a relevant part of the two-time equilibrium correlation function of the dynamic variables of a subsystem interacting with a boson field (heat bath) are obtained. No conventional approximation such as the RPA or Bogoliubov's principle of weakening of initial correlations is used. The obtained equations take the initial correlations into account in the kernel governing their evolution. The solution to these equations is found in the second order of the kernel expansion in the electron-phonon interaction, which demonstrates that the initial correlations generally influence the correlation function's evolution in time. It is explicitly shown that this influence vanishes on a large timescale (actually at t → ∞) and the evolution process enters an irreversible kinetic regime. The developed approach is applied to the Fröhlich polaron, and the low-temperature polaron mobility (which has been under long-time debate) is found with a correction due to initial correlations.

  18. Fracture and fatigue analysis of functionally graded and homogeneous materials using singular integral equation approach

    NASA Astrophysics Data System (ADS)

    Zhao, Huaqing

    There are two major objectives of this thesis work. One is to study theoretically the fracture and fatigue behavior of both homogeneous and functionally graded materials, with or without crack bridging. The other is to further develop the singular integral equation approach in solving mixed boundary value problems. The newly developed functionally graded materials (FGMs) have attracted considerable research interest as candidate materials for structural applications ranging from aerospace to automobiles to manufacturing. From the mechanics viewpoint, the unique feature of FGMs is that their resistance to deformation, fracture and damage varies spatially. In order to guide the microstructure selection and the design and performance assessment of components made of functionally graded materials, in this thesis work, a series of theoretical studies has been carried out on the mode I stress intensity factors and crack opening displacements for FGMs with different combinations of geometry and material under various loading conditions, including: (1) a functionally graded layer under uniform strain, far field pure bending and far field axial loading, (2) a functionally graded coating on an infinite substrate under uniform strain, and (3) a functionally graded coating on a finite substrate under uniform strain, far field pure bending and far field axial loading.

    In solving crack problems in homogeneous and non-homogeneous materials, a very powerful singular integral equation (SIE) method has been developed since the 1960s by Erdogan and associates to solve mixed boundary value problems. However, some of the kernel functions developed earlier are incomplete and possibly erroneous. In this thesis work, mode I fracture problems in a homogeneous strip are reformulated and accurate singular Cauchy type kernels are derived. Very good convergence rates and consistency with standard data are achieved. Other kernel functions are subsequently developed for mode I fracture in functionally graded materials. This work provides a solid foundation for further applications of the singular integral equation approach to fracture and fatigue problems in advanced composites.

    The concept of crack bridging is a unifying theory for fracture at various length scales, from atomic cleavage to rupture of concrete structures. However, most of the previous studies are limited to small scale bridging analyses, although large scale bridging conditions prevail in engineering materials. In this work, a large scale bridging analysis is included within the framework of the singular integral equation approach. This allows us to study fracture, fatigue and toughening mechanisms in advanced materials with crack bridging. As an example, the fatigue crack growth of grain bridging ceramics is studied.

    With the advent of composite materials technology, more complex material microstructures are being introduced, and more mechanics issues such as inhomogeneity and nonlinearity come into play. Improved mathematical and numerical tools need to be developed to allow theoretical modeling of these materials. This thesis work is an attempt to meet these challenges by making contributions to both micromechanics modeling and applied mathematics. It sets the stage for further investigations of a wide range of problems in the deformation and fracture of advanced engineering materials.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cirilo-Lombardo, Diego Julio; Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna

    The central role played by pseudodifferential operators in relativistic dynamics is known very well. In this work, operators like the Schrodinger one (e.g., square root) are treated from the point of view of the non-local pseudodifferential Green functions. Starting from the explicit construction of the Green (semigroup) theoretical kernel, a theorem linking the integrability conditions and their dependence on the spacetime dimensions is given. Relativistic wave equations with arbitrary spin and the causality problem are discussed with the algebraic interpretation of the radical operator and their relation with coherent and squeezed states. Also, by means of purely theoretical procedures (based on physical concepts and symmetry), we construct the relativistic position operator which satisfies the conditions of integrability: it is non-local, Lorentz invariant and does not have the same problems as the “local” position operator proposed by Newton and Wigner. Physical examples, such as zitterbewegung and rogue waves, are presented and deeply analyzed in this theoretical framework.

  20. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over the parametric frameworks used by conventional methods. Several parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
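
    Since KRMM is an R package, here is a hedged Python analogue of the GBLUP-versus-RKHS comparison using scikit-learn's kernel ridge regression; the genotypes, effect sizes, and Gaussian-kernel bandwidth are simulated assumptions.

    ```python
    # GBLUP-style linear kernel ridge vs. RKHS regression with a Gaussian
    # kernel, sketched with scikit-learn on simulated genotypes/phenotypes.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(9)
    G = rng.integers(0, 3, size=(150, 300)).astype(float)   # marker matrix
    beta = rng.normal(0, 0.1, 300)
    y = G @ beta + 0.5 * rng.normal(size=150)                # additive trait

    train, test = slice(0, 100), slice(100, 150)
    models = {
        "GBLUP (linear kernel ridge)": KernelRidge(kernel="linear", alpha=1.0),
        "RKHS (Gaussian kernel)": KernelRidge(kernel="rbf", gamma=1e-3, alpha=1.0),
    }
    for name, model in models.items():
        model.fit(G[train], y[train])
        r = np.corrcoef(model.predict(G[test]), y[test])[0, 1]
        print(f"{name}: predictive correlation = {r:.2f}")
    ```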

  1. Derivation of aerodynamic kernel functions

    NASA Technical Reports Server (NTRS)

    Dowell, E. H.; Ventres, C. S.

    1973-01-01

    The method of Fourier transforms is used to determine the kernel function which relates the pressure on a lifting surface to the prescribed downwash within the framework of Dowell's (1971) shear flow model. This model is intended to improve upon the potential flow aerodynamic model by allowing for the aerodynamic boundary layer effects neglected in the potential flow model. For simplicity, incompressible, steady flow is considered. The proposed method is illustrated by deriving known results from potential flow theory.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Baker, Nathan A.; Li, Xiantao

    We present a data-driven approach to determine the memory kernel and random noise of the generalized Langevin equation. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grain variables. Further, we show that such an approximation can be constructed to arbitrarily high order. Within these approximations, the generalized Langevin dynamics can be embedded in an extended stochastic model without memory. We demonstrate how to introduce the stochastic noise so that the fluctuation-dissipation theorem is exactly satisfied.
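
    Because a rational function of the Laplace variable corresponds to a sum of exponentials in time, one practical route is to fit the measured kernel with a short Prony series, as sketched below on synthetic data; the two-term form and starting values are assumptions.

    ```python
    # Fitting a memory kernel with a two-term Prony (exponential) series,
    # the time-domain counterpart of a rational Laplace-domain approximation.
    import numpy as np
    from scipy.optimize import curve_fit

    def prony2(t, a1, l1, a2, l2):
        return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

    t = np.linspace(0, 5, 100)
    rng = np.random.default_rng(10)
    k_meas = prony2(t, 0.7, 2.0, 0.3, 0.4) + 0.005 * rng.normal(size=t.size)

    popt, _ = curve_fit(prony2, t, k_meas, p0=[1.0, 1.5, 0.5, 0.3])
    print("fitted (a1, l1, a2, l2):", np.round(popt, 3))
    ```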

  4. Pattern formation of microtubules and motors: inelastic interaction of polar rods.

    PubMed

    Aranson, Igor S; Tsimring, Lev S

    2005-05-01

    We derive a model describing spatiotemporal organization of an array of microtubules interacting via molecular motors. Starting from a stochastic model of inelastic polar rods with a generic anisotropic interaction kernel we obtain a set of equations for the local rods concentration and orientation. At large enough mean density of rods and concentration of motors, the model describes orientational instability. We demonstrate that the orientational instability leads to the formation of vortices and (for large density and/or kernel anisotropy) asters seen in recent experiments.

  5. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property of kernel methods for support vector machines (SVMs), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix for indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance relative to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, we can find a suggested optimal λ for the projection method using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in pursuit of a precise method for determining the optimal λ in an unconstrained optimization framework. We develop a perturbed von Neumann divergence to measure kernel relationships. We compare optimal λ determination with the Logdet divergence and the perturbed von Neumann divergence, aiming at finding a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ given by the Logdet divergence demonstrates near-optimal performance, and the perturbed von Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new direction in kernel SVMs for varied objectives.
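
    The spectrum methods that the projection method generalizes are easy to state: eigendecompose the symmetric similarity matrix, then clip negative eigenvalues (denoising) or take their absolute values (flipping). The toy matrix below is invented for illustration.

    ```python
    # Spectrum fixes for an indefinite similarity matrix: clip negative
    # eigenvalues (denoising) or take |eigenvalues| (flipping). Toy matrix.
    import numpy as np

    S = np.array([[1.0, 0.9, -0.4],
                  [0.9, 1.0, 0.3],
                  [-0.4, 0.3, 1.0]])       # symmetric but indefinite

    w, U = np.linalg.eigh(S)
    K_clip = U @ np.diag(np.maximum(w, 0)) @ U.T   # denoising: clip
    K_flip = U @ np.diag(np.abs(w)) @ U.T          # flipping: |eigenvalues|

    print("original eigenvalues:", np.round(w, 3))
    print("clipped PSD check:", np.linalg.eigvalsh(K_clip).min() >= -1e-12)
    ```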

  6. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  7. Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Rusmanugroho, H.; Tromp, J.

    2014-12-01

    Recent studies show that isotropic seismic imaging based on adjoint methods reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration such as Reverse Time Migration (RTM). Here, we derive new expressions for the sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, with the tilt angle θ equal to 90°, and for Tilted Transverse Isotropy (TTI), they depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two approaches. Individual kernels ("images") are numerically constructed from the interaction between the regular and adjoint wavefields in smoothed models, which in practice are estimated through Full-Waveform Inversion (FWI). The final image is obtained by summing over all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of the sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.

  8. A note on the self-similar solutions to the spontaneous fragmentation equation

    NASA Astrophysics Data System (ADS)

    Breschi, Giancarlo; Fontelos, Marco A.

    2017-05-01

    We provide a method to compute self-similar solutions for various fragmentation equations and use it to compute their asymptotic behaviours. Our procedure is applied to specific cases: (i) the case of mitosis, where fragmentation results into two identical fragments, (ii) fragmentation limited to the formation of sufficiently large fragments, and (iii) processes with fragmentation kernel presenting a power-like behaviour.

  9. An Adaptive Genetic Association Test Using Double Kernel Machines.

    PubMed

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests that consider all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select the important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within a pathway and test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the selected subset of genes. An appealing feature of the kernel machine framework is that it provides a flexible and unified method for multi-dimensional modeling of the genetic pathway effect, allowing for both parametric and nonparametric components. This DKM approach is illustrated with applications to simulated data as well as to data from a neuroimaging genetics study.
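
    A minimal sketch of the second-stage kernel machine test, assuming the standard variance-component score form Q = r'Kr with a Gaussian kernel and a permutation null; the exact GKM/LSKM procedures and their analytic null distributions are in the cited literature, and all names and parameters below are illustrative.

    import numpy as np

    def gaussian_kernel(Z, gamma=1.0):
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kernel_score_test(y, Z, gamma=1.0, n_perm=2000, seed=0):
        # Variance-component score statistic Q = r' K r, with r = y - mean(y).
        # P-value by permutation, a simple stand-in for the chi-square
        # mixture null used in the kernel machine literature.
        rng = np.random.default_rng(seed)
        K = gaussian_kernel(Z, gamma)
        r = y - y.mean()
        q_obs = r @ K @ r
        q_null = np.empty(n_perm)
        for b in range(n_perm):
            rp = rng.permutation(r)
            q_null[b] = rp @ K @ rp
        return q_obs, float((q_null >= q_obs).mean())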

  10. Development of full wave code for modeling RF fields in hot non-uniform plasmas

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows plasma resonances to be resolved efficiently and adapts to the complexity of the antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All components of the code are parallelized using MPI and OpenMP libraries to optimize execution speed and memory usage. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.

  11. Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2015-11-01

    FAR-TECH, Inc. is developing a suite of full wave RF codes for hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test particle orbits. For tokamak applications, a 2-D version of the code is being developed. Progress of this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) it utilizes the localized nature of the plasma dielectric response to the RF field and calculates this response numerically without approximations; 2) it uses an adaptive grid to better resolve resonances in the plasma and antenna structures; 3) it uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.

  12. Structural graph-based morphometry: A multiscale searchlight framework based on sulcal pits.

    PubMed

    Takerkart, Sylvain; Auzias, Guillaume; Brun, Lucile; Coulon, Olivier

    2017-01-01

    Studying the topography of the cortex has proved valuable for characterizing populations of subjects. In particular, recent interest in the deepest parts of the cortical sulci - the so-called sulcal pits - has opened new avenues in that regard. In this paper, we introduce the first fully automatic brain morphometry method based on the study of the spatial organization of sulcal pits - Structural Graph-Based Morphometry (SGBM). Our framework uses attributed graphs to model local patterns of sulcal pits, and relies on three original contributions. First, a graph kernel is defined to provide a new similarity measure between pit-graphs, with few parameters that can be efficiently estimated from the data. Second, we present the first searchlight scheme dedicated to brain morphometry, yielding dense information maps covering the full cortical surface. Finally, a multi-scale inference strategy is designed to jointly analyze the searchlight information maps obtained at different spatial scales. We demonstrate the effectiveness of our framework by studying gender differences and cortical asymmetries: we show that SGBM can both localize informative regions and estimate their spatial scales, while providing results consistent with the literature. Thanks to the modular design of our kernel and the vast array of available kernel methods, SGBM can easily be extended to include a more detailed description of sulcal patterns and to solve different statistical problems. We therefore suggest that the SGBM framework should be useful both for reaching a better understanding of the normal brain and for defining imaging biomarkers in clinical settings.
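
    As a much simplified illustration of comparing attributed pit-graphs, the sketch below averages a Gaussian kernel on node attributes over all cross-graph pairs of pits; the paper's kernel is more structured, and the positions/depths used here are placeholder attributes of our choosing.

    import numpy as np

    def pit_graph_kernel(G1, G2, sigma_pos=10.0, sigma_depth=1.0):
        # G = (positions: n x 3 array, depths: length-n array) for the
        # sulcal pits of one subject. Similarity is an average Gaussian
        # kernel over all cross-graph pit pairs -- a toy stand-in for the
        # SGBM pit-graph kernel.
        P1, d1 = G1
        P2, d2 = G2
        dp = ((P1[:, None, :] - P2[None, :, :]) ** 2).sum(-1)
        dd = (d1[:, None] - d2[None, :]) ** 2
        K = np.exp(-dp / (2 * sigma_pos**2)) * np.exp(-dd / (2 * sigma_depth**2))
        return K.sum() / (len(d1) * len(d2))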

  13. Relationship Between Integro-Differential Schrodinger Equation with a Symmetric Kernel and Position-Dependent Effective Mass

    NASA Astrophysics Data System (ADS)

    Khosropour, B.; Moayedi, S. K.; Sabzali, R.

    2018-07-01

    The integro-differential Schrödinger equation (IDSE), introduced by physicists, plays an important role in several areas of science. The purpose of this paper is twofold. First, we study the relationship between the integro-differential Schrödinger equation with a symmetric non-local potential and the one-dimensional Schrödinger equation with a position-dependent effective mass. Second, we show that the quantum Hamiltonian for a particle with position-dependent mass is converted, after applying Liouville-Green transformations, into a quantum Hamiltonian for a particle with constant mass.

  14. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The use of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. Selection and estimation are achieved by penalized least squares with a penalty that encourages a sparse structure of the additive components. Theoretical properties, such as the rate of convergence, are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  15. Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code

    DOE PAGES

    Mendis, Charith; Bosboom, Jeffrey; Wu, Kevin; ...

    2015-06-03

    Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide's state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically; instead, we rely on dynamic traces to regenerate the kernels. We perform buffer structure reconstruction to identify input, intermediate and output buffer shapes. We then abstract from a forest of concrete dependency trees, which contain absolute memory addresses, to symbolic trees suitable for high-level code generation. This is done by canonicalizing the trees, clustering them based on structure, inferring higher-dimensional buffer accesses and, finally, solving a set of linear equations based on buffer accesses to lift them into simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop, giving a 75% performance improvement; four kernels from IrfanView, yielding a 4.97x speedup; and one stencil from the miniGMG multigrid benchmark, netting a 4.25x improvement in performance. We manually rejuvenated Photoshop by replacing eleven of its filters with our lifted implementations, giving a 1.12x speedup without affecting the user experience.

  16. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures in long-term EEG recordings employing log-Euclidean Gaussian kernel-based sparse representation (SR). Unlike traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework performs seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs, and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over a dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. Classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
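
    The kernel at the heart of this method admits a compact implementation: embed each SPD covariance descriptor via the matrix logarithm and apply a Gaussian kernel in the resulting flat space. A sketch under assumed notation (the bandwidth sigma and the ridge size are illustrative):

    import numpy as np
    from scipy.linalg import logm

    def covariance_descriptor(epoch):
        # SPD descriptor of a (channels x samples) EEG epoch; a small ridge
        # keeps the matrix strictly positive definite so logm is well defined.
        C = np.cov(epoch)
        return C + 1e-6 * np.eye(C.shape[0])

    def log_euclidean_gaussian_kernel(S1, S2, sigma=1.0):
        # k(S1, S2) = exp(-||logm(S1) - logm(S2)||_F^2 / (2 sigma^2)),
        # i.e. a Gaussian kernel after the log-Euclidean embedding of the
        # SPD manifold into a vector space.
        D = np.real(logm(S1)) - np.real(logm(S2))
        return float(np.exp(-np.linalg.norm(D, 'fro')**2 / (2 * sigma**2)))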

  17. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in closed form but in infinite series that converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using the FFT are discussed. Moreover, boundary integral equations of a combined single- and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.

  18. On exponential stability of linear Levin-Nohel integro-differential equations

    NASA Astrophysics Data System (ADS)

    Tien Dung, Nguyen

    2015-02-01

    The aim of this paper is to investigate the exponential stability for linear Levin-Nohel integro-differential equations with time-varying delays. To the best of our knowledge, the exponential stability for such equations has not yet been discussed. In addition, since we do not require that the kernel and delay are continuous, our results improve those obtained in Becker and Burton [Proc. R. Soc. Edinburgh, Sect. A: Math. 136, 245-275 (2006)]; Dung [J. Math. Phys. 54, 082705 (2013)]; and Jin and Luo [Comput. Math. Appl. 57(7), 1080-1088 (2009)].
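
    For orientation, the class of equations studied is commonly written in the following form (our notation):

    % Linear Levin-Nohel integro-differential equation with time-varying delay:
    x'(t) = -\int_{t-\tau(t)}^{t} a(t,s)\, x(s)\, \mathrm{d}s, \qquad t \ge 0,
    % where neither the kernel a(t,s) nor the delay \tau(t) is required
    % to be continuous.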

  19. Computational Modeling of Neurotransmitter Release Evoked by Electrical Stimulation: Nonlinear Approaches to Predicting Stimulation-Evoked Dopamine Release.

    PubMed

    Trevathan, James K; Yousefi, Ali; Park, Hyung Ook; Bartoletta, John J; Ludwig, Kip A; Lee, Kendall H; Lujan, J Luis

    2017-02-15

    Neurochemical changes evoked by electrical stimulation of the nervous system have been linked to both therapeutic and undesired effects of neuromodulation therapies used to treat obsessive-compulsive disorder, depression, epilepsy, Parkinson's disease, stroke, hypertension, tinnitus, and many other indications. In fact, interest in better understanding the role of neurochemical signaling in neuromodulation therapies has been a focus of recent government- and industry-sponsored programs whose ultimate goal is to usher in an era of personalized medicine by creating neuromodulation therapies that respond to real-time changes in patient status. A key element to achieving these precision therapeutic interventions is the development of mathematical modeling approaches capable of describing the nonlinear transfer function between neuromodulation parameters and evoked neurochemical changes. Here, we propose two computational modeling frameworks, based on artificial neural networks (ANNs) and Volterra kernels, that can characterize the input/output transfer functions of stimulation-evoked neurochemical release. We evaluate the ability of these modeling frameworks to characterize subject-specific neurochemical kinetics by accurately describing stimulation-evoked dopamine release across rodent (R² = 0.83 Volterra kernel, R² = 0.86 ANN), swine (R² = 0.90 Volterra kernel, R² = 0.93 ANN), and non-human primate (R² = 0.98 Volterra kernel, R² = 0.96 ANN) models of brain stimulation. Ultimately, these models will not only improve understanding of neurochemical signaling in healthy and diseased brains but also facilitate the development of neuromodulation strategies capable of controlling neurochemical release via closed-loop strategies.
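
    A discrete second-order Volterra predictor of the kind used here maps a stimulation train to evoked release through first- and second-order kernels. The sketch below is generic; the kernel memory length is an illustrative assumption, and in practice h1 and h2 would be estimated from recordings.

    import numpy as np

    def volterra_predict(x, h0, h1, h2):
        # y[n] = h0 + sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]
        # x: stimulation input; h1: length-M kernel; h2: M x M kernel.
        M = len(h1)
        xp = np.concatenate([np.zeros(M - 1), np.asarray(x, float)])  # pad history
        y = np.full(len(x), float(h0))
        for n in range(len(x)):
            w = xp[n:n + M][::-1]        # most recent sample first
            y[n] += h1 @ w + w @ h2 @ w
        return y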

  20. Reduced-Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.

  1. Reduced Order Models Based on Linear and Nonlinear Aerodynamic Impulse Responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1999-01-01

    This paper discusses a method for the identification and application of reduced-order models based on linear and nonlinear aerodynamic impulse responses. The Volterra theory of nonlinear systems and an appropriate kernel identification technique are described. Insight into the nature of kernels is provided by applying the method to the nonlinear Riccati equation in a non-aerodynamic application. The method is then applied to a nonlinear aerodynamic model of an RAE 2822 supercritical airfoil undergoing plunge motions using the CFL3D Navier-Stokes flow solver with the Spalart-Allmaras turbulence model. Results demonstrate the computational efficiency of the technique.

  2. Omnibus Risk Assessment via Accelerated Failure Time Kernel Machine Modeling

    PubMed Central

    Sinnott, Jennifer A.; Cai, Tianxi

    2013-01-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai et al., 2011). In this paper, we derive testing and prediction methods for KM regression under the accelerated failure time model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. PMID:24328713

  3. Bose–Einstein condensation temperature of finite systems

    NASA Astrophysics Data System (ADS)

    Xie, Mi

    2018-05-01

    In studies of the Bose–Einstein condensation of ideal gases in finite systems, the divergence problem usually arises in the equation of state. In this paper, we present a technique based on the heat kernel expansion and zeta function regularization to solve the divergence problem, and obtain the analytical expression of the Bose–Einstein condensation temperature for general finite systems. The result is represented by the heat kernel coefficients, where the asymptotic energy spectrum of the system is used. Besides the general case, for systems with exact spectra, e.g. ideal gases in an infinite slab or in a three-sphere, the sums of the spectra can be obtained exactly and the calculation of corrections to the critical temperatures is more direct. For a system confined in a bounded potential, the form of the heat kernel is different from the usual heat kernel expansion. We show that as long as the asymptotic form of the global heat kernel can be found, our method works. For Bose gases confined in three- and two-dimensional isotropic harmonic potentials, we obtain the higher-order corrections to the usual results of the critical temperatures. Our method can also be applied to the problem of generalized condensation, and we give the correction of the boundary on the second critical temperature in a highly anisotropic slab.

  4. Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.

    PubMed

    Cheng, Ching-An; Huang, Han-Pang

    2016-12-01

    We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert spaces. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation (the Lagrangian kernels) and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By this property, our model uses only inputs and outputs, as in machine learning, and inherits the structured form of system dynamics, thereby removing the need for mundane derivations for new systems as well as the generalization problem of learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.

  5. On the Boltzmann Equation with Stochastic Kinetic Transport: Global Existence of Renormalized Martingale Solutions

    NASA Astrophysics Data System (ADS)

    Punshon-Smith, Samuel; Smith, Scott

    2018-02-01

    This article studies the Cauchy problem for the Boltzmann equation with stochastic kinetic transport. Under a cut-off assumption on the collision kernel and a coloring hypothesis for the noise coefficients, we prove the global existence of renormalized (in the sense of DiPerna/Lions) martingale solutions to the Boltzmann equation for large initial data with finite mass, energy, and entropy. Our analysis includes a detailed study of weak martingale solutions to a class of linear stochastic kinetic equations. This study includes a criterion for renormalization, the weak closedness of the solution set, and tightness of velocity averages in L¹.

  6. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can achieve competitive prediction performance in certain situations, and performance comparable to the traditional squared norm penalty in other cases. The data sparsity method can therefore serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
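
    A bare-bones version of kernel quantile regression minimizes the pinball (check) loss over functions f(x) = Σ_i α_i k(x_i, x) with an RKHS-norm penalty. The subgradient-descent sketch below omits the paper's data-sparsity constraint, and all hyperparameters are illustrative.

    import numpy as np

    def kernel_quantile_fit(X, y, tau=0.5, gamma=1.0, lam=1e-2,
                            lr=0.1, n_iter=500):
        # Pinball loss rho_tau(r) = r * (tau - 1{r < 0}); fit coefficients
        # alpha of f = K @ alpha by subgradient descent with an RKHS
        # penalty lam * alpha' K alpha.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * d2)
        alpha = np.zeros(len(y))
        for _ in range(n_iter):
            r = y - K @ alpha
            g = np.where(r > 0, -tau, 1.0 - tau)   # d(loss)/d f
            alpha -= lr * (K @ g / len(y) + lam * (K @ alpha))
        return alpha, K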

  7. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
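
    As a minimal illustration of the stream mechanism discussed above, the CuPy sketch below issues two independent kernels on separate CUDA streams so the runtime may overlap them when occupancy allows. CuPy is used only because this document has no code of its own; G-SDMS itself is built against the CUDA C framework, and the workloads here are placeholders.

    import cupy as cp

    x = cp.random.rand(1 << 20)
    s1 = cp.cuda.Stream(non_blocking=True)
    s2 = cp.cuda.Stream(non_blocking=True)

    with s1:
        a = cp.sin(x) * 2.0      # work queued on stream 1
    with s2:
        b = cp.sqrt(x) + 1.0     # work queued on stream 2

    s1.synchronize()             # wait for each stream's kernels to finish
    s2.synchronize()
    print(float(a.sum()), float(b.sum()))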

  8. An Adaptive Genetic Association Test Using Double Kernel Machines

    PubMed Central

    Zhan, Xiang; Epstein, Michael P.; Ghosh, Debashis

    2014-01-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests that consider all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select the important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within a pathway and test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the selected subset of genes. An appealing feature of the kernel machine framework is that it provides a flexible and unified method for multi-dimensional modeling of the genetic pathway effect, allowing for both parametric and nonparametric components. This DKM approach is illustrated with applications to simulated data as well as to data from a neuroimaging genetics study. PMID:26640602

  9. Goldstonic pseudoscalar mesons in Bethe-Salpeter-inspired setting

    NASA Astrophysics Data System (ADS)

    Lucha, Wolfgang; Schöberl, Franz F.

    2018-03-01

    For a two-particle bound-state equation closer to its Bethe-Salpeter origins than Salpeter's equation, with an effective interaction kernel deliberately constructed to ensure, in the limit of zero mass of the bound-state constituents, the vanishing of the arising bound-state mass, we scrutinize the emerging features of the lightest pseudoscalar mesons for their agreement with the behavior predicted by a generalization of the Gell-Mann-Oakes-Renner relation.

  10. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The use of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of functional principal components, which greatly facilitates implementation and theoretical analysis. Selection and estimation are achieved by penalized least squares with a penalty that encourages a sparse structure of the additive components. Theoretical properties, such as the rate of convergence, are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  11. CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.

    2015-10-20

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that are commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
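
    Schematically, the likelihood treats the residual spectrum as a Gaussian process whose covariance sums a noise term, a global stationary kernel, and low-rank local kernels placed on outlier lines. The NumPy sketch below uses an RBF stand-in for the global kernel and Gaussian bumps for the local ones; the full formalism lives in the Starfish package, and all names and forms here are simplifying assumptions.

    import numpy as np

    def residual_log_likelihood(resid, wave, sigma_n, amp, ell, local_kernels=()):
        # C = noise + global kernel + sum of local "outlier" kernels.
        # local_kernels: iterable of (center, amplitude, width) triples.
        dw = wave[:, None] - wave[None, :]
        C = sigma_n**2 * np.eye(len(wave)) \
            + amp**2 * np.exp(-0.5 * dw**2 / ell**2)   # global kernel (RBF stand-in)
        for mu, a_loc, w_loc in local_kernels:
            bump = np.exp(-0.5 * (wave - mu)**2 / w_loc**2)
            C += a_loc**2 * np.outer(bump, bump)       # inflates variance at that line
        sign, logdet = np.linalg.slogdet(C)
        return -0.5 * (resid @ np.linalg.solve(C, resid)
                       + logdet + len(wave) * np.log(2 * np.pi))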

  12. Moisture Sorption Isotherms and Properties of Sorbed Water of Neem ( Azadirichta indica A. Juss) Kernels

    NASA Astrophysics Data System (ADS)

    Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.

    2017-01-01

    The neem tree, which grows abundantly in India as well as in some regions of Asia and Africa, gives fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic value and has recently been projected as an important raw material for the production of biodiesel. The seed is harvested at high moisture contents, which leads to high post-harvest losses. In this paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60°C to establish a database useful for defining drying and storage conditions of neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of sorbed water, such as the monolayer moisture content, surface area of the adsorbent, number of adsorbed monolayers, and percentage of bound water, are also determined. The critical moisture content necessary for safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity of less than 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is the first, to the best of our knowledge, to give the important parameters necessary for drying and storage of neem kernels, a potential raw material for the production of oil to be used in pharmaceutics, cosmetics, and biodiesel manufacturing.
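
    One of the standard models fitted to such data is the GAB isotherm, which also yields the monolayer moisture content directly. A fitting sketch with SciPy; the data points below are made up for illustration and are not the paper's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def gab(aw, m0, c, k):
        # GAB isotherm: equilibrium moisture content vs water activity aw;
        # m0 is the monolayer moisture content.
        return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

    aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.65, 0.75])  # illustrative
    m = np.array([3.1, 4.0, 4.6, 5.4, 6.3, 8.0, 10.5])         # % dry basis
    (m0, c, k), _ = curve_fit(gab, aw, m, p0=(4.0, 10.0, 0.8))
    print(f"monolayer moisture = {m0:.2f}% d.b., C = {c:.1f}, K = {k:.2f}")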

  13. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization

    NASA Astrophysics Data System (ADS)

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-02-01

    To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, the dielectric properties of almond kernels were measured using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b., and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both the dielectric constant and loss factor of almond kernels increased with increasing temperature and moisture content, the increase being largest at the higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R² greater than 0.967. The penetration depth of the electromagnetic wave into the samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). Temperature profiles of RF heated almond kernels at three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential to be used in practice for pasteurization of almond kernels with acceptable heating uniformity.
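
    The penetration depth reported here is conventionally computed from the measured dielectric constant and loss factor; for reference, the standard power penetration depth formula is:

    % Power penetration depth (c: speed of light in vacuum, f: frequency,
    % eps' and eps'': dielectric constant and loss factor):
    d_p = \frac{c}{2\pi f \sqrt{2\varepsilon'}\,
          \left[\sqrt{1 + (\varepsilon''/\varepsilon')^{2}} - 1\right]^{1/2}}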

  14. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization.

    PubMed

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-02-10

    To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, the dielectric properties of almond kernels were measured using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b., and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both the dielectric constant and loss factor of almond kernels increased with increasing temperature and moisture content, the increase being largest at the higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R² greater than 0.967. The penetration depth of the electromagnetic wave into the samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). Temperature profiles of RF heated almond kernels at three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential to be used in practice for pasteurization of almond kernels with acceptable heating uniformity.

  15. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. The algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
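
    Computationally, the sum-space idea reduces to adding kernel matrices of different scales before solving the usual regularized linear system. A minimal sketch, with kernel widths and the ridge parameter chosen purely for illustration:

    import numpy as np

    def sum_space_fit(X, y, gammas=(0.1, 10.0), lam=1e-2):
        # Sum of Gaussian kernels: the wide kernel (small gamma) captures
        # the low-frequency component, the narrow one (large gamma) the
        # high-frequency component. Solve (K + lam I) alpha = y.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = sum(np.exp(-g * d2) for g in gammas)
        alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
        return alpha, K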

  16. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam

    2017-12-01

    This work presents a new urban land cover classification framework using a firefly algorithm (FA)-optimized extreme learning machine (ELM). The FA is adopted to optimize the regularization coefficient C and the Gaussian kernel parameter σ of the kernel ELM. Additionally, the effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three hyperspectral datasets recorded by different sensors, namely HYDICE, HyMap, and AVIRIS, were used. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
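
    For context, kernel ELM has a closed-form solution in which C and σ are the only hyperparameters, which is what makes them natural targets for FA optimization. A sketch of the fit/predict pair under assumed names (the FA search itself is omitted):

    import numpy as np

    def kernel_elm_fit(X, T, C=100.0, gamma=1.0):
        # Closed-form kernel ELM output weights: beta = (I/C + K)^{-1} T,
        # with K the Gaussian kernel matrix on the training data.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * d2)
        return np.linalg.solve(np.eye(len(X)) / C + K, T)

    def kernel_elm_predict(X_train, X_test, beta, gamma=1.0):
        # f(x) = k(x, X_train) @ beta for each test sample.
        d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2) @ beta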

  17. Toward an alternative hardness kernel matrix structure in the Electronegativity Equalization Method (EEM).

    PubMed

    Chaves, J; Barroso, J M; Bultinck, P; Carbó-Dorca, R

    2006-01-01

    This study presents an alternative formulation of the Electronegativity Equalization Method (EEM), in which the usual Coulomb kernel is transformed into a smooth function. The new framework, like classical EEM, permits fast calculation of the atomic charges in a given molecule at small computational cost. The original EEM procedure requires prior calibration of the atomic hardnesses and electronegativities involved, using a chosen set of molecules. In the new EEM algorithm only half as many parameters need to be calibrated, since a relationship between electronegativities and hardnesses has been found.
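
    Whatever the kernel, EEM reduces to a single linear solve: effective electronegativities are equalized subject to a total-charge constraint. The sketch below uses 1/sqrt(R² + σ²) as a stand-in smooth kernel purely for illustration; the paper's actual smoothing function differs.

    import numpy as np

    def eem_charges(chi, eta, R, Q_tot=0.0, sigma=0.5):
        # chi, eta: atomic electronegativities and hardnesses (length n);
        # R: interatomic distance matrix. Off-diagonal Coulomb kernel is
        # smoothed as 1/sqrt(R^2 + sigma^2) -- an illustrative choice only.
        n = len(chi)
        H = 1.0 / np.sqrt(R**2 + sigma**2)
        np.fill_diagonal(H, 2.0 * np.asarray(eta, float))
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = H
        A[:n, n] = -1.0      # unknown equalized electronegativity (multiplier)
        A[n, :n] = 1.0       # charge conservation: sum q_i = Q_tot
        b = np.concatenate([-np.asarray(chi, float), [Q_tot]])
        sol = np.linalg.solve(A, b)
        return sol[:n], sol[n]   # atomic charges, equalized electronegativity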

  18. Features and flaws of a contact interaction treatment of the kaon

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Chang, Lei; Roberts, Craig D.; Schmidt, Sebastian M.; Wan, Shaolong; Wilson, David J.

    2013-04-01

    Elastic and semileptonic transition form factors for the kaon and pion are calculated using the leading order in a global-symmetry-preserving truncation of the Dyson-Schwinger equations and a momentum-independent form for the associated kernels in the gap and Bethe-Salpeter equations. The computed form factors are compared both with those obtained using the same truncation but an interaction that preserves the one-loop renormalization-group behavior of QCD and with data. The comparisons show that in connection with observables revealed by probes with |Q²| ≲ M², where M ≈ 0.4 GeV is an infrared value of the dressed-quark mass, results obtained using a symmetry-preserving regularization of the contact interaction are not realistically distinguishable from those produced by more sophisticated kernels, and available data on kaon form factors do not extend into the domain whereupon one could distinguish among the interactions. The situation differs if one includes the domain Q² > M². Thereupon, a fully consistent treatment of the contact interaction produces form factors that are typically harder than those obtained with QCD renormalization-group-improved kernels. Among other things also described are a Ward identity for the inhomogeneous scalar vertex, similarity between the charge distribution of a dressed u quark in the K⁺ and that of the dressed u quark in the π⁺, and reflections upon the point whereat one might begin to see perturbative behavior in the pion form factor. Interpolations of the form factors are provided, which should assist in working to chart the interaction between light quarks by explicating the impact on hadron properties of differing assumptions about the behavior of the Bethe-Salpeter kernel.

  19. Protein fold recognition using geometric kernel data fusion.

    PubMed

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences, often with machine learning methods, have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches can cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%, an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. With this hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/.
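
    The simplest geometry-inspired fusion of two kernel matrices is their matrix geometric mean, shown below with SciPy; the paper considers this and more involved means of several base kernels, and the ridge term here is a numerical convenience of ours.

    import numpy as np
    from scipy.linalg import fractional_matrix_power as fmp

    def geometric_mean_kernel(K1, K2, eps=1e-8):
        # K1 # K2 = K1^{1/2} (K1^{-1/2} K2 K1^{-1/2})^{1/2} K1^{1/2},
        # the two-matrix geometric mean, as an alternative to a convex
        # linear combination of base kernels.
        n = K1.shape[0]
        K1r = K1 + eps * np.eye(n)       # keep K1 invertible
        K2r = K2 + eps * np.eye(n)
        K1h, K1mh = fmp(K1r, 0.5), fmp(K1r, -0.5)
        G = K1h @ fmp(K1mh @ K2r @ K1mh, 0.5) @ K1h
        return np.real(G + G.T) / 2      # symmetrize away numerical noise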

  20. Fokker-Planck equation for the non-Markovian Brownian motion in the presence of a magnetic field

    NASA Astrophysics Data System (ADS)

    Das, Joydip; Mondal, Shrabani; Bag, Bidhan Chandra

    2017-10-01

    In the present study, we have proposed the Fokker-Planck equation, in a simple way, for a Langevin equation of motion having an ordinary derivative (OD), a Gaussian random force and a generalized frictional memory kernel. The equation may include or omit a conservative force field from a harmonic potential. We extend this method to a charged Brownian particle in the presence of a magnetic field. Thus, the present method is applicable to a Langevin equation of motion with an OD, Gaussian colored thermal noise and any kind of linear force field, conservative or not. It is also simple to apply this method to colored Gaussian noise that is not related to the damping strength.

  1. Fokker-Planck equation for the non-Markovian Brownian motion in the presence of a magnetic field.

    PubMed

    Das, Joydip; Mondal, Shrabani; Bag, Bidhan Chandra

    2017-10-28

    In the present study, we have proposed the Fokker-Planck equation, in a simple way, for a Langevin equation of motion having an ordinary derivative (OD), a Gaussian random force and a generalized frictional memory kernel. The equation may include or omit a conservative force field from a harmonic potential. We extend this method to a charged Brownian particle in the presence of a magnetic field. Thus, the present method is applicable to a Langevin equation of motion with an OD, Gaussian colored thermal noise and any kind of linear force field, conservative or not. It is also simple to apply this method to colored Gaussian noise that is not related to the damping strength.
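
    The underlying dynamics has the schematic form of a generalized Langevin equation with a memory kernel plus the Lorentz force; the notation below is ours and suppresses the details of the noise statistics treated in the paper.

    % Generalized Langevin equation for a charged Brownian particle in a
    % magnetic field (schematic; Gamma: frictional memory kernel,
    % xi: Gaussian colored noise, F: a linear force field):
    m\,\dot{\mathbf{v}}(t) = -\int_{0}^{t} \Gamma(t-s)\,\mathbf{v}(s)\,\mathrm{d}s
        + \mathbf{F}(\mathbf{x}) + \frac{q}{c}\,\mathbf{v}(t)\times\mathbf{B}
        + \boldsymbol{\xi}(t)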

  2. Global solutions to the equation of thermoelasticity with fading memory

    NASA Astrophysics Data System (ADS)

    Okada, Mari; Kawashima, Shuichi

    2017-07-01

    We consider the initial-history value problem for the one-dimensional equation of thermoelasticity with fading memory. It is proved that if the data are smooth and small, then a unique smooth solution exists globally in time and converges to the constant equilibrium state as time goes to infinity. Our proof is based on a technical energy method which makes use of the strict convexity of the entropy function and the properties of strongly positive definite kernels.

  3. ML 3.1 developer's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen

    2004-05-01

    ML development was started in 1997 by Ray Tuminaro and Charles Tong. Currently, there are several full- and part-time developers. The kernel of ML is written in ANSI C, and there is a rich C++ interface for Trilinos users and developers. ML can be customized to run geometric and algebraic multigrid; it can solve a scalar or a vector equation (with constant number of equations per grid node), and it can solve a form of Maxwell's equations. For a general introduction to ML and its applications, we refer to the Users Guide [SHT04], and to the ML web site, http://software.sandia.gov/ml.

  4. Groundstates of the Choquard equations with a sign-changing self-interaction potential

    NASA Astrophysics Data System (ADS)

    Battaglia, Luca; Van Schaftingen, Jean

    2018-06-01

    We consider a nonlinear Choquard equation −Δu + u = (V ∗ |u|^p)|u|^{p−2}u in ℝ^N, when the self-interaction potential V is unbounded from below. Under some assumptions on V and on p, covering p = 2 and V being the one- or two-dimensional Newton kernel, we prove the existence of a nontrivial groundstate solution u ∈ H¹(ℝ^N) \ {0} by solving a relaxed problem by constrained minimization and then proving the convergence of the relaxed solutions to a groundstate of the original equation.

  5. A tensor Banach algebra approach to abstract kinetic equations

    NASA Astrophysics Data System (ADS)

    Greenberg, W.; van der Mee, C. V. M.

    The study deals with a concrete algebraic construction providing the existence theory for abstract kinetic equation boundary-value problems, when the collision operator A is an accretive finite-rank perturbation of the identity operator in a Hilbert space H. An algebraic generalization of the Bochner-Phillips theorem is utilized to study solvability of the abstract boundary-value problem without any regulatory condition. A Banach algebra in which the convolution kernel acts is obtained explicitly, and this result is used to prove a perturbation theorem for bisemigroups, which then plays a vital role in solving the initial equations.

  6. Unsteady free convection flow of viscous fluids with analytical results by employing time-fractional Caputo-Fabrizio derivative (without singular kernel)

    NASA Astrophysics Data System (ADS)

    Ali Shah, Nehad; Mahsud, Yasir; Ali Zafar, Azhar

    2017-10-01

    This article presents a theoretical study of the unsteady free convection flow of an incompressible viscous fluid. The fluid flows near an isothermal vertical plate that undergoes translational motion with time-dependent velocity. The equations governing the fluid flow are expressed as fractional differential equations using a newly defined time-fractional Caputo-Fabrizio derivative without singular kernel. Explicit solutions for velocity, temperature and solute concentration are obtained by applying the Laplace transform technique. As the fractional parameter approaches one, solutions for the ordinary fluid model are recovered from the general solutions of the fractional model. The results show that, for the fractional model, the solutions for velocity, temperature and concentration exhibit a stationary jump discontinuity at t = 0, while the solutions are continuous functions in the case of the ordinary model. Finally, numerical results for the flow features at small times are illustrated through graphs for various pertinent parameters.
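
    For reference, the Caputo-Fabrizio derivative replaces the singular power-law kernel of the usual Caputo derivative with an exponential one; the standard definition, for 0 < α < 1 with normalization M(α), is:

    % Caputo-Fabrizio time-fractional derivative (nonsingular kernel):
    {}^{CF}D_t^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha}
        \int_{0}^{t} f'(s)\, \exp\!\Big(-\frac{\alpha (t-s)}{1-\alpha}\Big)\, \mathrm{d}s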

  7. The product form for path integrals on curved manifolds

    NASA Astrophysics Data System (ADS)

    Grosche, C.

    1988-03-01

    A general and simple framework for treating path integrals on curved manifolds is presented. The crucial point will be a product ansatz for the metric tensor and the quantum Hamiltonian, i.e. we shall write g^{αβ} = h^{αγ} h^{βγ} and H = (1/2m) h^{αγ} p_α p_β h^{βγ} + V + ΔV, respectively, a prescription which we shall call the "product form" definition. The p_α are Hermitian momenta and ΔV is a well-defined quantum correction. We shall show that this ansatz, which looks quite special, is in fact - under reasonable assumptions in quantum mechanics - a very general one. We shall derive the Lagrangian path integral in the "product form" definition and shall also prove that the Schrödinger equation can be derived from the corresponding short-time kernel. We shall discuss briefly an application of this prescription to the problem of free quantum motion on the Poincaré upper half-plane.

  8. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts was run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower end of the range because it provided the minimal acceptable accuracy. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.

  9. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    PubMed

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer.

  10. A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three-dimensional simulation.

  11. Dynamic topography and gravity anomalies for fluid layers whose viscosity varies exponentially with depth

    NASA Technical Reports Server (NTRS)

    Revenaugh, Justin; Parsons, Barry

    1987-01-01

    Adopting the formalism of Parsons and Daly (1983), analytical integral equations (Green's function integrals) are derived which relate gravity anomalies and dynamic boundary topography with temperature as a function of wavenumber for a fluid layer whose viscosity varies exponentially with depth. In the earth, such a viscosity profile may be found in the asthenosphere, where the large thermal gradient leads to exponential decrease of viscosity with depth, the effects of a pressure increase being small in comparison. It is shown that, when viscosity varies rapidly, topography kernels for both the surface and bottom boundaries (and hence the gravity kernel) are strongly affected at all wavelengths.

  12. Aerodynamics Via Acoustics: Application of Acoustic Formulas for Aerodynamic Calculations

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Myers, M. K.

    1986-01-01

    Prediction of aerodynamic loads on bodies in arbitrary motion is considered from an acoustic point of view, i.e., in a frame of reference fixed in the undisturbed medium. An inhomogeneous wave equation which governs the disturbance pressure is constructed and solved formally using generalized function theory. When the observer is located on the moving body surface there results a singular linear integral equation for surface pressure. Two different methods for obtaining such equations are discussed. Both steady and unsteady aerodynamic calculations are considered. Two examples are presented, the more important being an application to propeller aerodynamics. Of particular interest for numerical applications is the analytical behavior of the kernel functions in the various integral equations.

  13. Numerical techniques in radiative heat transfer for general, scattering, plane-parallel media

    NASA Technical Reports Server (NTRS)

    Sharma, A.; Cogley, A. C.

    1982-01-01

    The study of radiative heat transfer with scattering usually leads to the solution of singular Fredholm integral equations. The present paper presents an accurate and efficient numerical method to solve certain integral equations that govern radiative equilibrium problems in plane-parallel geometry for both grey and nongrey, anisotropically scattering media. In particular, the nongrey problem is represented by a spectral integral of a system of nonlinear integral equations in space, which has not been solved previously. The numerical technique is constructed to handle this unique nongrey governing equation as well as the difficulties caused by singular kernels. Example problems are solved and the method's accuracy and computational speed are analyzed.
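
    As a point of reference, the classical Nyström (quadrature) method solves a Fredholm equation of the second kind with a smooth kernel in a few lines; the Python sketch below is illustrative only, since the singular and nongrey cases treated in this paper require the specialized handling the abstract describes. The kernel and forcing function are arbitrary examples.

        # Minimal Nystrom solver for a Fredholm equation of the second kind,
        #   u(x) = f(x) + lam * int_0^1 K(x, t) u(t) dt,
        # with a smooth kernel; singular kernels need specialized quadrature.
        import numpy as np

        def nystrom(f, K, lam=0.5, n=64):
            # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1].
            x, w = np.polynomial.legendre.leggauss(n)
            x = 0.5 * (x + 1.0)
            w = 0.5 * w
            # Collocation system (I - lam * K * W) u = f.
            A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
            u = np.linalg.solve(A, f(x))
            return x, u

        # Example: K(x, t) = exp(-|x - t|), f(x) = 1.
        x, u = nystrom(lambda x: np.ones_like(x),
                       lambda x, t: np.exp(-np.abs(x - t)))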

  14. Coupled Hydrogeophysical Inversion and Hydrogeological Data Fusion

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.; Schwede, R. L.; Li, W.

    2012-12-01

    Tomographic geophysical monitoring methods offer the opportunity to observe hydrogeological tests at higher spatial resolution than is possible with classical hydraulic monitoring tools. This has been demonstrated in a substantial number of studies in which electrical resistivity tomography (ERT) has been used to monitor salt-tracer experiments. It is now accepted that inversion of such data sets requires a fully coupled framework, explicitly accounting for the hydraulic processes (groundwater flow and solute transport), the relationship between solute and geophysical properties (a petrophysical relationship such as Archie's law), and the governing equations of the geophysical surveying techniques (e.g., the Poisson equation) as a consistent coupled system. These data sets can be complemented with data from other, more direct, hydrogeological tests to infer the distribution of hydraulic aquifer parameters. In the inversion framework, meaningful condensation of data not only contributes to inversion efficiency but also increases the stability of the inversion. In particular, transient concentration data themselves depend only weakly on hydraulic conductivity, and model improvement using gradient-based methods is only possible when substantial agreement between measurements and model output already exists. The latter also holds when concentrations are monitored by ERT. Tracer arrival times, by contrast, show high sensitivity and a more monotonic dependence on hydraulic conductivity than the concentrations themselves. Thus, even without using temporal-moment generating equations, inverting travel times rather than concentrations or the related geoelectrical signals is advantageous. We have applied this approach to concentrations measured directly or via ERT, and to heat-tracer data. We present a consistent inversion framework including temporal moments of concentrations, geoelectrical signals obtained during salt-tracer tests, drawdown data from hydraulic tomography, and flowmeter measurements to identify mainly the hydraulic-conductivity distribution. By stating the inversion as a geostatistical conditioning problem, we obtain parameter sets together with their correlated uncertainty. While we have applied the quasi-linear geostatistical approach as the inverse kernel, other methods, such as ensemble Kalman methods, may suit the same purpose, particularly when many data points are to be included. In order to identify 3-D fields discretized by about 50 million grid points, we use the high-performance-computing framework DUNE to solve the involved partial differential equations on midrange computer clusters. We have quantified the worth of different data types in these inference problems. In practical applications, the constitutive relationships between geophysical, thermal, and hydraulic properties can pose a problem, requiring additional inversion. Poorly constrained transient boundary conditions, however, may put inversion efforts on larger (e.g., regional) scales even more into question. We envision that future hydrogeophysical inversion efforts will target boundary conditions, such as groundwater recharge rates, in conjunction with, or instead of, aquifer parameters. By this, the distinction between data assimilation and parameter estimation will gradually vanish.

  15. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
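
    By way of illustration, a toy 1-D SPH density summation with a per-particle smoothing length tied to the distance to the k-th nearest neighbour is sketched below in Python; this mimics the flavour of adaptive kernel estimation but is a simplified stand-in, not the authors' scheme.

        # Toy 1-D SPH density summation with a cubic-spline kernel and an
        # adaptive per-particle smoothing length (half the distance to the
        # k-th nearest neighbour) -- illustrative only.
        import numpy as np

        def cubic_spline(q):
            # Standard cubic spline profile; 1-D normalization 2/(3h)
            # is applied by the caller.
            return np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
                   np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))

        def sph_density(x, m, k=5):
            n = len(x)
            rho = np.empty(n)
            for i in range(n):
                r = np.abs(x - x[i])
                h = np.sort(r)[k] / 2.0 + 1e-12   # adaptive smoothing length
                rho[i] = np.sum(m * cubic_spline(r / h) * (2.0 / (3.0 * h)))
            return rho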

  16. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    Push-based database management system (DBMS) is a new type of data processing software that streams large volume of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogenous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA’s CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545

  17. Blood flow problem in the presence of magnetic particles through a circular cylinder using Caputo-Fabrizio fractional derivative

    NASA Astrophysics Data System (ADS)

    Uddin, Salah; Mohamad, Mahathir; Khalid, Kamil; Abdulhammed, Mohammed; Saifullah Rusiman, Mohd; Che – Him, Norziha; Roslan, Rozaini

    2018-04-01

    In this paper, the flow of blood mixed with magnetic particles, subjected to a uniform transverse magnetic field and a pressure gradient in an axisymmetric circular cylinder, is studied using a recently proposed fractional derivative without a singular kernel. The governing equations are fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivative NFDt. The current results agree well with those of the previous Caputo fractional derivative UFDt.
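
    For reference, the Caputo-Fabrizio derivative of order 0 < alpha < 1 replaces the singular power-law kernel of the Caputo derivative with an exponential one; our transcription of the standard definition, where M(alpha) is a normalization function with M(0) = M(1) = 1:

        {}^{CF}\!D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}
        \int_0^{t} f'(\tau)\,
        \exp\!\Big(-\frac{\alpha\,(t-\tau)}{1-\alpha}\Big)\, d\tau ,
        \qquad 0 < \alpha < 1 .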

  18. Fourier's law of heat conduction: quantum mechanical master equation analysis.

    PubMed

    Wu, Lian-Ao; Segal, Dvira

    2008-06-01

    We derive the macroscopic Fourier's law of heat conduction from the exact gain-loss time-convolutionless quantum master equation under three assumptions for the interaction kernel. To second order in the interaction, we show that the first two assumptions are natural results of the long-time limit. The third assumption can be satisfied by a family of interactions consisting of an exchange effect. The pure exchange model directly leads to energy diffusion in a weakly coupled spin-1/2 chain.

  19. FAST TRACK COMMUNICATION: The nonlinear fragmentation equation

    NASA Astrophysics Data System (ADS)

    Ernst, Matthieu H.; Pagonabarraga, Ignacio

    2007-04-01

    We study the kinetics of nonlinear irreversible fragmentation. Here, fragmentation is induced by interactions/collisions between pairs of particles and modelled by general classes of interaction kernels, for several types of breakage models. We construct initial value and scaling solutions of the fragmentation equations, and apply the 'non-vanishing mass flux' criterion for the occurrence of shattering transitions. These properties enable us to determine the phase diagram for the occurrence of shattering states and of scaling states in the phase space of model parameters.

  20. Pixel-based meshfree modelling of skeletal muscles.

    PubMed

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2016-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for the construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase, multichannel, level-set-based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.

  1. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high-performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures composed of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  2. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
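
    For context, a Gaussian kernel density estimate of a set of registered signal times takes only a few lines with SciPy; the sketch below uses synthetic data, not J-PET signals, and simply illustrates the estimator on which closed-form resolution formulae of this kind are built.

        # Generic KDE sketch for signal arrival times (synthetic data).
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        arrival_times = rng.normal(loc=0.0, scale=0.25, size=400)  # ns

        kde = gaussian_kde(arrival_times)          # bandwidth via Scott's rule
        t = np.linspace(-1.0, 1.0, 201)
        density = kde(t)                           # smoothed time distribution
        fwhm_proxy = 2.355 * arrival_times.std()   # Gaussian FWHM = 2.355 sigma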

  3. Fractional-order difference equations for physical lattices and some applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru

    2015-10-15

    Fractional-order operators for physical lattice models based on the Grünwald-Letnikov fractional differences are suggested. We use an approach based on the models of lattices with long-range particle interactions. The fractional-order operators of differentiation and integration on physical lattices are represented by kernels of lattice long-range interactions. In the continuum limit, these discrete operators of non-integer orders give the fractional-order derivatives and integrals with respect to coordinates of the Grünwald-Letnikov types. As examples of the fractional-order difference equations for physical lattices, we give difference analogs of the fractional nonlocal Navier-Stokes equations and the fractional nonlocal Maxwell equations for lattices with long-range interactions. Continuum limits of these fractional-order difference equations are also suggested.
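
    For reference, the Grünwald-Letnikov fractional difference of order alpha on which such lattice operators are built is (standard definition, our transcription):

        \big(\Delta^{\alpha} f\big)(n) \;=\; \sum_{m=0}^{\infty}
        (-1)^{m} \binom{\alpha}{m} f(n-m),
        \qquad
        \binom{\alpha}{m} = \frac{\alpha(\alpha-1)\cdots(\alpha-m+1)}{m!} .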

  4. A family of wave equations with some remarkable properties.

    PubMed

    da Silva, Priscila Leal; Freire, Igor Leite; Sampaio, Júlio Cesar Santos

    2018-02-01

    We consider a family of homogeneous nonlinear dispersive equations with two arbitrary parameters. Conservation laws are established from the point symmetries and imply that the whole family admits square integrable solutions. Recursion operators are found for two members of the family investigated. For one of them, a Lax pair is also obtained, proving its complete integrability. From the Lax pair, we construct a Miura-type transformation relating the original equation to the Korteweg-de Vries (KdV) equation. This transformation, on the other hand, enables us to obtain solutions of the equation from the kernel of a Schrödinger operator with potential parametrized by the solutions of the KdV equation. In particular, this allows us to exhibit a kink solution to the completely integrable equation from the 1-soliton solution of the KdV equation. Finally, peakon-type solutions are also found for a certain choice of the parameters, although for this particular case the equation is reduced to a homogeneous second-order nonlinear evolution equation.

  5. Interaction with Machine Improvisation

    NASA Astrophysics Data System (ADS)

    Assayag, Gerard; Bloch, George; Cont, Arshia; Dubnov, Shlomo

    We describe two multi-agent architectures for improvisation-oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second framework, the stylistic interaction is delegated to machine intelligence; knowledge propagation and decisions are therefore handled by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modelling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.

  6. Link predication based on matrix factorization by fusion of multi class organizations of the network.

    PubMed

    Jiao, Pengfei; Cai, Fei; Feng, Yiding; Wang, Wenjun

    2017-08-21

    Link prediction aims at forecasting latent or unobserved edges in complex networks and has a wide range of applications in reality. Almost all existing methods and models take advantage of only one class of organization of the network, and thus lose important information hidden in other organizations of the network. In this paper, we propose a link prediction framework which makes the best of the structure of networks at different levels of organization based on nonnegative matrix factorization, called NMF^3 here. We first map the observed network into another space by kernel functions, which yields the different-order organizations. Then we combine the adjacency matrix of the network with one of the other organizations, which gives the objective function of our link prediction framework based on nonnegative matrix factorization. Third, we derive an iterative algorithm to optimize the objective function, which converges to a local optimum, and we propose a fast optimization strategy for large networks. Lastly, we test the proposed framework with two kernel functions on a series of real-world networks under different training-set sizes, and the experimental results show the feasibility, effectiveness, and competitiveness of the proposed framework.
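
    As a rough illustration of the matrix-factorization step only, the Python sketch below factorizes an adjacency matrix with scikit-learn's NMF and scores unobserved pairs by the reconstruction; the paper's NMF^3 objective, which fuses kernel-mapped organizations of the network, is not reproduced here.

        # Generic NMF link-prediction sketch (not the paper's NMF^3 model).
        # A is a dense non-negative adjacency matrix (numpy array).
        import numpy as np
        from sklearn.decomposition import NMF

        def nmf_link_scores(A, rank=16, seed=0):
            model = NMF(n_components=rank, init='nndsvda', max_iter=500,
                        random_state=seed)
            W = model.fit_transform(A)          # A ~ W @ H
            H = model.components_
            S = W @ H                           # reconstruction as score matrix
            S[A > 0] = -np.inf                  # mask already-observed edges
            return S                            # rank pairs by descending score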

  7. Research on offense and defense technology for iOS kernel security mechanism

    NASA Astrophysics Data System (ADS)

    Chu, Sijun; Wu, Hao

    2018-04-01

    iOS is a robust and widely used mobile operating system. Its annual profits make up about 90% of the total profits of all mobile phone brands. Though it is famous for its security, there have been many attacks on the iOS operating system, such as the Trident APT attack in 2016. It is therefore important to research the iOS security mechanism, understand its weaknesses, and put forward a targeted protection and security-check framework. By studying these attacks and previous jailbreak tools, we can see that an attacker could only run ROP code and gain kernel read and write permissions based on the ROP after exploiting kernel- and user-layer vulnerabilities. However, the iOS operating system is still protected by the code-signing mechanism, the sandbox mechanism, and the not-writable mechanism of the system's disk area. This is far from the steady, long-lasting control that attackers expect. Before iOS 9, breaking these security mechanisms was usually done by modifying the kernel's important data structures and the code logic of the security mechanisms. After iOS 9, however, the kernel integrity protection (KPP) mechanism was added to the 64-bit operating system, and none of the previous methods were adapted to the new versions of iOS [1]. But this does not mean that attackers cannot break through. Therefore, based on an analysis of the vulnerabilities of the KPP security mechanism, this paper implements two possible breakthrough methods against the kernel security mechanism for iOS 9 and iOS 10. Meanwhile, we propose a defense method based on kernel integrity detection and sensitive-API-call detection to defend against the breakthrough methods mentioned above. Experiments show that this method can prevent and detect attack attempts or intruders effectively and in a timely manner.

  8. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    NASA Astrophysics Data System (ADS)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open-source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and is investigated here. A statistical optimization approach is developed for this purpose that does not require much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve performance. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems, including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.
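
    For reference, the core (non-adaptive) pulse-compression operation is a matched filter, which FFT libraries of this kind accelerate; the NumPy sketch below uses an LFM chirp and a synthetic echo with illustrative parameters, and omits the adaptive per-range-cell filtering the paper investigates.

        # Baseline pulse compression: FFT-based matched filtering of an LFM chirp.
        import numpy as np

        fs, T, B = 10e6, 20e-6, 2e6                 # sample rate, pulse width, bandwidth
        t = np.arange(int(fs * T)) / fs
        chirp = np.exp(1j * np.pi * (B / T) * t**2) # LFM reference pulse

        rx = np.zeros(4096, complex)
        rx[1000:1000 + chirp.size] = 0.5 * chirp    # synthetic echo at bin 1000

        n = rx.size + chirp.size - 1
        compressed = np.fft.ifft(np.fft.fft(rx, n) *
                                 np.conj(np.fft.fft(chirp, n)))
        peak = np.argmax(np.abs(compressed))        # ~ bin 1000: target delay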

  9. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    PubMed

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Construction of non-Markovian coarse-grained models employing the Mori-Zwanzig formalism and iterative Boltzmann inversion

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Yuta; Li, Zhen; Kinefuchi, Ikuya; Karniadakis, George Em

    2017-12-01

    We propose a new coarse-grained (CG) molecular simulation technique based on the Mori-Zwanzig (MZ) formalism along with the iterative Boltzmann inversion (IBI). Non-Markovian dissipative particle dynamics (NMDPD) taking into account memory effects is derived in a pairwise interaction form from the MZ-guided generalized Langevin equation. It is based on the introduction of auxiliary variables that allow for the replacement of a non-Markovian equation with a Markovian one in a higher dimensional space. We demonstrate that the NMDPD model exploiting MZ-guided memory kernels can successfully reproduce the dynamic properties such as the mean square displacement and velocity autocorrelation function of a Lennard-Jones system, as long as the memory kernels are appropriately evaluated based on the Volterra integral equation using the force-velocity and velocity-velocity correlations. Furthermore, we find that the IBI correction of a pair CG potential significantly improves the representation of static properties characterized by a radial distribution function and pressure, while it has little influence on the dynamic processes. Our findings suggest that combining the advantages of both the MZ formalism and IBI leads to an accurate representation of both the static and dynamic properties of microscopic systems that exhibit non-Markovian behavior.
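
    Schematically, the generalized Langevin equation underlying such non-Markovian models has the form below (our transcription of the generic GLE, not the paper's pairwise NMDPD equations); F^C is the conservative force, K the memory kernel, and F^R the random force tied to K by the fluctuation-dissipation relation:

        M\,\dot{V}(t) \;=\; F^{C}(t) \;-\; \int_0^{t} K(t-s)\,V(s)\,ds \;+\; F^{R}(t),
        \qquad
        \langle F^{R}(t)\,F^{R}(t')\rangle \;\propto\; k_B T\, K(t-t') .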

  11. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

    We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For reasons of computational efficiency, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction in disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since kernels are stored on disk, this step can be repeated many times for different regularization parameters without the need to solve the forward problem again, making the approach amenable to Occam's method. Changes in the choice of misfit functional, the weighting of data and the selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion) which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern Fortran using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method with a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.

  12. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and the exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation most often used for this purpose at present. The method can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
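
    A minimal version of this construction is easy to reproduce: fix a geometric sequence of exponents and solve for the coefficients by linear least squares. In the Python sketch below, the target function and the values of b, c and N are illustrative stand-ins, not the report's tuned choices.

        # Least-squares fit of exponentials with geometrically spaced exponents:
        #   g(x) ~ sum_n a_n * exp(-b * c**n * x)
        import numpy as np

        x = np.linspace(0.01, 10.0, 400)
        target = 1.0 / np.sqrt(1.0 + x**2)          # stand-in algebraic factor

        b, c, N = 0.1, 1.6, 12                      # geometric exponent sequence
        exponents = b * c ** np.arange(N)
        design = np.exp(-np.outer(x, exponents))    # columns: exp(-b c^n x)

        coeffs, *_ = np.linalg.lstsq(design, target, rcond=None)
        approx = design @ coeffs
        max_err = np.max(np.abs(approx - target))   # uniform-error diagnostic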

  13. New numerical approximation of fractional derivative with non-local and non-singular kernel: Application to chaotic models

    NASA Astrophysics Data System (ADS)

    Toufik, Mekkaoui; Atangana, Abdon

    2017-10-01

    Recently, a new concept of fractional differentiation with a non-local and non-singular kernel was introduced in order to overcome the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme is developed for the newly established fractional differentiation. We present the error analysis in general. The new numerical scheme was applied to solve linear and non-linear fractional differential equations. The method does not need a predictor-corrector to yield an efficient algorithm. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges toward the exact solution very rapidly.

  14. Gluten-containing grains skew gluten assessment in oats due to sample grind non-homogeneity.

    PubMed

    Fritz, Ronald D; Chen, Yumin; Contreras, Veronica

    2017-02-01

    Oats are easily contaminated with gluten-rich kernels of wheat, rye and barley. These contaminants act like gluten 'pills', shown here to skew gluten analysis results. Using the R-Biopharm R5 ELISA, we quantified gluten in gluten-free oatmeal servings from an in-market survey. For samples with a 5-20 ppm reading on a first test, replicate analyses provided results ranging from <5 ppm to >160 ppm. This suggests sample grinding may inadequately disperse gluten to allow a single accurate gluten assessment. To ascertain this, and to characterize the distribution of 0.25 g gluten test results for kernel-contaminated oats, twelve 50 g samples of pure oats, each spiked with a wheat kernel, showed that 0.25 g test results followed log-normal-like distributions. With this, we estimate probabilities of mis-assessment for a 'single measure/sample' relative to the <20 ppm regulatory threshold, and derive an equation relating the probability of mis-assessment to the sample's average gluten content. Copyright © 2016 Elsevier Ltd. All rights reserved.
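
    To illustrate the kind of mis-assessment probability derived here, the Python sketch below evaluates the chance that a single 0.25 g test reads under the 20 ppm threshold when the true mean content is higher, assuming a log-normal spread of replicate results; the sigma value is an assumption for illustration, not the paper's fitted parameter.

        # P(single test < threshold) for a log-normal spread of replicate
        # results with a given arithmetic-mean gluten content (ppm).
        from scipy.stats import lognorm
        import numpy as np

        def p_single_test_passes(mean_ppm, sigma=1.0, threshold=20.0):
            # Choose mu so the log-normal has the requested arithmetic mean:
            # E[X] = exp(mu + sigma^2 / 2).
            mu = np.log(mean_ppm) - 0.5 * sigma**2
            return lognorm.cdf(threshold, s=sigma, scale=np.exp(mu))

        print(p_single_test_passes(40.0))   # chance a contaminated lot slips through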

  15. OPC modeling by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.

    2005-05-01

    Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. For model-based OPC, a lithographic model to predict critical dimensions after lithographic processing is needed. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist models) and discrete (kernel numbers) sets, traditional numerical optimization methods may have difficulty handling the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. In this way, good regression results were obtained with different sets of optical proximity effect data.

  16. On the self-similar solution to the Euler equations for an incompressible fluid in three dimensions

    NASA Astrophysics Data System (ADS)

    Pomeau, Yves

    2018-03-01

    The equations for a self-similar solution to an inviscid incompressible fluid are mapped into an integral equation that hopefully can be solved by iteration. It is argued that the exponents of the similarity are ruled by Kelvin's theorem of conservation of circulation. The end result is an iteration with a nonlinear term entering a kernel given by a 3D integral for a swirling flow, likely within reach of present-day computational power. Because of the slow decay of the similarity solution at large distances, its kinetic energy diverges, and some mathematical results excluding non-trivial solutions of the Euler equations in the self-similar case do not apply.

  17. Front propagation and clustering in the stochastic nonlocal Fisher equation

    NASA Astrophysics Data System (ADS)

    Ganan, Yehuda A.; Kessler, David A.

    2018-04-01

    In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction ranges and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system in which the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also to predict the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for a sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not exhibit this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.

  18. Front propagation and clustering in the stochastic nonlocal Fisher equation.

    PubMed

    Ganan, Yehuda A; Kessler, David A

    2018-04-01

    In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction ranges and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system in which the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also to predict the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for a sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not exhibit this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.

  19. Efficient exact-exchange time-dependent density-functional theory methods and their relation to time-dependent Hartree-Fock.

    PubMed

    Hesselmann, Andreas; Görling, Andreas

    2011-01-21

    A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.

  20. Exact calculation of the time convolutionless master equation generator: Application to the nonequilibrium resonant level model

    NASA Astrophysics Data System (ADS)

    Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran

    2015-12-01

    The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima-Zwanzig-Mori time-convolution (TC) formulation and the other on the Tokuyama-Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time-evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called "memory kernel" or "generator," going beyond second- or fourth-order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform; thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.

  1. ENDF/B-THERMOS; 30-group ENDF/B scattering kernels. [Auxiliary program written in FORTRAN IV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrosson, F.J.; Finch, D.R.

    These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from s(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code. To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library. The contents of the tapes are as follows:

        VOLUME I
          Material         ZA       Temperatures (degrees K)
          Molecular H2O    100.0    296, 350, 400, 450, 500, 600, 800, 1000
          Molecular D2O    101.0    296, 350, 400, 450, 500, 600, 800, 1000
          Graphite         6000.0   296, 400, 500, 600, 700, 800, 1000, 1200, 1600, 2000
          Polyethylene     205.0    296, 350
          Benzene          106.0    296, 350, 400, 450, 500, 600, 800, 1000

        VOLUME II
          Material           ZA       Temperatures (degrees K)
          Zr bound in ZrHx   203.0    296, 400, 500, 600, 700, 800, 1000, 1200
          H bound in ZrHx    230.0    296, 400, 500, 600, 700, 800, 1000, 1200
          Beryllium-9        4009.0   296, 400, 500, 600, 700, 800, 1000, 1200
          Beryllium Oxide    200.0    296, 400, 500, 600, 700, 800, 1000, 1200
          Uranium Dioxide    207.0    296, 400, 500, 600, 700, 800, 1000, 1200

    The auxiliary retrieval program is written in FORTRAN IV; it requires 1 tape drive and a small amount of high-speed core.

  2. Separation Kernel Protection Profile Revisited: Choices and Rationale

    DTIC Science & Technology

    2010-12-01

    …provide "the most stringent protection and rigorous security countermeasures" [IATF]. In other words, robustness is not the same as assurance. [IATF] Information Assurance Technical Framework, Chapter 4, Release 3.1, National Security Agency, September 2002.

  3. Task-driven imaging in cone-beam computed tomography.

    PubMed

    Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H

    Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing the most to the signal power associated with the imaging task. For example, detectability in a line-pair detection task was improved at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. This work demonstrates the potential of a task-driven imaging framework to improve image quality and reduce dose beyond what is achievable with conventional imaging approaches.

  4. Approach to atmospheric laser-propagation theory based on the extended Huygens-Fresnel principle and a self-consistency concept.

    PubMed

    Bochove, Erik J; Rao Gudimetla, V S

    2017-01-01

    We propose a self-consistency condition based on the extended Huygens-Fresnel principle, which we apply to the propagation kernel of the mutual coherence function of a partially coherent laser beam propagating through a turbulent atmosphere. The assumption of statistical independence of turbulence in neighboring propagation segments leads to an integral equation in the propagation kernel. This integral equation is satisfied by a Gaussian function, with dependence on the transverse coordinates that is identical to the previous Gaussian formulation by Yura [Appl. Opt. 11, 1399 (1972)], but differs in the transverse coherence length's dependence on propagation distance, so that this established version violates our self-consistency principle. Our formulation has one free parameter, which in the context of Kolmogorov's theory is independent of turbulence strength and propagation distance. We determined its value by numerical fitting to the rigorous beam propagation theory of Yura and Hanson [J. Opt. Soc. Am. A 6, 564 (1989)], demonstrating in addition a significant improvement over other Gaussian models.

  5. Modified homotopy perturbation method for solving hypersingular integral equations of the first kind.

    PubMed

    Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z

    2016-01-01

    A modified homotopy perturbation method (HPM) was used to solve hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of the inverse of the hypersingular integral operator leads to the convergence of HPM in certain cases. The modified HPM and its norm convergence are obtained in Hilbert space. Comparisons among the modified HPM, the standard HPM, the Bernstein polynomial approach of Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), the Chebyshev expansion method of Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011), and the reproducing kernel method of Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM dominates the standard HPM and the others. Finally, it is found that the modified HPM is exact if the solution of the problem is a product of weights and polynomial functions. For rational solutions the absolute error decreases very quickly as the number of collocation points increases.

  6. On the solution of integral equations with a generalized Cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    A certain class of singular integral equations that may arise from mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that, in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement, and consequently the kernel has strong singularities of the form (t-x)^(-2) and x^(n-2)(t+x)^(-n), (n >= 2, 0 < x, t < b). The complex function theory is used to determine the fundamental function of the problem for the general case, and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.

  7. Stochastic modeling of stock price process induced from the conjugate heat equation

    NASA Astrophysics Data System (ADS)

    Paeng, Seong-Hun

    2015-02-01

    Currency can be considered as a ruler for values of commodities. Then the price is the measured value by the ruler. We can suppose that inflation and variation of exchange rate are caused by variation of the scale of the ruler. In geometry, variation of the scale means that the metric is time-dependent. The conjugate heat equation is the modified heat equation which satisfies the heat conservation law for the time-dependent metric space. We propose a new model of stock prices by using the stochastic process whose transition probability is determined by the kernel of the conjugate heat equation. Our model of stock prices shows how the volatility term is affected by inflation and exchange rate. This model modifies the Black-Scholes equation in light of inflation and exchange rate.

  8. Atomic theory of viscoelastic response and memory effects in metallic glasses

    NASA Astrophysics Data System (ADS)

    Cui, Bingyu; Yang, Jie; Qiao, Jichao; Jiang, Minqiang; Dai, Lanhong; Wang, Yun-Jiang; Zaccone, Alessio

    2017-09-01

    An atomic-scale theory of the viscoelastic response of metallic glasses is derived from first principles, using a Zwanzig-Caldeira-Leggett system-bath Hamiltonian as a starting point within the framework of nonaffine linear response to mechanical deformation. This approach provides a generalized Langevin equation (GLE) as the average equation of motion for an atom or ion in the material, from which non-Markovian nonaffine viscoelastic moduli are extracted. These can be evaluated using the vibrational density of states (DOS) as input, where the boson peak plays a prominent role in the mechanics. To compare with experimental data for binary ZrCu alloys, a numerical DOS was obtained from simulations of this system, which also take electronic degrees of freedom into account via the embedded-atom method for the interatomic potential. It is shown that the viscoelastic α-relaxation, including the α-wing asymmetry in the loss modulus, can be very well described by the theory if the memory kernel (the non-Markovian friction) in the GLE is taken to be a stretched-exponential decaying function of time. This finding directly implies strong memory effects in the atomic-scale dynamics and suggests that the α-relaxation time is related to the characteristic time scale over which atoms retain memory of their previous collision history. This memory time grows dramatically below the glass transition.

  9. Transient and asymptotic behaviour of the binary breakage problem

    NASA Astrophysics Data System (ADS)

    Mantzaris, Nikos V.

    2005-06-01

    The general binary breakage problem with power-law breakage functions and two families of symmetric and asymmetric breakage kernels is studied in this work. A useful transformation leads to an equation that predicts self-similar solutions in its asymptotic limit and offers explicit knowledge of the mean size and particle density at each point in dimensionless time. A novel moving boundary algorithm in the transformed coordinate system is developed, allowing the accurate prediction of the full transient behaviour of the system from the initial condition up to the point where self-similarity is achieved, and beyond if necessary. The numerical algorithm is very rapid and its results are in excellent agreement with known analytical solutions. In the case of the symmetric breakage kernels only unimodal, self-similar number density functions are obtained asymptotically for all parameter values and independent of the initial conditions, while in the case of asymmetric breakage kernels, bimodality appears for high degrees of asymmetry and sharp breakage functions. For symmetric and discrete breakage kernels, self-similarity is not achieved. The solution exhibits sustained oscillations with amplitude that depends on the initial condition and the sharpness of the breakage mechanism, while the period is always fixed and equal to ln 2 with respect to dimensionless time.
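
    For orientation, the linear binary breakage population balance studied here can be written as follows (our transcription of the standard form; S is the breakage rate kernel and b the breakage function, normalized so that each event produces two fragments):

        \frac{\partial n(v,t)}{\partial t} \;=\;
        \int_v^{\infty} S(w)\, b(v \mid w)\, n(w,t)\, dw \;-\; S(v)\, n(v,t),
        \qquad
        \int_0^{w} b(v \mid w)\, dv = 2 .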

  10. Analysis of the power flow in nonlinear oscillators driven by random excitation using the first Wiener kernel

    NASA Astrophysics Data System (ADS)

    Hawes, D. H.; Langley, R. S.

    2018-01-01

    Random excitation of mechanical systems occurs in a wide variety of structures and, in some applications, calculation of the power dissipated by such a system will be of interest. In this paper, using the Wiener series, a general methodology is developed for calculating the power dissipated by a general nonlinear multi-degree-of-freedom oscillatory system excited by random Gaussian base motion of any spectrum. The Wiener series method is most commonly applied to systems with white noise inputs, but can be extended to encompass a general non-white input. From the extended series a simple expression for the power dissipated can be derived in terms of the first term, or kernel, of the series and the spectrum of the input. Calculation of the first kernel can be performed either via numerical simulations or from experimental data, and a useful property of the kernel, namely that the integral over its frequency domain representation is proportional to the oscillating mass, is derived. The resulting equations offer a simple conceptual analysis of the power flow in nonlinear randomly excited systems and hence assist the design of any system where power dissipation is a consideration. The results are validated both numerically and experimentally using a base-excited cantilever beam with a nonlinear restoring force produced by magnets.
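
    For a white Gaussian input, the first Wiener kernel can be estimated by the classical Lee-Schetzen cross-correlation formula, k1(tau) = E[y(t) x(t-tau)] / sigma_x^2. The Python sketch below uses a synthetic linear system so the recovered kernel can be checked against the known impulse response; it illustrates the estimator, not the paper's power-flow derivation.

        # Lee-Schetzen estimate of the first Wiener kernel by cross-correlation.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 200_000
        x = rng.normal(0.0, 1.0, n)                   # white Gaussian excitation

        h = np.exp(-0.05 * np.arange(100)) * 0.05     # 'true' impulse response
        y = np.convolve(x, h)[:n]                     # synthetic system output

        lags = 100
        k1 = np.array([np.dot(y[tau:], x[:n - tau]) for tau in range(lags)])
        k1 /= (n - np.arange(lags)) * x.var()         # ~ h for a linear system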

  11. Predicting activity approach based on new atoms similarity kernel function.

    PubMed

    Abu El-Atta, Ahmed H; Moussa, M I; Hassanien, Aboul Ella

    2015-07-01

    Drug design is a high-cost and long-term process. To reduce the time and cost of drug discovery, new techniques are needed. The chemoinformatics field applies informational techniques from computer science, such as machine learning and graph theory, to discover properties of chemical compounds, such as toxicity or biological activity, by analyzing their molecular structure (molecular graph). There is thus an increasing need for algorithms that analyze and classify graph data to predict the activity of molecules. Kernel methods provide a powerful framework which combines machine learning with graph-theoretic techniques, and they have led to impressive performance in several chemoinformatics problems such as biological activity prediction. This paper presents a new approach based on kernel functions to solve the activity prediction problem for chemical compounds. First, we encode all atoms depending on their neighbors; then we use these codes to find relationships among the atoms. We then use the relations between different atoms to compute the similarity between chemical compounds. The proposed approach was compared with many other classification methods and the results show competitive accuracy with these methods. Copyright © 2015 Elsevier Inc. All rights reserved.
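
    A toy rendition of the neighbour-coding idea is sketched below in Python: each atom is coded by its element and the sorted multiset of its neighbours' elements, and two molecules are compared by an intersection kernel over those codes. The molecules, helper names, and the normalization are illustrative assumptions, not the paper's kernel; real chemoinformatics work would use a toolkit such as RDKit.

        # Toy neighbour-code similarity kernel for molecular graphs given as
        # ({atom_id: element}, {atom_id: [neighbour ids]}) pairs.
        from collections import Counter

        def atom_codes(elements, adjacency):
            return Counter(
                (elements[a], tuple(sorted(elements[b] for b in nbrs)))
                for a, nbrs in adjacency.items()
            )

        def similarity_kernel(mol1, mol2):
            c1, c2 = atom_codes(*mol1), atom_codes(*mol2)
            shared = sum((c1 & c2).values())          # matched atom codes
            return shared / max(sum(c1.values()), sum(c2.values()))

        ethanol = ({0: 'C', 1: 'C', 2: 'O'}, {0: [1], 1: [0, 2], 2: [1]})
        methanol = ({0: 'C', 1: 'O'}, {0: [1], 1: [0]})
        print(similarity_kernel(ethanol, methanol))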

  12. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    NASA Astrophysics Data System (ADS)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which the performance is heavily dependent on the feature selection techniques, like the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders, which progresses slowly while affecting the quality of life dramatically. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.

  13. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but that they must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts, which depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
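
    A toy first-order version of the Laguerre expansion step might look as follows (the basis scaling, memory length, and discretization are illustrative assumptions, not the paper's choices):

      # Estimate a first-order kernel from a spike-train input x and output y by
      # least squares on convolutions with sampled Laguerre functions.
      import numpy as np
      from scipy.special import eval_laguerre

      def laguerre_basis(n_funcs, length, scale=0.05):
          t = scale * np.arange(length)
          return np.array([np.exp(-t / 2) * eval_laguerre(j, t)
                           for j in range(n_funcs)])

      def fit_first_kernel(x, y, n_funcs=8, memory=200):
          B = laguerre_basis(n_funcs, memory)
          V = np.array([np.convolve(x, b)[: len(x)] for b in B]).T
          coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
          return coeffs @ B   # kernel as a weighted sum of basis functions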

  14. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    PubMed Central

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Such features might therefore not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  15. Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity

    PubMed Central

    Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and, by utilizing time-series kernels, we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification, KSS had 10% higher accuracy, as measured by F1 score, than kernel SVM methods. PMID:27830214

  16. Oversampling the Minority Class in the Feature Space.

    PubMed

    Perez-Ortiz, Maria; Gutierrez, Pedro Antonio; Tino, Peter; Hervas-Martinez, Cesar

    2016-09-01

    The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combination of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to input space). If the kernel function matches the underlying problem, the classes will be linearly separable and synthetically generated patterns will lie on the minority class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS) (a Euclidean space isomorphic to the feature space) for oversampling purposes. The proposed method is framed in the context of support vector machines, where the imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
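
    Since the feature space is reached only through the kernel matrix, the empirical feature space construction and the oversampling step can be sketched directly (a minimal version; the paper's SVM integration and preferential schemes are omitted):

      # Map training points into the empirical feature space (EFS) via the
      # eigendecomposition of the kernel matrix, then synthesize minority
      # samples by convex combinations of minority points in the EFS.
      import numpy as np

      def empirical_feature_space(K):
          w, V = np.linalg.eigh(K)
          keep = w > 1e-10
          return V[:, keep] * np.sqrt(w[keep])   # rows = EFS coordinates

      def oversample_minority(K, y, minority, n_new, seed=0):
          rng = np.random.default_rng(seed)
          Z = empirical_feature_space(K)[y == minority]
          out = []
          for _ in range(n_new):
              i, j = rng.choice(len(Z), size=2, replace=False)
              lam = rng.random()
              out.append(lam * Z[i] + (1 - lam) * Z[j])
          return np.array(out)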

  17. A UML profile for framework modeling.

    PubMed

    Xu, Xiao-liang; Wang, Le-yu; Zhou, Hong

    2004-01-01

    The current standard Unified Modeling Language (UML) cannot adequately model framework flexibility and extendability, due to the lack of appropriate constructs to distinguish framework hot-spots from kernel elements. A new UML profile that customizes UML for framework modeling was presented using the extension mechanisms of UML, providing a group of UML extensions to meet the needs of framework modeling. In this profile, extended class diagrams and sequence diagrams were defined to straightforwardly identify the hot-spots and describe their instantiation restrictions. A transformation model based on design patterns was also put forward, such that the profile-based framework design diagrams could be automatically mapped to the corresponding implementation diagrams. It was proved that the presented profile makes framework modeling more straightforward and therefore easier to understand and instantiate.

  18. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    PubMed

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work the basis functions are set to be Gaussian, with means, variances, and covariances governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs), depending on which phase-space variables are approximated by Gaussian functions. Three sample problems of a univariate double-well potential, a bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD model is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD model well predicts the stationary PDF, for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both of these problems the least-squares approximation is made on all phase-space variables, resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable, leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, very good performance by LSQKD is observed for a wide range of diffusivities.

  19. An Agent-Based Modeling Framework and Application for the Generic Nuclear Fuel Cycle

    NASA Astrophysics Data System (ADS)

    Gidden, Matthew J.

    Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent, isotopic-quality based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.
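
    The reduction of an exchange to a transportation problem can be illustrated with a toy single-commodity linear program (the DRE itself is multicommodity and embeds richer preference models; the names below are ours):

      # Match fuel suppliers to consumers at minimum preference cost.
      import numpy as np
      from scipy.optimize import linprog

      cost = np.array([[1.0, 4.0], [3.0, 2.0]])    # supplier x consumer
      supply = np.array([10.0, 8.0])
      demand = np.array([7.0, 11.0])

      n_s, n_c = cost.shape
      A_sup = np.kron(np.eye(n_s), np.ones(n_c))   # row sums <= supply
      A_dem = np.kron(np.ones(n_s), np.eye(n_c))   # col sums >= demand
      res = linprog(cost.ravel(),
                    A_ub=np.vstack([A_sup, -A_dem]),
                    b_ub=np.concatenate([supply, -demand]),
                    bounds=(0, None))
      print(res.x.reshape(n_s, n_c))               # shipped quantities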

  20. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202

  1. A high performance data parallel tensor contraction framework: Application to coupled electro-mechanics

    NASA Astrophysics Data System (ADS)

    Poya, Roman; Gil, Antonio J.; Ortigosa, Rogelio

    2017-07-01

    The paper presents aspects of implementation of a new high performance tensor contraction framework for the numerical analysis of coupled and multi-physics problems on streaming architectures. In addition to explicit SIMD instructions and smart expression templates, the framework introduces domain specific constructs for the tensor cross product and its associated algebra recently rediscovered by Bonet et al. (2015, 2016) in the context of solid mechanics. The two key ingredients of the presented expression template engine are as follows. First, the capability to mathematically transform complex chains of operations to simpler equivalent expressions, while potentially avoiding routes with higher levels of computational complexity and, second, to perform a compile time depth-first or breadth-first search to find the optimal contraction indices of a large tensor network in order to minimise the number of floating point operations. For optimisations of tensor contraction such as loop transformation, loop fusion and data locality optimisations, the framework relies heavily on compile time technologies rather than source-to-source translation or JIT techniques. Every aspect of the framework is examined through relevant performance benchmarks, including the impact of data parallelism on the performance of isomorphic and nonisomorphic tensor products, the FLOP and memory I/O optimality in the evaluation of tensor networks, the compilation cost and memory footprint of the framework and the performance of tensor cross product kernels. The framework is then applied to finite element analysis of coupled electro-mechanical problems to assess the speed-ups achieved in kernel-based numerical integration of complex electroelastic energy functionals. In this context, domain-aware expression templates combined with SIMD instructions are shown to provide a significant speed-up over the classical low-level style programming techniques.
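
    The compile-time contraction-order search has a convenient runtime analogue in NumPy, which can help build intuition for what the framework optimizes (this is only an analogy; the paper's engine works at C++ compile time):

      # einsum_path searches for a FLOP-minimizing contraction order
      # of a tensor network, reporting naive vs. optimized cost.
      import numpy as np

      A = np.random.rand(20, 30)
      B = np.random.rand(30, 40, 5)
      C = np.random.rand(5, 20)

      path, info = np.einsum_path("ij,jkl,li->k", A, B, C, optimize="optimal")
      print(info)                                   # cost report
      out = np.einsum("ij,jkl,li->k", A, B, C, optimize=path)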

  2. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    PubMed Central

    Zhao, Xin; Cheung, Leo Wang-Kit

    2007-01-01

    Background Designing appropriate machine learning methods for identifying genes that have a significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at the genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which however are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods usually also bring in false positive significant features more easily. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large, which leads to problems of numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have a couple of critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potential to achieve this goal. Results A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences. Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound not only in the case with a linear Bayesian classifier but also in the case with a very non-linear Bayesian classifier. This sheds light on its broader usability for microarray data analysis problems, especially those for which linear methods work awkwardly. The KIGP was also applied to four published microarray datasets, and the results showed that the KIGP performed better than, or at least as well as, the referenced state-of-the-art methods in all of these cases. Conclusion Mathematically built on the kernel-induced feature space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach to explore both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates the model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently. PMID:17328811

  3. Asymptotic stability of a nonlinear Korteweg-de Vries equation with critical lengths

    NASA Astrophysics Data System (ADS)

    Chu, Jixun; Coron, Jean-Michel; Shang, Peipei

    2015-10-01

    We study an initial-boundary-value problem for a nonlinear Korteweg-de Vries equation posed on the finite interval (0, 2kπ), where k is a positive integer. The system has a Dirichlet boundary condition at the left end-point, and homogeneous Dirichlet and Neumann boundary conditions at the right end-point. It is known that the origin is not asymptotically stable for the linearized system around the origin. We prove that the origin is (locally) asymptotically stable for the nonlinear system if the integer k is such that the kernel of the linear Korteweg-de Vries stationary equation is of dimension 1. This is, for example, the case if k = 1.

  4. Exact solutions of the population balance equation including particle transport, using group analysis

    NASA Astrophysics Data System (ADS)

    Lin, Fubiao; Meleshko, Sergey V.; Flood, Adrian E.

    2018-06-01

    The population balance equation (PBE) has received an unprecedented amount of attention in recent years from both academics and industrial practitioners because of its long history, widespread use in engineering, and applicability to a wide variety of particulate and discrete-phase processes. However, it is typically impossible to obtain analytical solutions, although in almost every case a numerical solution of the PBEs can be obtained. In this article, the symmetries of PBEs with homogeneous coagulation kernels involving aggregation, breakage and growth processes and particle transport in one dimension are found by directly solving the determining equations. Using the optimal system of one- and two-dimensional subalgebras, all invariant solutions and reduced equations are obtained. In particular, an explicit analytical physical solution is also presented.

  5. Graph embedding and extensions: a general framework for dimensionality reduction.

    PubMed

    Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen

    2007-01-01

    Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions.
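
    The unifying recipe can be condensed into a few lines (a sketch with our own regularization choice): build the intrinsic and penalty affinity matrices, form their Laplacians, and solve a generalized eigenproblem for the projection.

      # Linear graph embedding: minimize v'X L X'v subject to v'X Lp X'v = 1.
      import numpy as np
      from scipy.linalg import eigh

      def graph_embedding(X, W, Wp, n_dims):
          """X: d x n data; W, Wp: n x n intrinsic / penalty affinities."""
          L = np.diag(W.sum(1)) - W
          Lp = np.diag(Wp.sum(1)) - Wp
          A = X @ L @ X.T
          B = X @ Lp @ X.T + 1e-8 * np.eye(X.shape[0])  # small ridge for stability
          w, V = eigh(A, B)
          return V[:, :n_dims]    # directions with smallest intrinsic cost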

  6. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Qiang, E-mail: qd2125@columbia.edu; Yang, Jiang, E-mail: jyanghkbu@gmail.com

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space, so the main computational challenge is the accurate and fast evaluation of their eigenvalues, or Fourier symbols, which consist of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and as solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high-order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables with fourth-order exponential time differencing Runge–Kutta temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
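
    The spectral structure being exploited is easy to demonstrate in one dimension (a naive sketch: the symbol quadrature below is exactly the slow part the paper replaces with its fast hybrid evaluation):

      # Apply a periodic nonlocal diffusion operator via FFT and its symbols.
      import numpy as np

      N, L, delta = 256, 2 * np.pi, 0.5
      x = np.linspace(0, L, N, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

      def symbol(kk, n_quad=2000):
          """lambda(k) = 2 * int_0^delta (cos(k s) - 1) gamma(s) ds."""
          s = np.linspace(1e-6, delta, n_quad)
          gamma = 3.0 / delta**3 * np.ones_like(s)   # toy constant kernel
          return 2 * np.trapz((np.cos(kk * s) - 1) * gamma, s)

      lam = np.array([symbol(kk) for kk in k])
      u = np.sin(3 * x)
      Lu = np.real(np.fft.ifft(lam * np.fft.fft(u)))  # operator applied to u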

  7. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    NASA Astrophysics Data System (ADS)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general-purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs to disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving the hybrid model equations, are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an existing, well-tested hybrid model of plasma that runs in parallel on multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for Faraday's equation, resulting in an explicit-implicit scheme for the hybrid model equations. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.

  8. Kinetics of molecular transitions with dynamic disorder in single-molecule pulling experiments

    NASA Astrophysics Data System (ADS)

    Zheng, Yue; Li, Ping; Zhao, Nanrong; Hou, Zhonghuai

    2013-05-01

    Macromolecular transitions are subject to large fluctuations of the rate constant, termed dynamic disorder. The individual or intrinsic transition rates and activation free energies can be extracted from single-molecule pulling experiments. Here we present a theoretical framework, based on a generalized Langevin equation with fractional Gaussian noise and a power-law memory kernel, to study the kinetics of macromolecular transitions and to address the effects of dynamic disorder on barrier-crossing kinetics under external pulling force. By using Kramers' rate theory, we have calculated the fluctuating rate constant of the molecular transition, as well as experimentally accessible quantities such as the force-dependent mean lifetime, the rupture force distribution, and the speed-dependent mean rupture force. Particular attention is paid to the discrepancies between the kinetics with and without dynamic disorder. We demonstrate that these discrepancies show strong and nontrivial dependence on the external force or the pulling speed, as well as the barrier height of the potential of mean force. Our results suggest that dynamic disorder is an important factor that should be properly taken into account in accurate interpretations of single-molecule pulling experiments.
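
    In schematic form (our notation; exponents and normalizations as in the paper), the framework is a generalized Langevin equation whose memory kernel and noise satisfy the fluctuation-dissipation theorem:

      m\ddot{x}(t) = -U'(x) + F_{\mathrm{ext}}(t) - \int_0^t K(t-t')\,\dot{x}(t')\,dt' + \xi(t),
      K(t) \propto t^{-\gamma}, \qquad \langle \xi(t)\,\xi(t') \rangle = k_B T\, K(|t-t'|),

    with the fluctuating rate constant then following from Kramers' theory applied to this equation.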

  9. A model of recovering the parameters of fast nonlocal heat transport in magnetic fusion plasmas

    NASA Astrophysics Data System (ADS)

    Kukushkin, A. B.; Kulichenko, A. A.; Sdvizhenskii, P. A.; Sokolov, A. V.; Voloshinov, V. V.

    2017-12-01

    A model is elaborated for interpreting the initial stage of the fast nonlocal transport events, which exhibit immediate response, in the diffusion time scale, of the spatial profile of electron temperature to its local perturbation, while the net heat flux is directed opposite to ordinary diffusion (i.e. along the temperature gradient). We solve the inverse problem of recovering the kernel of the integral equation, which describes nonlocal (superdiffusive) transport of energy due to emission and absorption of electromagnetic (EM) waves with long free path and strong reflection from the vacuum vessel’s wall. To allow for the errors of experimental data, we use the method based on the regularized (in the framework of an ill-posed problem, using the parametric models) approximation of available experimental data. The model is applied to interpreting the data from stellarator LHD and tokamak TFTR. The EM wave transport is considered here in the single-group approximation, however the limitations of the physics model enable us to identify the spectral range of the EM waves which might be responsible for the observed phenomenon.

  10. Nonlinear Fano interferences in open quantum systems: An exactly solvable model

    NASA Astrophysics Data System (ADS)

    Finkelstein-Shapiro, Daniel; Calatayud, Monica; Atabek, Osman; Mujica, Vladimiro; Keller, Arne

    2016-06-01

    We obtain an explicit solution for the stationary-state populations of a dissipative Fano model, where a discrete excited state is coupled to a continuum set of states; both excited sets of states are reachable by photoexcitation from the ground state. The dissipative dynamic is described by a Liouville equation in Lindblad form and the field intensity can take arbitrary values within the model. We show that the population of the continuum states as a function of laser frequency can always be expressed as a Fano profile plus a Lorentzian function with effective parameters whose explicit expressions are given in the case of a closed system coupled to a bath as well as for the original Fano scattering framework. Although the solution is intricate, it can be elegantly expressed as a linear transformation of the kernel of a 4×4 matrix which has the meaning of an effective Liouvillian. We unveil key notable processes related to the optical nonlinearity and which had not been reported to date: electromagnetic-induced transparency, population inversions, power narrowing and broadening, as well as an effective reduction of the Fano asymmetry parameter.
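
    For orientation, the line shape stated above has the schematic form (the explicit effective parameters are derived in the paper; the profiles themselves are the standard ones):

      P(\epsilon) \propto \frac{(q_{\mathrm{eff}} + \epsilon)^2}{1 + \epsilon^2} + \frac{A}{1 + \epsilon^2}, \qquad \epsilon = \frac{2(\omega - \omega_{\mathrm{eff}})}{\Gamma_{\mathrm{eff}}},

    i.e., a Fano profile plus a Lorentzian sharing the same reduced detuning.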

  11. On the Relativistic Separable Functions for the Breakup Reactions

    NASA Astrophysics Data System (ADS)

    Bondarenko, Serge G.; Burov, Valery V.; Rogochaya, Elena P.

    2018-02-01

    In the paper the so-called modified Yamaguchi function for the Bethe-Salpeter equation with a separable kernel is discussed. The type of the functions is defined by the analytic structure of the hadron current with breakup, i.e., reactions with an interacting nucleon-nucleon pair in the final state (electro-, photo-, and nucleon-disintegration of the deuteron).

  12. Use of Continuous Exponential Families to Link Forms via Anchor Tests. Research Report. ETS RR-11-11

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Yan, Duanli

    2011-01-01

    Continuous exponential families are applied to linking test forms via an internal anchor. This application combines work on continuous exponential families for single-group designs and work on continuous exponential families for equivalent-group designs. Results are compared to those for kernel and equipercentile equating in the case of chained…

  13. On nonsingular potentials of Cox-Thompson inversion scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmai, Tamas; Apagyi, Barnabas

    2010-02-15

    We establish a condition for obtaining nonsingular potentials using the Cox-Thompson inverse scattering method with one phase shift. The anomalous singularities of the potentials are avoided by maintaining unique solutions of the underlying Regge-Newton integral equation for the transformation kernel. As a by-product, new inequality sequences of zeros of Bessel functions are discovered.

  14. Effective quadrature formula in solving linear integro-differential equations of order two

    NASA Astrophysics Data System (ADS)

    Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.

    2017-08-01

    In this note, we approximately solve a general form of Fredholm-Volterra integro-differential equations (IDEs) of order two with boundary conditions, and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind by using standard integration techniques and the identity between multiple and single integrals; truncated Legendre series are then used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, and the collocation points are chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and Gaussian elimination is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and dominates others in many cases. A general theory of the existence of the solution is also discussed.
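
    The quadrature ingredient named above is standard and compact (collocation and the reduction of the IDE itself are omitted in this sketch):

      # Gauss-Legendre quadrature for the kernel integrals over [a, b].
      import numpy as np

      nodes, weights = np.polynomial.legendre.leggauss(8)   # on [-1, 1]

      def integrate(f, a, b):
          t = 0.5 * (b - a) * nodes + 0.5 * (b + a)         # map to [a, b]
          return 0.5 * (b - a) * np.dot(weights, f(t))

      print(integrate(np.exp, 0.0, 1.0))                    # ~ e - 1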

  15. Roy-Steiner equations for pion-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Ditsche, C.; Hoferichter, M.; Kubis, B.; Meißner, U.-G.

    2012-06-01

    Starting from hyperbolic dispersion relations, we derive a closed system of Roy-Steiner equations for pion-nucleon scattering that respects analyticity, unitarity, and crossing symmetry. We work out analytically all kernel functions and unitarity relations required for the lowest partial waves. In order to suppress the dependence on the high-energy regime we also consider once- and twice-subtracted versions of the equations, where we identify the subtraction constants with subthreshold parameters. Assuming Mandelstam analyticity we determine the maximal range of validity of these equations. As a first step towards the solution of the full system we cast the equations for the ππ → N̄N partial waves into the form of a Muskhelishvili-Omnès problem with finite matching point, which we solve numerically in the single-channel approximation. We investigate in detail the role of individual contributions to our solutions and discuss some consequences for the spectral functions of the nucleon electromagnetic form factors.

  16. Study on monostable and bistable reaction-diffusion equations by iteration of travelling wave maps

    NASA Astrophysics Data System (ADS)

    Yi, Taishan; Chen, Yuming

    2017-12-01

    In this paper, based on the iterative properties of travelling wave maps, we develop a new method to obtain spreading speeds and asymptotic propagation for monostable and bistable reaction-diffusion equations. Precisely, for Dirichlet problems of monostable reaction-diffusion equations on the half line, by making links between travelling wave maps and integral operators associated with the Dirichlet diffusion kernel (the latter is NOT invariant under translation), we obtain some iteration properties of the Dirichlet diffusion and some a priori estimates on nontrivial solutions of Dirichlet problems under travelling wave transformation. We then provide the asymptotic behavior of nontrivial solutions in the space-time region for Dirichlet problems. These enable us to develop a unified method to obtain results on heterogeneous steady states, travelling waves, spreading speeds, and asymptotic spreading behavior for Dirichlet problem of monostable reaction-diffusion equations on R+ as well as of monostable/bistable reaction-diffusion equations on R.
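
    The Dirichlet diffusion kernel referred to above is the classical method-of-images kernel on the half line,

      \Gamma_D(t; x, y) = \frac{1}{\sqrt{4\pi t}}\left[ e^{-(x-y)^2/4t} - e^{-(x+y)^2/4t} \right], \qquad x, y > 0,

    which is manifestly not translation invariant; this is precisely why the a priori estimates under the travelling wave transformation are needed.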

  17. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    PubMed

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
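
    The enabling idea, a low-rank approximation to the nuisance kernel matrices, can be sketched in a few lines (illustrative only; the published fastKM algorithm specifies how the approximation enters the test statistics):

      # Rank-r eigen-approximation K ~ U U^T, cutting solve costs from
      # O(n^3) toward O(n r^2).
      import numpy as np

      def low_rank_factor(K, r):
          w, V = np.linalg.eigh(K)
          top = np.argsort(w)[::-1][:r]       # r largest eigenvalues
          return V[:, top] * np.sqrt(np.clip(w[top], 0, None))

      X = np.random.rand(500, 20)
      K = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
      U = low_rank_factor(K, r=30)
      print(np.linalg.norm(K - U @ U.T) / np.linalg.norm(K))  # relative error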

  18. Multiple Kernel Learning with Random Effects for Predicting Longitudinal Outcomes and Data Integration

    PubMed Central

    Chen, Tianle; Zeng, Donglin

    2015-01-01

    Summary Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's Disease (Alzheimer's Disease Neuroimaging Initiative, ADNI) where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419

  19. An Immersed Boundary method with divergence-free velocity interpolation and force spreading

    NASA Astrophysics Data System (ADS)

    Bao, Yuanxun; Donev, Aleksandar; Griffith, Boyce E.; McQueen, David M.; Peskin, Charles S.

    2017-10-01

    The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and three spatial dimensions that this new IB method is able to achieve substantial improvement in volume conservation compared to other existing IB methods, at the expense of a modest increase in the computational cost. Further, the new method provides smoother Lagrangian forces (tractions) than traditional IB methods. The method presented here is restricted to periodic computational domains. Its generalization to non-periodic domains is important future work.
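
    In semi-discrete form, with δ_h a regularized delta kernel, the two transforms discussed above are

      U(X, t) = \int u(x, t)\, \delta_h(x - X)\, dx, \qquad f(x, t) = \sum_q F_q(t)\, \delta_h(x - X_q(t)),

    and the paper's requirement is that the spreading operator be the adjoint of a velocity-interpolation operator whose output is at least C^1 and divergence-free.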

  20. Continuity properties of the semi-group and its integral kernel in non-relativistic QED

    NASA Astrophysics Data System (ADS)

    Matte, Oliver

    2016-07-01

    Employing recent results on stochastic differential equations associated with the standard model of non-relativistic quantum electrodynamics by B. Güneysu, J. S. Møller, and the present author, we study the continuity of the corresponding semi-group between weighted vector-valued Lp-spaces, continuity properties of elements in the range of the semi-group, and the pointwise continuity of an operator-valued semi-group kernel. We further discuss the continuous dependence of the semi-group and its integral kernel on model parameters. All these results are obtained for Kato decomposable electrostatic potentials and the actual assumptions on the model are general enough to cover the Nelson model as well. As a corollary, we obtain some new pointwise exponential decay and continuity results on elements of low-energetic spectral subspaces of atoms or molecules that also take spin into account. In a simpler situation where spin is neglected, we explain how to verify the joint continuity of positive ground state eigenvectors with respect to spatial coordinates and model parameters. There are no smallness assumptions imposed on any model parameter.

  1. Exploring the Brighter-fatter Effect with the Hyper Suprime-Cam

    NASA Astrophysics Data System (ADS)

    Coulton, William R.; Armstrong, Robert; Smith, Kendrick M.; Lupton, Robert H.; Spergel, David N.

    2018-06-01

    The brighter-fatter effect has been postulated to arise due to the build up of a transverse electric field, produced as photocharges accumulate in the pixels’ potential wells. We investigate the brighter-fatter effect in the Hyper Suprime-Cam by examining flat fields and moments of stars. We observe deviations from the expected linear relation in the photon transfer curve (PTC), luminosity-dependent correlations between pixels in flat-field images, and a luminosity-dependent point-spread function (PSF) in stellar observations. Under the key assumptions of translation invariance and Maxwell’s equations in the quasi-static limit, we give a first-principles proof that the effect can be parameterized by a translationally invariant scalar kernel. We describe how this kernel can be estimated from flat fields and discuss how this kernel has been used to remove the brighter-fatter distortions in Hyper Suprime-Cam images. We find that our correction restores the expected linear relation in the PTCs and significantly reduces, but does not completely remove, the luminosity dependence of the PSF over a wide range of magnitudes.
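
    The flat-field side of such an analysis reduces to tracking variances and neighbor covariances against signal level (a hedged sketch with our own estimator choices; the paper's kernel estimation builds on these statistics):

      # PTC point and nearest-neighbor covariance from a pair of equal flats.
      import numpy as np

      def ptc_point(flat1, flat2):
          diff = flat1.astype(float) - flat2.astype(float)  # cancels fixed pattern
          mean = 0.5 * (flat1.mean() + flat2.mean())
          var = 0.5 * diff.var()                            # per-image variance
          cov_x = 0.5 * np.mean(diff[:, :-1] * diff[:, 1:]) # horizontal neighbor
          return mean, var, cov_x

      # Brighter-fatter signature: var grows sub-linearly with mean while the
      # neighbor covariance grows roughly quadratically; the scalar kernel is
      # then estimated from the full set of such covariances.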

  2. A Configuration Framework and Implementation for the Least Privilege Separation Kernel

    DTIC Science & Technology

    2010-12-01

  3. A Framework for the Ethical Practice of Action Learning

    ERIC Educational Resources Information Center

    Johnson, Craig

    2010-01-01

    By tradition the action learning community has encouraged an eclectic view of practice. This involves a number of different permutations around a kernel of nebulous ideas. However, the disadvantages of such an open philosophy have never been considered. In particular consumer protection against inauthentic action learning experiences has been…

  4. Nanosurveyor: a framework for real-time data processing

    DOE PAGES

    Daurer, Benedikt J.; Krishnan, Hari; Perciano, Talita; ...

    2017-01-31

    Background: The ever-improving brightness of accelerator-based sources is enabling novel observations and discoveries with faster frame rates, larger fields of view, higher resolution, and higher dimensionality. Results: Here we present an integrated software/algorithmic framework designed to capitalize on high-throughput experiments through efficient kernels and load-balanced workflows that are scalable in design. We describe the streamlined processing pipeline for ptychography data analysis. Conclusions: The pipeline provides throughput, compression, and resolution, as well as rapid feedback to the microscope operators.

  5. Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation

    DOE PAGES

    Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...

    2015-06-01

    Current spectral clustering algorithms suffer from sensitivity to noise and to parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the resulting clusters cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, it not only provides an advanced noise-resisting and density-aware spectral mapping of the original dataset, but also demonstrates stability while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.

  6. A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline

    NASA Technical Reports Server (NTRS)

    Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; ...

    2010-01-01

    The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three and a half year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCD) each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD requiring downstream data products access to the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation allowing the full covariance matrix of any subset of calibrated pixels to be recalled on-the-fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data dependent kernels. The combination of POU framework and SVD compression provide downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels traceable to pixel level measurement uncertainties without having to store, retrieve and operate on prohibitively large covariance matrices. We describe the POU Framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
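
    The core bookkeeping trick can be sketched compactly (a simplified stand-in for the pipeline's framework, with our own rank choice): store a compressed factor of the raw covariance plus each transformation kernel, and reconstitute any calibrated covariance block on demand.

      # Propagate a compressed covariance factor through a linear calibration.
      import numpy as np

      def compress(C, rank):
          U, s, _ = np.linalg.svd(C, hermitian=True)
          return U[:, :rank] * np.sqrt(s[:rank])      # C ~ F F^T

      def calibrated_cov_block(J, F, idx):
          """Covariance of calibrated pixels idx: (J C J^T)[idx][:, idx]."""
          G = J[idx] @ F                              # propagate the factor only
          return G @ G.T

      n = 1000
      C_raw = np.diag(np.random.rand(n))              # raw pixel variances
      J = np.eye(n) + 0.01 * np.random.randn(n, n)    # calibration Jacobian
      block = calibrated_cov_block(J, compress(C_raw, 50), np.arange(10))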

  7. Constraining modified theories of gravity with the galaxy bispectrum

    NASA Astrophysics Data System (ADS)

    Yamauchi, Daisuke; Yokoyama, Shuichiro; Tashiro, Hiroyuki

    2017-12-01

    We explore the use of the galaxy bispectrum induced by the nonlinear gravitational evolution as a possible probe to test general scalar-tensor theories with second-order equations of motion. We find that time dependence of the leading second-order kernel is approximately characterized by one parameter, the second-order index, which is expected to trace the higher-order growth history of the Universe. We show that our new parameter can significantly carry new information about the nonlinear growth of structure. We forecast future constraints on the second-order index as well as the equation-of-state parameter and the growth index.
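
    For reference, in standard gravity the leading second-order density kernel takes the familiar Einstein-de Sitter form

      F_2(\mathbf{k}_1, \mathbf{k}_2) = \frac{5}{7} + \frac{\mu}{2}\left( \frac{k_1}{k_2} + \frac{k_2}{k_1} \right) + \frac{2}{7}\mu^2, \qquad \mu = \hat{\mathbf{k}}_1 \cdot \hat{\mathbf{k}}_2,

    and in general scalar-tensor theories the 5/7 and 2/7 coefficients acquire the time dependence that the second-order index is designed to capture.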

  8. Rapid scatter estimation for CBCT using the Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh

    2014-03-01

    Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
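
    The SKS loop itself is short (a schematic with a placeholder Gaussian kernel; the clinical kernels and the Boltzmann-guided perturbation are the paper's contribution):

      # Iterative scatter-kernel-superposition correction of a 2D projection.
      import numpy as np
      from scipy.signal import fftconvolve

      def sks_correct(measured, kernel, n_iter=5):
          primary = measured.copy()
          for _ in range(n_iter):
              scatter = fftconvolve(primary, kernel, mode="same")
              primary = np.clip(measured - scatter, 0, None)
          return primary

      yy, xx = np.mgrid[-64:65, -64:65]
      kernel = np.exp(-(xx**2 + yy**2) / (2 * 30.0**2))   # broad placeholder
      kernel *= 0.10 / kernel.sum()                       # ~10% scatter fraction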

  9. A perturbative solution to metadynamics ordinary differential equation

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush; Dama, James F.; Parrinello, Michele

    2015-12-01

    Metadynamics is a popular enhanced sampling scheme wherein by periodic application of a repulsive bias, one can surmount high free energy barriers and explore complex landscapes. Recently, metadynamics was shown to be mathematically well founded, in the sense that the biasing procedure is guaranteed to converge to the true free energy surface in the long time limit irrespective of the precise choice of biasing parameters. A differential equation governing the post-transient convergence behavior of metadynamics was also derived. In this short communication, we revisit this differential equation, expressing it in a convenient and elegant Riccati-like form. A perturbative solution scheme is then developed for solving this differential equation, which is valid for any generic biasing kernel. The solution clearly demonstrates the robustness of metadynamics to choice of biasing parameters and gives further confidence in the widely used method.
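
    For readers unfamiliar with the underlying scheme, the bias whose convergence the differential equation describes is an accumulating sum of Gaussian hills (standard metadynamics; the paper's Riccati-form analysis is not reproduced here):

      # Metadynamics bias: sum of hills deposited at visited collective-variable
      # (CV) values; convergence of V to the free energy is what the ODE governs.
      import numpy as np

      def bias(s_grid, centers, w=0.1, sigma=0.2):
          V = np.zeros_like(s_grid)
          for c in centers:
              V += w * np.exp(-((s_grid - c) ** 2) / (2 * sigma**2))
          return V

      s_grid = np.linspace(-2, 2, 401)
      visited = np.random.default_rng(1).normal(0.0, 0.5, size=200)
      V = bias(s_grid, visited)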

  10. A perturbative solution to metadynamics ordinary differential equation.

    PubMed

    Tiwary, Pratyush; Dama, James F; Parrinello, Michele

    2015-12-21

    Metadynamics is a popular enhanced sampling scheme wherein by periodic application of a repulsive bias, one can surmount high free energy barriers and explore complex landscapes. Recently, metadynamics was shown to be mathematically well founded, in the sense that the biasing procedure is guaranteed to converge to the true free energy surface in the long time limit irrespective of the precise choice of biasing parameters. A differential equation governing the post-transient convergence behavior of metadynamics was also derived. In this short communication, we revisit this differential equation, expressing it in a convenient and elegant Riccati-like form. A perturbative solution scheme is then developed for solving this differential equation, which is valid for any generic biasing kernel. The solution clearly demonstrates the robustness of metadynamics to choice of biasing parameters and gives further confidence in the widely used method.

  11. What Would a Graph Look Like in this Layout? A Machine Learning Approach to Large Graph Visualization.

    PubMed

    Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu

    2018-01-01

    Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
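
    As a toy stand-in for the kind of kernel involved (emphatically not the authors' kernel design), two graphs can be compared through histograms of shortest-path lengths:

      # Shortest-path-histogram graph kernel (requires networkx).
      import numpy as np
      import networkx as nx

      def sp_histogram(G, max_len=10):
          h = np.zeros(max_len)
          for _, lengths in nx.all_pairs_shortest_path_length(G):
              for d in lengths.values():
                  if 0 < d <= max_len:
                      h[d - 1] += 1
          return h / max(h.sum(), 1)

      def graph_kernel(G1, G2):
          return float(np.dot(sp_histogram(G1), sp_histogram(G2)))

      print(graph_kernel(nx.cycle_graph(20), nx.path_graph(20)))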

  12. Kernel machine methods for integrative analysis of genome-wide methylation and genotyping studies.

    PubMed

    Zhao, Ni; Zhan, Xiang; Huang, Yen-Tsung; Almli, Lynn M; Smith, Alicia; Epstein, Michael P; Conneely, Karen; Wu, Michael C

    2018-03-01

    Many large GWAS consortia are expanding to simultaneously examine the joint role of DNA methylation in addition to genotype in the same subjects. However, integrating information from both data types is challenging. In this paper, we propose a composite kernel machine regression model to test the joint epigenetic and genetic effect. Our approach works at the gene level, which allows for a common unit of analysis across different data types. The model compares the pairwise similarities in the phenotype to the pairwise similarities in the genotype and methylation values; and high correspondence is suggestive of association. A composite kernel is constructed to measure the similarities in the genotype and methylation values between pairs of samples. We demonstrate through simulations and real data applications that the proposed approach can correctly control type I error, and is more robust and powerful than using only the genotype or methylation data in detecting trait-associated genes. We applied our method to investigate the genetic and epigenetic regulation of gene expression in response to stressful life events using data that are collected from the Grady Trauma Project. Within the kernel machine testing framework, our methods allow for heterogeneity in effect sizes, nonlinear, and interactive effects, as well as rapid P-value computation. © 2017 WILEY PERIODICALS, INC.
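
    The composite kernel construction is simple to sketch (linear kernels and a fixed mixing weight are our illustrative choices; the paper works within the full kernel machine testing framework):

      # Gene-level composite kernel from genotype and methylation matrices.
      import numpy as np

      def linear_kernel(X):
          Xc = X - X.mean(0)
          return Xc @ Xc.T

      def composite_kernel(G, M, rho=0.5):
          """G: n x p genotypes; M: n x q methylation; rho: mixing weight."""
          KG, KM = linear_kernel(G), linear_kernel(M)
          KG = KG / np.trace(KG)          # common scale before combining
          KM = KM / np.trace(KM)
          return rho * KG + (1 - rho) * KM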

  13. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    The computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy and efficiency problems, previous algorithm designers tended to adopt only one of them when assembling fast implementations, owing to the difficulty of combining them. Hence, no joint exploitation of these techniques had been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques (kernel truncation, best N-term approximation, as well as the previous 2D box filtering, dimension promotion, and shiftability property), we propose a unified framework to transform a BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw on the strengths of each to overcome the deficiencies of the others. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing run-time efficiency.
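
    For reference, the brute-force baseline that the paper accelerates looks as follows; note how the cost grows with the kernel radius r, which is exactly what the constant-time reformulations avoid. This is a plain textbook implementation, not the paper's algorithm.

      import numpy as np

      # Brute-force bilateral filter for a grayscale image. Cost scales with
      # the spatial kernel radius r, unlike the constant-time reformulations.
      def bilateral(img, r=3, sigma_s=2.0, sigma_r=0.1):
          H, W = img.shape
          out = np.zeros_like(img)
          ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
          spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
          pad = np.pad(img, r, mode='edge')
          for i in range(H):
              for j in range(W):
                  patch = pad[i:i + 2 * r + 1, j:j + 2 * r + 1]
                  # range kernel depends on intensity differences
                  w = spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                  out[i, j] = (w * patch).sum() / w.sum()
          return out

      rng = np.random.default_rng(3)
      img = np.clip(np.linspace(0, 1, 64 * 64).reshape(64, 64)
                    + 0.05 * rng.normal(size=(64, 64)), 0, 1)
      print(bilateral(img).shape)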

  14. Topics in structural dynamics: Nonlinear unsteady transonic flows and Monte Carlo methods in acoustics

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.

    1974-01-01

    The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases the use of doublet-type solutions of the wave equation would prove to be erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.

  15. Green's functions for dislocations in bonded strips and related crack problems

    NASA Technical Reports Server (NTRS)

    Ballarini, R.; Luo, H. A.

    1990-01-01

    Green's functions are derived for the plane elastostatics problem of a dislocation in a bimaterial strip. Using these fundamental solutions as kernels, various problems involving cracks in a bimaterial strip are analyzed using singular integral equations. For each problem considered, stress intensity factors are calculated for several combinations of the parameters which describe loading, geometry and material mismatch.

  16. Macroscopic and microscopic components of exchange-correlation interactions

    NASA Astrophysics Data System (ADS)

    Sottile, F.; Karlsson, K.; Reining, L.; Aryasetiawan, F.

    2003-11-01

    We consider two commonly used approaches for the ab initio calculation of optical-absorption spectra, namely, many-body perturbation theory based on Green's functions and time-dependent density-functional theory (TDDFT). The former leads to the two-particle Bethe-Salpeter equation that contains a screened electron-hole interaction. We approximate this interaction in various ways, and discuss in particular the results obtained for a local contact potential. This, in fact, allows us to straightforwardly make the link to the TDDFT approach, and to discuss the exchange-correlation kernel fxc that corresponds to the contact exciton. Our main results, illustrated in the examples of bulk silicon, GaAs, argon, and LiF, are the following. (i) The simple contact exciton model, used on top of an ab initio calculated band structure, yields reasonable absorption spectra. (ii) Qualitatively extremely different fxc can be derived approximately from the same Bethe-Salpeter equation. These kernels can, however, yield very similar spectra. (iii) A static fxc, either with or without a long-range component, can create transitions in the quasiparticle gap. To the best of our knowledge, this is the first time that TDDFT has been shown to be able to reproduce bound excitons.

  17. Data-Driven Risk Assessment from Small Scale Epidemics: Estimation and Model Choice for Spatio-Temporal Data with Application to a Classical Swine Fever Outbreak

    PubMed Central

    Gamado, Kokouvi; Marion, Glenn; Porphyre, Thibaud

    2017-01-01

    Livestock epidemics have the potential to give rise to significant economic, welfare, and social costs. Incursions of emerging and re-emerging pathogens may lead to small and repeated outbreaks. Analysis of the resulting data is statistically challenging but can inform disease preparedness, reducing potential future losses. We present a framework for spatial risk assessment of disease incursions based on data from small localized historic outbreaks. We focus on between-farm spread of livestock pathogens and illustrate our methods by application to data on the small outbreak of Classical Swine Fever (CSF) that occurred in 2000 in East Anglia, UK. We apply models based on continuous-time semi-Markov processes, using data-augmentation Markov Chain Monte Carlo techniques within a Bayesian framework to infer disease dynamics and detection from incompletely observed outbreaks. The spatial transmission kernel describing pathogen spread between farms, and the distribution of times between infection and detection, are estimated alongside unobserved exposure times. Our results demonstrate that inference is reliable even for relatively small outbreaks when the data-generating model is known. However, associated risk assessments depend strongly on the form of the fitted transmission kernel. Therefore, for real applications, methods are needed to select the most appropriate model in light of the data. We assess standard Deviance Information Criterion (DIC) model selection tools and recently introduced latent residual methods of model assessment in selecting the functional form of the spatial transmission kernel. These methods are applied to the CSF data, and tested in simulated scenarios which represent field data but assume the data generation mechanism is known. Analysis of simulated scenarios shows that latent residual methods enable reliable selection of the transmission kernel even for small outbreaks, whereas the DIC is less reliable. Moreover, compared with DIC, model choice based on latent residual assessment correlated better with predicted risk. PMID:28293559
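
    To make the model-choice issue concrete, the sketch below contrasts two generic kernel families of the kind compared in such analyses; the functional forms and parameter values are illustrative assumptions, not the values fitted to the CSF data.

      import numpy as np

      # Two candidate spatial transmission kernels. K(d) scales the hazard of
      # farm-to-farm spread at separation d; the fitted form drives the risk map.
      def kernel_exponential(d, alpha=1.0):
          return np.exp(-d / alpha)

      def kernel_powerlaw(d, d0=1.0, gamma=2.5):
          return 1.0 / (1.0 + (d / d0)**gamma)

      # The power-law kernel puts far more weight on long-distance jumps,
      # which is why risk assessments depend strongly on this choice.
      d = np.array([0.5, 1.0, 5.0, 20.0])   # distances, arbitrary units (e.g. km)
      print(kernel_exponential(d) / kernel_powerlaw(d))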

  18. Generalized and efficient algorithm for computing multipole energies and gradients based on Cartesian tensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Dejun, E-mail: dejun.lin@gmail.com

    2015-09-21

    Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green's function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4–16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A software library based on this algorithm has been implemented in C++11 and has been released.
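
    The following sketch illustrates the kernel-invariance point for the simplest (rank-0, charge-charge) case: the same energy/force code accepts any radial kernel f(r) and its derivative. The full Cartesian-tensor recursion for higher multipole ranks is the paper's contribution and is not reproduced here.

      import numpy as np

      # Charge-charge energies and forces for a generic radial kernel f(r):
      # the working equations need only f and f', not a kernel-specific formula.
      def pair_energy_forces(pos, q, f, fprime):
          n = len(q)
          E, F = 0.0, np.zeros_like(pos)
          for i in range(n):
              for j in range(i + 1, n):
                  rij = pos[i] - pos[j]
                  r = np.linalg.norm(rij)
                  E += q[i] * q[j] * f(r)
                  g = q[i] * q[j] * fprime(r) * rij / r  # dE/dr_i
                  F[i] -= g                               # force = -gradient
                  F[j] += g
          return E, F

      coulomb = (lambda r: 1.0 / r, lambda r: -1.0 / r**2)
      screened = (lambda r: np.exp(-r) / r,
                  lambda r: -(1 + r) * np.exp(-r) / r**2)

      pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 2.0, 0.0]])
      q = np.array([1.0, -1.0, 0.5])
      for f, fp in (coulomb, screened):   # identical code path for either kernel
          print(pair_energy_forces(pos, q, f, fp)[0])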

  19. A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.

    2015-12-01

    Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: Runs the computation locally on the WPS server or remotely using tools such as celery or SLURM. Compute Engine Manager: Runs the computation serially or distributed over nodes using a parallelization framework such as celery or spark. Decomposition Manager: Manages strategies for distributing the data over nodes. Data Manager: Handles the import of domain data from long term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: A kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using either direct web service calls, a python script or application, or a javascript-based web application. Client packages in python or javascript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and explore variability.

  20. Quantum dynamics in continuum for proton transport—Generalized correlation

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Wei, Guo-Wei

    2012-04-01

    As a key process of many biological reactions such as biological energy transduction or human sensory systems, proton transport has attracted much research attention in biological, biophysical, and mathematical fields. A quantum dynamics in continuum framework was proposed to study proton permeation through membrane proteins in our earlier work, and the present work focuses on the generalized correlation of protons with their environment. Being complementary to electrostatic potentials, generalized correlations consist of proton-proton, proton-ion, proton-protein, and proton-water interactions. In our approach, protons are treated as quantum particles, while other components of the generalized correlations are described classically and at different levels of approximation depending on simulation feasibility and difficulty. Specifically, the membrane protein is modeled as a group of discrete atoms, while ion densities are approximated by Boltzmann distributions, and water molecules are represented as a dielectric continuum. These proton-environment interactions are formulated as convolutions between number densities of species and their corresponding interaction kernels, in which parameters are obtained from experimental data. In the present formulation, generalized correlations are important components of the total Hamiltonian of protons and are thus seamlessly embedded in the multiscale/multiphysics total variational model of the system. This takes care of non-electrostatic interactions, including the finite size effect, geometry-confinement-induced channel barriers, dehydration and hydrogen bond effects, etc. The variational principle, or the Euler-Lagrange equation, is utilized to minimize the total energy functional, which includes the total Hamiltonian of protons, and to obtain new versions of the generalized Laplace-Beltrami equation, the generalized Poisson-Boltzmann equation, and the generalized Kohn-Sham equation. A set of numerical algorithms, such as the matched interface and boundary method, the Dirichlet to Neumann mapping, Gummel iteration, and Krylov space techniques, is employed to improve the accuracy, efficiency, and robustness of model simulations. Finally, comparisons between the present model predictions and experimental data on current-voltage curves, as well as current-concentration curves, of the Gramicidin A channel verify our new model.

  1. IMPLEMENTATION OF THE SMOKE EMISSION DATA PROCESSOR AND SMOKE TOOL INPUT DATA PROCESSOR IN MODELS-3

    EPA Science Inventory

    The U.S. Environmental Protection Agency has implemented Version 1.3 of SMOKE (Sparse Matrix Object Kernel Emission) processor for preparation of area, mobile, point, and biogenic sources emission data within Version 4.1 of the Models-3 air quality modeling framework. The SMOK...

  2. Procedural Explanations in Mathematics Writing: A Framework for Understanding College Students' Effective Communication Practices

    ERIC Educational Resources Information Center

    Kline, Susan L.; Ishii, Drew K.

    2008-01-01

    This study analyzes the procedural explanations written by remedial college mathematics students. Relevant literatures suggest that six communication activities might be key in effective procedural explanations in mathematics writing: (a) orienting the learner, (b) providing kernels or definitions of concepts and procedures, (c) using exemplars or…

  3. Nondestructive In Situ Measurement Method for Kernel Moisture Content in Corn Ear.

    PubMed

    Zhang, Han-Lin; Ma, Qin; Fan, Li-Feng; Zhao, Peng-Fei; Wang, Jian-Xu; Zhang, Xiao-Dong; Zhu, De-Hai; Huang, Lan; Zhao, Dong-Jie; Wang, Zhong-Yi

    2016-12-20

    Moisture content is an important factor in corn breeding and cultivation. A corn breed with low moisture at harvest is beneficial for mechanical operations, reduces drying and storage costs after harvesting and, thus, reduces energy consumption. Nondestructive measurement of kernel moisture in an intact corn ear allows us to select corn varieties with seeds that have high dehydration speeds in the mature period. We designed a sensor using a ring electrode pair for nondestructive measurement of the kernel moisture in a corn ear based on a high-frequency detection circuit. Through experiments using the effective scope of the electrodes' electric field, we confirmed that the moisture in the corn cob has little effect on corn kernel moisture measurement. Before the sensor was applied in practice, we investigated temperature and conductivity effects on the output impedance. Results showed that the temperature was linearly related to the output impedance (both real and imaginary parts) of the measurement electrodes and the detection circuit's output voltage. However, the conductivity has a non-monotonic dependence on the output impedance (both real and imaginary parts) of the measurement electrodes and the output voltage of the high-frequency detection circuit. Therefore, we reduced the effect of conductivity on the measurement results through measurement frequency selection. Corn moisture measurement results showed a quadratic regression between corn ear moisture and the imaginary part of the output impedance, and there is also a quadratic regression between corn kernel moisture and the high-frequency detection circuit output voltage at 100 MHz. In this study, two corn breeds were measured using our sensor, and the quadratic regression equations gave R² values of 0.7853 and 0.8496.
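
    The calibration step has the following generic shape (synthetic placeholder data, not the paper's measurements): fit a quadratic regression of moisture against the sensor output and report R².

      import numpy as np

      # Quadratic calibration of moisture content against sensor output,
      # mirroring the form of the regression reported above. Data are
      # synthetic placeholders, not measurements from the paper.
      voltage = np.array([1.1, 1.4, 1.8, 2.1, 2.5, 2.9, 3.2])   # detector output (V)
      moisture = np.array([12., 17., 24., 29., 37., 46., 52.])  # kernel moisture (%)

      coeffs = np.polyfit(voltage, moisture, deg=2)   # quadratic fit
      pred = np.polyval(coeffs, voltage)
      ss_res = np.sum((moisture - pred)**2)
      ss_tot = np.sum((moisture - moisture.mean())**2)
      print("R^2 =", 1 - ss_res / ss_tot)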

  4. Software Framework for Development of Web-GIS Systems for Analysis of Georeferenced Geophysical Data

    NASA Astrophysics Data System (ADS)

    Okladnikov, I.; Gordov, E. P.; Titov, A. G.

    2011-12-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation, and forecasting of climatic and ecosystem changes on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which at present may reach tens of terabytes for a single dataset, studies in the area of climate and environmental change require special software support. A dedicated software framework for the rapid development of information-computational systems providing such support, based on Web-GIS technologies, has been created. The software framework consists of 3 basic parts: a computational kernel developed using the ITTVIS Interactive Data Language (IDL), a set of PHP controllers run within a specialized web portal, and a JavaScript class library for the development of typical components of web mapping application graphical user interfaces (GUI) based on AJAX technology. The computational kernel comprises a number of modules for dataset access, mathematical and statistical data analysis, and visualization of results. The specialized web portal consists of the Apache web server, the OGC-compliant GeoServer software, which is used as a base for presenting cartographical information over the Web, and a set of PHP controllers implementing the web-mapping application logic and governing the computational kernel. The JavaScript library, aimed at graphical user interface development, is based on the GeoExt library, combining the ExtJS framework and OpenLayers software. Based on the software framework, an information-computational system for complex analysis of large georeferenced data archives was developed. Structured environmental datasets available for processing now include two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, meteorological observational data for the territory of the former USSR for the 20th century, and others. The current version of the system is already in use in scientific research; in particular, it was recently used for the analysis of Siberian climate changes and their regional impact. The software framework presented allows rapid development of Web-GIS systems for geophysical data analysis, thus providing specialists involved in multidisciplinary research projects with reliable and practical instruments for complex analysis of climate and ecosystem changes on global and regional scales. This work is partially supported by RFBR grants #10-07-00547, #11-05-01190, and SB RAS projects 4.31.1.5, 4.31.2.7, 4, 8, 9, 50 and 66.

  5. Stochastic calibration and learning in nonstationary hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Howitt, R.

    2014-05-01

    Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and the optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity is blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
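
    A bare-bones ensemble Kalman filter analysis step of the kind underlying this calibration is sketched below; dimensions, the observation operator, and noise levels are illustrative assumptions, and the paper's PMP integration, moving-average kernel, and regularization constraint are omitted.

      import numpy as np

      # One stochastic EnKF analysis step for a parameter ensemble.
      def enkf_update(X, y_obs, H, R, rng):
          # X: (n_par, n_ens) parameter ensemble; H: linear observation operator
          n_obs, n_ens = H.shape[0], X.shape[1]
          Y = H @ X                                      # predicted observations
          Xa = X - X.mean(axis=1, keepdims=True)
          Ya = Y - Y.mean(axis=1, keepdims=True)
          Cxy = Xa @ Ya.T / (n_ens - 1)
          Cyy = Ya @ Ya.T / (n_ens - 1) + R
          K = Cxy @ np.linalg.solve(Cyy, np.eye(n_obs))  # Kalman gain
          perturbed = y_obs[:, None] + rng.multivariate_normal(
              np.zeros(n_obs), R, size=n_ens).T          # perturbed observations
          return X + K @ (perturbed - Y)

      rng = np.random.default_rng(4)
      X = rng.normal(1.0, 0.3, size=(3, 50))   # 3 parameters, 50 members
      H = np.array([[1.0, 0.5, 0.0]])          # observed combination (assumed)
      R = np.array([[0.05]])                   # observation noise covariance
      print(enkf_update(X, np.array([1.4]), H, R, rng).mean(axis=1))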

  6. GERICOS: A Generic Framework for the Development of On-Board Software

    NASA Astrophysics Data System (ADS)

    Plasson, P.; Cuomo, C.; Gabriel, G.; Gauthier, N.; Gueguen, L.; Malac-Allain, L.

    2016-08-01

    This paper presents an overview of the GERICOS framework (GEneRIC Onboard Software), its architecture, its various layers and its future evolutions. The GERICOS framework, developed and qualified by LESIA, offers a set of generic, reusable and customizable software components for the rapid development of payload flight software. The GERICOS framework has a layered structure. The first layer (GERICOS::CORE) implements the concept of active objects and forms an abstraction layer over the top of real-time kernels. The second layer (GERICOS::BLOCKS) offers a set of reusable software components for building flight software based on generic solutions to recurrent functionalities. The third layer (GERICOS::DRIVERS) implements software drivers for several COTS IP cores of the LEON processor ecosystem.

  7. A high-order strong stability preserving Runge-Kutta method for three-dimensional full waveform modeling and inversion of anelastic models

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.

    2017-12-01

    Accurate and efficient forward modeling methods are important for high resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time, because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth-order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behaviors of the Earth by using the generalized Maxwell body model. With a larger CFL condition number, we find that the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method for calculating travel time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three 'backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
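
    To illustrate the structure of strong stability preserving Runge-Kutta schemes, the sketch below implements the classic third-order Shu-Osher method (each stage a convex combination of forward-Euler steps) on a toy advection problem; the study itself uses a fourth-order SSP scheme with an optimized CFL coefficient, which is not reproduced here.

      import numpy as np

      # Third-order SSP Runge-Kutta step of Shu and Osher for du/dt = L(u).
      # Each stage is a convex combination of forward-Euler steps, which is
      # what preserves the stability of the underlying Euler scheme.
      def ssprk3_step(L, u, dt):
          u1 = u + dt * L(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
          return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

      # Toy example: 1-D advection u_t = -c u_x, periodic upwind differences.
      N, c = 200, 1.0
      dx = 1.0 / N
      L = lambda u: -c * (u - np.roll(u, 1)) / dx
      x = np.linspace(0, 1, N, endpoint=False)
      u = np.exp(-200 * (x - 0.3)**2)
      dt = 0.8 * dx / c                 # within the SSP-stable CFL range
      for _ in range(100):
          u = ssprk3_step(L, u, dt)
      print(float(u.max()))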

  8. An explanation of forms of planetary orbits and estimation of angular shift of the Mercury' perihelion using the statistical theory of gravitating spheroidal bodies

    NASA Astrophysics Data System (ADS)

    Krot, A. M.

    2013-09-01

    This work develops a statistical theory of gravitating spheroidal bodies to calculate the orbits of planets and to explore the forms of planetary orbits, with regard to the Alfvén oscillating force [1], in the Solar system and other exoplanetary systems. The statistical theory of the formation of gravitating spheroidal bodies was proposed in [2]-[5]. Starting from the conception of a spheroidal body forming inside a gas-dust protoplanetary nebula, this theory solves the problem of gravitational condensation of a gas-dust protoplanetary cloud with a view to planetary formation in its own gravitational field [3], and derives a new law of Solar system planetary distances that generalizes the well-known laws [2], [3]. This work also explains the origin of the Alfvén oscillating force, which modifies the forms of planetary orbits, within the framework of the statistical theory of gravitating spheroidal bodies [5]. Due to the Alfvén oscillating force, solid bodies moving in a distant zone of a rotating spheroidal body have elliptic trajectories. This means that the orbits of planets sufficiently remote from the Sun are described by ellipses with focus at the origin of coordinates and with small eccentricities. Mercury, the planet nearest to the Sun, has a more complex trajectory: an angular displacement of the Newtonian ellipse is observed during each revolution, i.e., a regular (century) shift of the perihelion of Mercury's orbit occurs. According to the statistical theory of gravitating spheroidal bodies [2]-[5], when the laws of celestial mechanics are applied to cosmogonic bodies (especially stars), it is necessary to take into account an extended substance called a stellar corona. The stellar corona can be described by means of a model of a rotating and gravitating spheroidal body [5]. Moreover, the parameter of gravitational compression α of a spheroidal body (describing the Sun, in particular) has been estimated on the basis of the linear size of its kernel, i.e., the thickness of the visible part of the solar corona. Indeed, the NASA astronomer S. Odenwald, in his notice "How thick is the solar corona?", wrote: "The corona actually extends throughout the entire solar system as a 'wind' of particles; however, the densest parts of the corona are usually seen not more than about 1-2 solar radii from the surface, or about 690,000 to 1.5 million kilometers at the equator. Near the poles, it seems to be a bit flatter..." [6]. In fact, as mentioned in [5], the relative brightness of the components of the spectrum of the solar corona falls off at a distance of 3-3.5 radii from the center, i.e., 2-2.5 radii from the edge of the solar disk. Thus, taking the thickness of the visible part of the solar corona equal to Δ = 2R (here R is the radius of the solar disk), we find that r* = R + Δ = 3R, where r* = 1/α. In other words, the parameter of gravitational compression α = 1/r* of a spheroidal body, in the case of the Sun with its corona (for which the equatorial radius of the disk is R = 6.955·10⁸ m), can be estimated by the value [2]-[5]: α² = (3R)⁻² ≈ 2.29701177718·10⁻¹⁹ m⁻². (1) So the procedure of finding α is based on the well-known 3σ rule of statistics.
Indeed, as shown in the monograph [5], it is precisely the accounting of the solar corona in the calculation of the perturbed orbit of Mercury that allows one to estimate the displacement of the perihelion of Mercury's orbit per period within the framework of the statistical theory of gravitating spheroidal bodies. As is known, the Mercury problem was solved by refining Newton's law through the general theory of relativity [5]. Nevertheless, from the general standpoint of the statistical theory of gravitating spheroidal bodies, the points of view of Leverrier (the existence of unknown matter) and Einstein (the insufficiency of Newton's theory) practically do not differ. Indeed, plasma as well as gas-dust substance exists around the kernel of a cosmogonic body (in particular, the solar corona in the case of the Sun); i.e., accounting for the circumstance that forming cosmogonic bodies have no precise outlines and are represented by spheroidal forms demands some refinement of Newton's law for a gravitating spheroidal body [2]-[5]. So, to find Mercury's trajectory within the framework of the statistical theory of gravitating and rotating spheroidal bodies, it is necessary to estimate the gravitational potential at a short remove from the Sun, i.e., in the remote zone of the gravitational field yet in immediate proximity to the kernel of the rotating spheroidal body. Taking into account that the orbit of Mercury lies entirely in one plane of polar angle θ = θ₀ = const, we should use the formula [5]: φ_g(r) = −(γM/r)·(1 − ε₀²sin²θ₀), r > r*, (2) where r* = 1/α, α is the parameter of gravitational compression of a spheroidal body, M is its mass, γ is the Newtonian gravitational constant, and ε₀ is the geometrical eccentricity of the kernel of a rotating and gravitating spheroidal body (ε₀² ≪ 1) [2]-[5]. This work shows that, in view of Mercury's close proximity to the Sun and the essential inclination of its orbit, the projection of the perihelion point of its orbit can fall directly in the nearby vicinity of the Sun, namely, in the visible part of the solar corona. In the monograph [5], using Binet's equation and formula (2), the equation of the disturbed orbit of a planet (Mercury) in the vicinity of the kernel of a rotating and gravitating spheroidal body has been derived. The obtained relation expresses the equation of a so-called "disturbed" ellipse in polar coordinates with the origin of coordinates at the focus; i.e., the planet Mercury moves on a precessing elliptic orbit owing to a modulating multiplier of the phase (or azimuth angle). So, within the framework of the statistical theory of gravitating spheroidal bodies, the required angular displacement of the Newtonian ellipse during one revolution of Mercury on the disturbed orbit (i.e., the displacement of the perihelion of its orbit per period) has been estimated [5]: δε = (3 + ε₀²)·πα²a²(1 − e²)², (3) where a and e denote the major semi-axis and the eccentricity of Mercury's orbit respectively, α is the parameter of gravitational compression (1), and ε₀ is the geometrical eccentricity of the kernel of the rotating and gravitating spheroidal body (the Sun) [5].
Thus, according to the proposed formula (3), the turn of the perihelion of Mercury's orbit equals 43.93'' per century, which is well consistent with the conclusions of Einstein's general theory of relativity (whose analogous estimate is 43.03'') and with astronomical observation data (43.11 ± 0.45'').
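
    A quick arithmetic check of estimate (1), using only the values quoted in the abstract:

      # Check of estimate (1): with R = 6.955e8 m and r* = 3R,
      # alpha^2 = (3R)^-2 should reproduce the quoted value.
      R = 6.955e8                  # equatorial radius of the solar disk, m
      alpha_sq = (3 * R) ** -2
      print(alpha_sq)              # ~2.297e-19 m^-2, matching (1)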

  9. Classification of Astrocytomas and Oligodendrogliomas from Mass Spectrometry Data Using Sparse Kernel Machines

    PubMed Central

    Huang, Jacob; Gholami, Behnood; Agar, Nathalie Y. R.; Norton, Isaiah; Haddad, Wassim M.; Tannenbaum, Allen R.

    2013-01-01

    Glioma histologies are the primary factor in prognostic estimates and are used in determining the proper course of treatment. Furthermore, due to the sensitivity of cranial environments, real-time tumor-cell classification and boundary detection can aid in the precision and completeness of tumor resection. A recent improvement to mass spectrometry known as desorption electrospray ionization operates in an ambient environment without the application of a preparation compound. This allows for a real-time acquisition of mass spectra during surgeries and other live operations. In this paper, we present a framework using sparse kernel machines to determine a glioma sample’s histopathological subtype by analyzing its chemical composition acquired by desorption electrospray ionization mass spectrometry. PMID:22256188

  10. Multi-PSF fusion in image restoration of range-gated systems

    NASA Astrophysics Data System (ADS)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui

    2018-07-01

    For the task of image restoration, an accurate estimate of the degradation PSF/kernel is a prerequisite for recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere involves many factors, such as back scattering, background radiation, the diffraction limit, and the vibration of the platform. On one hand, due to the difficulty of constructing models for all factors, the kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, there are few strong edges in such images, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach which combines a physical model with image features. With a fusion strategy using the GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel which is closer to the actual one. Aiming at the problem that ground-truth images are difficult to obtain, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser propagates in the atmosphere and forms an image on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.

  11. Selected Aspects of Markovian and Non-Markovian Quantum Master Equations

    NASA Astrophysics Data System (ADS)

    Lendi, K.

    A few notable properties of quantum dynamical equations accounting for general relaxation and dissipation are selected and summarized in brief. Most results derive from the universal concept of complete positivity. The considerations mainly regard genuinely irreversible processes, as characterized by a unique asymptotically stationary final state for arbitrary initial conditions. From ordinary Markovian master equations and the associated quantum dynamical semigroup time-evolution, derivations of higher-order Onsager coefficients and related entropy production are discussed. For general processes including non-faithful states, a regularized version of quantum relative entropy is introduced. Further considerations extend to time-dependent infinitesimal generators of time-evolution and to a possible description of propagation of initial states entangled between open system and environment. In the coherence-vector representation of the full non-Markovian equations including entangled initial states, first results are outlined towards identifying mathematical properties of a restricted class of trial integral-kernel functions suited to phenomenological applications.

  12. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
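
    One ingredient of such schemes can be shown compactly: Gauss-Jacobi quadrature absorbs the singular factor of the memory integral into the quadrature weight. The sketch below is an illustrative fragment (the fractional order and integrand are arbitrary choices), not the paper's full multi-domain method.

      import numpy as np
      from scipy.special import roots_jacobi

      # Gauss-Jacobi quadrature for a weakly singular integral of the form
      # I = int_{-1}^{1} (1 - x)^(a - 1) g(x) dx, the kind of history-term
      # integrand met after mapping a fractional kernel to a reference
      # interval. The singular factor is absorbed into the quadrature weight,
      # so only the smooth part g is sampled.
      a = 0.5                                 # assumed fractional order, 0 < a < 1
      nodes, weights = roots_jacobi(8, a - 1.0, 0.0)  # weight (1-x)^(a-1)
      g = np.cos                              # smooth part of the integrand

      approx = weights @ g(nodes)
      # crude brute-force reference on a fine grid, for comparison only
      x = np.linspace(-1, 1 - 1e-9, 400001)
      ref = np.sum((1 - x)**(a - 1) * g(x)) * (x[1] - x[0])
      print(approx, ref)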

  13. Birth-jump processes and application to forest fire spotting.

    PubMed

    Hillen, T; Greese, B; Martin, J; de Vries, G

    2015-01-01

    Birth-jump models are designed to describe population models for which growth and spatial spread cannot be decoupled. A birth-jump model is a nonlinear integro-differential equation. We present two different derivations of this equation, one based on a random walk approach and the other based on a two-compartmental reaction-diffusion model. In the case that the redistribution kernels are highly concentrated, we show that the integro-differential equation can be approximated by a reaction-diffusion equation, in which the proliferation rate contributes to both the diffusion term and the reaction term. We completely solve the corresponding critical domain size problem and the minimal wave speed problem. Birth-jump models can be applied in many areas in mathematical biology. We highlight an application of our results in the context of forest fire spread through spotting. We show that spotting increases the invasion speed of a forest fire front.
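
    A minimal numerical sketch of a 1-D birth-jump step follows, in which logistic births are redistributed by a concentrated kernel so that growth and spatial spread are coupled; the kernel width, rates, and discretization are illustrative assumptions.

      import numpy as np

      # 1-D birth-jump step: new individuals produced at rate r are
      # redistributed by kernel k before settling, coupling growth and spread.
      N, Lx, dt, r = 400, 40.0, 0.02, 1.0
      x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
      dx = x[1] - x[0]
      k = np.exp(-x**2 / (2 * 0.3**2))
      k /= k.sum() * dx                        # normalize: int k dx = 1
      k_hat = np.fft.fft(np.fft.ifftshift(k))  # kernel centered at the origin

      u = 0.5 * (np.abs(x) < 1.0)              # initial localized population
      for _ in range(500):
          births = r * u * (1.0 - u)           # logistic production
          # convolution k * births on the periodic grid (the "jump" step)
          u = u + dt * dx * np.real(np.fft.ifft(k_hat * np.fft.fft(births)))
          u = np.clip(u, 0.0, 1.0)
      print(float((u > 0.25).sum() * dx))      # width of the invaded region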

  14. Ion Channel Conductance Measurements on a Silicon-Based Platform

    DTIC Science & Technology

    2006-01-01

    calculated using the molecular dynamics code GROMACS. Reasonable agreement is obtained in the simulated versus measured conductance over the range of... measurements of the lipid giga-seal characteristics have been performed, including AC conductance measurements and statistical analysis in order to... Dynamics kernel self-consistently coupled to Poisson equations using a P3M force field scheme and the GROMACS description of protein structure and

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  16. Modeling of thin-walled structures interacting with acoustic media as constrained two-dimensional continua

    NASA Astrophysics Data System (ADS)

    Rabinskiy, L. N.; Zhavoronok, S. I.

    2018-04-01

    The transient interaction of acoustic media and elastic shells is considered on the basis of the transition function approach. The three-dimensional hyperbolic initial boundary-value problem is reduced to a two-dimensional problem of shell theory with integral operators approximating the acoustic medium effect on the shell dynamics. The kernels of these integral operators are determined by the elementary solution of the problem of acoustic wave diffraction at a rigid obstacle with the same boundary shape as the wetted shell surface. The closed-form elementary solution for arbitrary convex obstacles can be obtained at the initial interaction stages on the basis of the so-called "thin layer hypothesis". Thus, the shell-wave interaction model defined by integro-differential dynamic equations with analytically determined kernels of integral operators becomes two-dimensional but nonlocal in time. On the other hand, the initial interaction stage results in localized dynamic loadings and consequently in complex strain and stress states that require higher-order shell theories. Here the modified theory of I. N. Vekua–A. A. Amosov type is formulated in terms of analytical continuum dynamics. The shell model is constructed on a two-dimensional manifold within a set of field variables, a Lagrangian density, and constraint equations following from the boundary conditions "shifted" from the shell faces to its base surface. Such an approach allows one to construct consistent low-order shell models within a unified formal hierarchy. The equations of the Nth-order shell theory are singularly perturbed and contain second-order partial derivatives with respect to time and surface coordinates, whereas the numerical integration of systems of first-order equations is more efficient. Such systems can be obtained as Hamilton–de Donder–Weyl-type equations for the Lagrangian dynamical system. The Hamiltonian formulation of the elementary Nth-order shell theory is briefly described here.

  17. Compactness and robustness: Applications in the solution of integral equations for chemical kinetics and electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Yajun

    This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment to electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics. Meanwhile, exploiting the properties of compact operators, we outline the geometric and physical conditions that guarantee a robust solution to the light scattering problem, and devise an asymptotic solution to the Born equation of electromagnetic scattering for arbitrarily shaped dielectric in a non-perturbative manner.
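
    The thesis's phase function approach is not reproduced here; as a point of comparison, the sketch below shows a common baseline for the same inverse Laplace problem, non-negative least squares on a discretized exponential kernel, which illustrates why stability considerations dominate this reconstruction.

      import numpy as np
      from scipy.optimize import nnls

      # Reconstruct a rate-constant distribution from a multi-exponential
      # decay trace by non-negative least squares on a discretized Laplace
      # kernel (a standard baseline, not the thesis's method).
      t = np.linspace(0.01, 10, 200)
      k_grid = np.logspace(-1, 1, 60)                 # candidate rate constants
      A = np.exp(-np.outer(t, k_grid))                # A[i, j] = exp(-k_j t_i)

      true_w = np.zeros_like(k_grid)
      true_w[[15, 40]] = [0.7, 0.3]                   # two decay subpopulations
      rng = np.random.default_rng(5)
      y = A @ true_w + 1e-3 * rng.normal(size=t.size) # noisy decay data

      w, _ = nnls(A, y)                               # nonnegativity regularizes
      print(k_grid[w > 0.05])                         # recovered rate clusters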

  18. Livermore Compiler Analysis Loop Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornung, R. D.

    2013-03-01

    LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermore Loops suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, which loops are run, and their lengths. It generates timing statistics for analysis and for comparing variants of individual loops. Also, it is easy to add loops to the suite as desired.

  19. Correlated Topic Vector for Scene Classification.

    PubMed

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Originating from the correlated topic model, the correlated topic vector is intended to naturally utilize the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions to the topics of visual words have been further employed by incorporating the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features, and outperforms existing Fisher kernel-based features.

  20. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
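
    The general pattern of such a symbolic interface can be illustrated with a few lines of modern computer algebra (sympy is used here purely as an illustrative analogue; SPARK's own symbolic interface is a separate system):

      import sympy as sp

      # Illustrative analogue of a symbolic interface generating solution
      # code: enter an equation in residual form, invert it for each
      # variable, and emit target-language code.
      x, y, z = sp.symbols('x y z')
      equation = sp.Eq(x + y**2 - z, 0)      # residual form f(x, y, z) = 0

      for var in (x, y, z):
          expr = sp.solve(equation, var)[0]  # solve the equation for `var`
          print(f"/* solve for {var} */  double {var}_val =", sp.ccode(expr), ";")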

  1. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  2. Quantum-mechanical transport equation for atomic systems.

    NASA Technical Reports Server (NTRS)

    Berman, P. R.

    1972-01-01

    A quantum-mechanical transport equation (QMTE) is derived which should be applicable to a wide range of problems involving the interaction of radiation with atoms or molecules which are also subject to collisions with perturber atoms. The equation follows the time evolution of the macroscopic atomic density matrix elements of atoms located at classical position R and moving with classical velocity v. It is quantum mechanical in the sense that all collision kernels or rates which appear have been obtained from a quantum-mechanical theory and, as such, properly take into account the energy-level variations and velocity changes of the active (emitting or absorbing) atom produced in collisions with perturber atoms. The present formulation is better suited to problems involving high-intensity external fields, such as those encountered in laser physics.

  3. A general framework for regularized, similarity-based image restoration.

    PubMed

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desired spectral properties for the normalized Laplacian, such as being symmetric and positive semidefinite, and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations; in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. The specific form of the cost function also allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
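
    A small sketch of the matrix-balancing ingredient: a Sinkhorn-type fixed-point iteration finds a diagonal scaling that gives the kernel similarity matrix unit row sums while preserving symmetry, after which the normalized Laplacian follows. This is a generic illustration under assumed convergence of the simple iteration, not the paper's specific balancing algorithm.

      import numpy as np

      # Symmetry-preserving balancing: find diagonal d with d_i (K d)_i = 1,
      # so diag(d) K diag(d) is symmetric with unit row sums.
      def balance(K, iters=200):
          d = np.ones(K.shape[0])
          for _ in range(iters):
              d = np.sqrt(d / (K @ d))   # fixed point satisfies d_i (K d)_i = 1
          return d

      rng = np.random.default_rng(6)
      X = rng.random((50, 4))            # stand-in pixel feature vectors
      D2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
      K = np.exp(-D2 / 0.5)              # kernel similarity weights

      d = balance(K)
      W = np.diag(d) @ K @ np.diag(d)    # symmetric, rows sum to ~1
      L = np.eye(50) - W                 # normalized Laplacian: L @ const ~ 0
      print(np.abs(W.sum(axis=1) - 1).max())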

  4. New numerical method for radiation heat transfer in nonhomogeneous participating media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, J.R.; Tan, Zhiqiang

    A new numerical method, which solves the exact integral equations of distance-angular integration form for radiation transfer, is introduced in this paper. By constructing and prestoring the numerical integral formulas for the distance integral for appropriate kernel functions, this method eliminates the time-consuming evaluations of the kernels of the space integrals in the formal computations. In addition, when the number of elements in the system is large, the resulting coefficient matrix is quite sparse. Thus, either considerable time or much storage can be saved. A weakness of the method is discussed, and some remedies are suggested. As illustrations, some one-dimensional and two-dimensional problems in both homogeneous and inhomogeneous emitting, absorbing, and linear anisotropic scattering media are studied. Some results are compared with available data.

  5. Fractional Stochastic Differential Equations Satisfying Fluctuation-Dissipation Theorem

    NASA Astrophysics Data System (ADS)

    Li, Lei; Liu, Jian-Guo; Lu, Jianfeng

    2017-10-01

    We propose in this work a fractional stochastic differential equation (FSDE) model consistent with the over-damped limit of the generalized Langevin equation model. As a result of the `fluctuation-dissipation theorem', the differential equations driven by fractional Brownian noise to model memory effects should be paired with Caputo derivatives, and this FSDE model should be understood in an integral form. We establish the existence of strong solutions for such equations and discuss the ergodicity and convergence to the Gibbs measure. In the linear forcing regime, we show rigorously the algebraic convergence to the Gibbs measure when the `fluctuation-dissipation theorem' is satisfied, and this verifies that satisfying the `fluctuation-dissipation theorem' indeed leads to the correct physical behavior. We further discuss possible approaches to analyzing the ergodicity and convergence to the Gibbs measure in the nonlinear forcing regime, while leaving the rigorous analysis for future work. The FSDE model proposed is suitable for systems in contact with a heat bath with a power-law kernel and subdiffusion behaviors.

  6. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
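
    The MLMC idea can be stated in a few lines: estimate the expectation by a telescoping sum of level corrections, spending many samples on cheap coarse levels and few on expensive fine ones. The sketch below uses a toy level-dependent "solver" as a stand-in for the discretized nonlocal equation.

      import numpy as np

      rng = np.random.default_rng(7)

      # Toy level-l approximation of a quantity of interest P(theta):
      # the discretization error decays geometrically with the level.
      def P(theta, level):
          return np.sin(theta) + 2.0 ** (-level) * np.cos(3 * theta)

      def mlmc(levels, samples_per_level):
          # E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}] (telescoping sum)
          est = 0.0
          for l, n in zip(range(levels + 1), samples_per_level):
              theta = rng.normal(size=n)        # common random inputs per level
              if l == 0:
                  est += P(theta, 0).mean()
              else:
                  est += (P(theta, l) - P(theta, l - 1)).mean()
          return est

      # fewer samples at finer (more expensive) levels
      print(mlmc(levels=4, samples_per_level=[4000, 2000, 1000, 500, 250]))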

  7. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  8. On an integro-differential equation model for the study of the response of an acoustically coupled panel

    NASA Technical Reports Server (NTRS)

    Yen, D. H. Y.; Maestrello, L.; Padula, S.

    1975-01-01

    The response of a clamped panel to supersonically convected turbulence is considered. A theoretical model in the form of an integro-differential equation is employed that takes into account the coupling between the panel motion and the surrounding acoustic medium. The kernels of the integrals, which represent induced pressures due to the panel motion, are Green's functions for sound radiation from various moving and stationary sources. An approximate analysis is made by following a finite-element Ritz-Galerkin procedure. Preliminary numerical results, in agreement with experimental findings, indicate that acoustic damping is the controlling mechanism of the response.

  9. Study of molecular N D bound states in the Bethe-Salpeter equation approach

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-Yang; Qi, Jing-Juan; Guo, Xin-Heng; Wei, Ke-Wei

    2018-05-01

    We study the Λc(2595)+ and Σc(2800)0 states as N D bound systems in the Bethe-Salpeter formalism in the ladder and instantaneous approximations. With the kernel induced by ρ, ω and σ exchanges, we solve the Bethe-Salpeter equations for the N D bound systems numerically and find that the bound states may exist. We assume that the observed states Λc(2595)+ and Σc(2800)0 are S-wave N D molecular bound states and calculate the decay widths of Λc(2595)+ → Σc0 π+ and Σc(2800)0 → Λc+ π-.

  10. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2x over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.

  11. Predicting spatial patterns of plant recruitment using animal-displacement kernels.

    PubMed

    Santamaría, Luis; Rodríguez-Pérez, Javier; Larrinaga, Asier R; Pias, Beatriz

    2007-10-10

    For plants dispersed by frugivores, spatial patterns of recruitment are primarily influenced by the spatial arrangement and characteristics of parent plants, the digestive characteristics, feeding behaviour and movement patterns of animal dispersers, and the structure of the habitat matrix. We used an individual-based, spatially-explicit framework to characterize seed dispersal and seedling fate in an endangered, insular plant-disperser system: the endemic shrub Daphne rodriguezii and its exclusive disperser, the endemic lizard Podarcis lilfordi. Plant recruitment kernels were chiefly determined by the disperser's patterns of space utilization (i.e. the lizard's displacement kernels), the position of the various plant individuals in relation to them, and habitat structure (vegetation cover vs. bare soil). In contrast to our expectations, seed gut-passage rate and its effects on germination, and lizard speed-of-movement, habitat choice and activity rhythm were of minor importance. Predicted plant recruitment kernels were strongly anisotropic and fine-grained, preventing their description using one-dimensional, frequency-distance curves. We found a general trade-off between recruitment probability and dispersal distance; however, optimal recruitment sites were not necessarily associated with sites of maximal adult-plant density. Conservation efforts aimed at enhancing the regeneration of endangered plant-disperser systems may gain in efficacy by manipulating the spatial distribution of dispersers (e.g. through the creation of refuges and feeding sites) to create areas favourable to plant recruitment.

  12. Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2008-05-01

    In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
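
    A direct (if slow) transcription of SV gray-level erosion shows how the structuring function may change from pixel to pixel; dilation is the dual with a max and a plus sign. Hypothetical code, not the authors' implementation:

      # (f erode b)(x) = min over y in supp(b_x) of [ f(x+y) - b_x(y) ]
      import numpy as np

      def sv_erosion(f, struct_at):
          out = np.full(f.shape, np.inf)
          H, W = f.shape
          for x in np.ndindex(H, W):
              for dy, dx, bval in struct_at(x):   # spatially-variant structuring fn
                  yy, xx = x[0] + dy, x[1] + dx
                  if 0 <= yy < H and 0 <= xx < W:
                      out[x] = min(out[x], f[yy, xx] - bval)
          return out

      def struct_at(x):
          # toy SV rule: flat 3x3 element away from the top/left border, else 1x1
          r = 1 if x[0] >= 1 and x[1] >= 1 else 0
          return [(dy, dx, 0.0) for dy in range(-r, r + 1) for dx in range(-r, r + 1)]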

  13. Experimental pencil beam kernels derivation for 3D dose calculation in flattening filter free modulated fields

    NASA Astrophysics Data System (ADS)

    Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier

    2016-01-01

    This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work a formalism was developed for ideal flat fluences exiting the linac’s head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis where any spatially varying fluence can be used, such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated with the kernel derivation using this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The 3D kernel for a FFF beam was obtained by deconvolution using the Hankel transform. A correction on the low-dose part of the kernel was performed to reproduce accurately the experimental output factors. The uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically in different treatment localizations were irradiated at four measurement depths (a total of fifty-four film measurements). Comparison through the gamma-index to their corresponding calculated absolute dose distributions showed a number of passing points (3%, 3 mm) mostly above 99%. This new procedure is more reliable and robust than the previous one. Its ability to perform accurate independent dose calculations was demonstrated.
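
    The deconvolution step can be illustrated with a generic Fourier-space division plus damping; the paper works with the Hankel transform of the radially symmetric kernel and a dedicated low-dose correction, so the FFT route and the damping constant below are our stand-ins:

      # Recover the kernel from dose = fluence (convolved with) kernel.
      import numpy as np

      def deconvolve_kernel(dose, fluence, eps=1e-3):
          D = np.fft.fft2(dose)
          F = np.fft.fft2(fluence)
          K = D * np.conj(F) / (np.abs(F) ** 2 + eps)   # damped (Wiener-like) division
          return np.real(np.fft.ifft2(K))

    The damping constant trades noise amplification against resolution, playing a role loosely analogous to the low-dose correction described above.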

  14. Applications of algebraic topology to compatible spatial discretizations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bochev, Pavel Blagoveston; Hyman, James M.

    We provide a common framework for compatible discretizations using algebraic topology to guide our analysis. The main concept is the natural inner product on cochains, which induces a combinatorial Hodge theory. The framework comprises mutually consistent operations of differentiation and integration, has a discrete Stokes theorem, and preserves the invariants of the DeRham cohomology groups. The latter allows for an elementary calculation of the kernel of the discrete Laplacian. Our framework provides an abstraction that includes examples of compatible finite element, finite volume and finite difference methods. We describe how these methods result from the choice of a reconstruction operator and when they are equivalent.

  15. A semi-analytical method for near-trapped mode and fictitious frequencies of multiple scattering by an array of elliptical cylinders in water waves

    NASA Astrophysics Data System (ADS)

    Chen, Jeng-Tzong; Lee, Jia-Wei

    2013-09-01

    In this paper, we focus on the water wave scattering by an array of four elliptical cylinders. The null-field boundary integral equation method (BIEM) is used in conjunction with degenerate kernels and eigenfunction expansions. The closed-form fundamental solution is expressed in terms of the degenerate kernel containing the Mathieu and the modified Mathieu functions in elliptical coordinates. Boundary densities are represented by using the eigenfunction expansion. To avoid using the addition theorem to translate the Mathieu functions, the present approach can solve the water wave problem containing multiple elliptical cylinders in a semi-analytical manner by introducing the adaptive observer system. In water wave problems, the phenomenon of numerical instability due to fictitious frequencies may appear when the BIEM/boundary element method (BEM) is used. In addition, the near-trapped mode for an array of four identical elliptical cylinders is observed in a special layout. Both physical (near-trapped mode) and mathematical (fictitious frequency) resonances appear simultaneously in the present paper for water wave scattering by an array of four identical elliptical cylinders. Two regularization techniques, the combined Helmholtz interior integral equation formulation (CHIEF) method and the Burton and Miller approach, are adopted to alleviate the numerical resonance due to fictitious frequencies.

  16. A random walk description of individual animal movement accounting for periods of rest

    NASA Astrophysics Data System (ADS)

    Tilles, Paulo F. C.; Petrovskii, Sergei V.; Natti, Paulo L.

    2016-11-01

    Animals do not move all the time but alternate the period of actual movement (foraging) with periods of rest (e.g. eating or sleeping). Although the existence of rest times is widely acknowledged in the literature and has even become a focus of increased attention recently, the theoretical approaches to describe animal movement by calculating the dispersal kernel and/or the mean squared displacement (MSD) rarely take rests into account. In this study, we aim to bridge this gap. We consider a composite stochastic process where the periods of active dispersal or `bouts' (described by a certain baseline probability density function (pdf) of animal dispersal) alternate with periods of immobility. For this process, we derive a general equation that determines the pdf of this composite movement. The equation is analysed in detail in two special but important cases such as the standard Brownian motion described by a Gaussian kernel and the Lévy flight described by a Cauchy distribution. For the Brownian motion, we show that in the large-time asymptotics the effect of rests results in a rescaling of the diffusion coefficient. The movement occurs as a subdiffusive transition between the two diffusive asymptotics. Interestingly, the Lévy flight case shows similar properties, which indicates a certain universality of our findings.
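
    For the Brownian case, the rescaling can be anticipated with a simple renewal argument: over long times, only the fraction of time spent moving contributes to the mean squared displacement (our notation; the paper derives the exact result):

      \mathrm{MSD}(t) \;\simeq\; 2 d\, D_{\mathrm{eff}}\, t,
      \qquad
      D_{\mathrm{eff}} \;=\; D\,
      \frac{\langle \tau_{\mathrm{move}} \rangle}
           {\langle \tau_{\mathrm{move}} \rangle + \langle \tau_{\mathrm{rest}} \rangle}.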

  17. A random walk description of individual animal movement accounting for periods of rest.

    PubMed

    Tilles, Paulo F C; Petrovskii, Sergei V; Natti, Paulo L

    2016-11-01

    Animals do not move all the time but alternate the period of actual movement (foraging) with periods of rest (e.g. eating or sleeping). Although the existence of rest times is widely acknowledged in the literature and has even become a focus of increased attention recently, the theoretical approaches to describe animal movement by calculating the dispersal kernel and/or the mean squared displacement (MSD) rarely take rests into account. In this study, we aim to bridge this gap. We consider a composite stochastic process where the periods of active dispersal or 'bouts' (described by a certain baseline probability density function (pdf) of animal dispersal) alternate with periods of immobility. For this process, we derive a general equation that determines the pdf of this composite movement. The equation is analysed in detail in two special but important cases such as the standard Brownian motion described by a Gaussian kernel and the Lévy flight described by a Cauchy distribution. For the Brownian motion, we show that in the large-time asymptotics the effect of rests results in a rescaling of the diffusion coefficient. The movement occurs as a subdiffusive transition between the two diffusive asymptotics. Interestingly, the Lévy flight case shows similar properties, which indicates a certain universality of our findings.

  18. A random walk description of individual animal movement accounting for periods of rest

    PubMed Central

    Tilles, Paulo F. C.

    2016-01-01

    Animals do not move all the time but alternate the period of actual movement (foraging) with periods of rest (e.g. eating or sleeping). Although the existence of rest times is widely acknowledged in the literature and has even become a focus of increased attention recently, the theoretical approaches to describe animal movement by calculating the dispersal kernel and/or the mean squared displacement (MSD) rarely take rests into account. In this study, we aim to bridge this gap. We consider a composite stochastic process where the periods of active dispersal or ‘bouts’ (described by a certain baseline probability density function (pdf) of animal dispersal) alternate with periods of immobility. For this process, we derive a general equation that determines the pdf of this composite movement. The equation is analysed in detail in two special but important cases such as the standard Brownian motion described by a Gaussian kernel and the Lévy flight described by a Cauchy distribution. For the Brownian motion, we show that in the large-time asymptotics the effect of rests results in a rescaling of the diffusion coefficient. The movement occurs as a subdiffusive transition between the two diffusive asymptotics. Interestingly, the Lévy flight case shows similar properties, which indicates a certain universality of our findings. PMID:28018645

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brull, S., E-mail: Stephane.Brull@math.u-bordeaux.fr; Charrier, P., E-mail: Pierre.Charrier@math.u-bordeaux.fr; Mieussens, L., E-mail: Luc.Mieussens@math.u-bordeaux.fr

    It is well known that the roughness of the wall has an effect on microscale gas flows. This effect can be shown for large Knudsen numbers by using a numerical solution of the Boltzmann equation. However, when the wall is rough at a nanometric scale, it is necessary to use a very small mesh size, which is much too expensive. An alternative approach is to incorporate the roughness effect in the scattering kernel of the boundary condition, such as the Maxwell-like kernel introduced by the authors in a previous paper. Here, we explain how this boundary condition can be implemented in a discrete velocity approximation of the Boltzmann equation. Moreover, the influence of the roughness is shown by computing the structure of the scattering pattern of mono-energetic beams of the incident gas molecules. The effect of the angle of incidence of these molecules, of their mass, and of the morphology of the wall is investigated and discussed in a simplified two-dimensional configuration. The effect of the azimuthal angle of the incident beams is shown for a three-dimensional configuration. Finally, the case of non-elastic scattering is considered. All these results suggest that our approach is a promising way to incorporate enough physics of gas-surface interaction, at a reasonable computing cost, to improve kinetic simulations of micro- and nano-flows.
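
    For orientation, the classical Maxwell scattering kernel that such roughness-aware kernels generalize mixes a specular and a diffuse part (our notation; β is the accommodation coefficient, n the wall normal, T_w the wall temperature, C_w a flux normalization constant):

      R(v' \to v) = (1-\beta)\,\delta\big(v - v' + 2\,(v' \cdot n)\, n\big)
      \;+\; \beta\, C_w\, |v \cdot n|\,
      \exp\!\Big(-\frac{|v|^2}{2 R T_w}\Big).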

  20. A problem with inverse time for a singularly perturbed integro-differential equation with diagonal degeneration of the kernel of high order

    NASA Astrophysics Data System (ADS)

    Bobodzhanov, A. A.; Safonov, V. F.

    2016-04-01

    We consider an algorithm for constructing asymptotic solutions regularized in the sense of Lomov (see [1], [2]). We show that such problems can be reduced to integro-differential equations with inverse time. But in contrast to known papers devoted to this topic (see, for example, [3]), in this paper we study a fundamentally new case, which is characterized by the absence, in the differential part, of a linear operator that isolates, in the asymptotics of the solution, constituents described by boundary functions and by the fact that the integral operator has kernel with diagonal degeneration of high order. Furthermore, the spectrum of the regularization operator A(t) (see below) may contain purely imaginary eigenvalues, which causes difficulties in the application of the methods of construction of asymptotic solutions proposed in the monograph [3]. Based on an analysis of the principal term of the asymptotics, we isolate a class of inhomogeneities and initial data for which the exact solution of the original problem tends to the limit solution (as \\varepsilon\\to+0) on the entire time interval under consideration, also including a boundary-layer zone (that is, we solve the so-called initialization problem). The paper is of a theoretical nature and is designed to lead to a greater understanding of the problems in the theory of singular perturbations. There may be applications in various applied areas where models described by integro-differential equations are used (for example, in elasticity theory, the theory of electrical circuits, and so on).

  1. Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.

    PubMed

    Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K

    2013-03-01

    Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
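
    One building block, kernel k-means, is compact enough to sketch; the full method additionally uses the dynamic time-alignment kernel and dynamic programming over segment boundaries, neither of which is reproduced here:

      # Kernel k-means on an n x n Gram matrix K (illustrative, not the HACA code).
      import numpy as np

      def kernel_kmeans(K, k, iters=50, seed=0):
          rng = np.random.default_rng(seed)
          n = K.shape[0]
          labels = rng.integers(0, k, n)
          for _ in range(iters):
              dist = np.full((n, k), np.inf)
              for c in range(k):
                  idx = np.flatnonzero(labels == c)
                  if idx.size == 0:
                      continue
                  # ||phi(i) - mean_c||^2 in feature space, via the kernel trick
                  dist[:, c] = (np.diag(K)
                                - 2.0 * K[:, idx].mean(axis=1)
                                + K[np.ix_(idx, idx)].mean())
              new = dist.argmin(axis=1)
              if np.array_equal(new, labels):
                  break
              labels = new
          return labels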

  2. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  3. A new numerical approach for uniquely solvable exterior Riemann-Hilbert problem on region with corners

    NASA Astrophysics Data System (ADS)

    Zamzamir, Zamzana; Murid, Ali H. M.; Ismail, Munira

    2014-06-01

    The numerical solution of a uniquely solvable exterior Riemann-Hilbert problem on a region with corners at off-corner points has been explored by discretizing the related integral equation using the Picard iteration method without any modifications to the left-hand side (LHS) and right-hand side (RHS) of the integral equation. The numerical errors converge to the required solution for all iterations. However, for certain problems, this gives lower accuracy. Hence, this paper presents a new numerical approach for the problem by treating the generalized Neumann kernel on the LHS and the function on the RHS of the integral equation. Due to the existence of the corner points, Gaussian quadrature is employed, which avoids the corner points during numerical integration. A numerical example on a test region is presented to demonstrate the effectiveness of this formulation.
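
    The Picard iteration named above is plain successive approximation; a quadrature (Nystrom) sketch for a second-kind equation is shown below, with Gauss-Legendre nodes standing in for the corner-avoiding quadrature (the kernel and right-hand side are toys, not the generalized Neumann kernel):

      # Solve mu(x) - integral of k(x, t) mu(t) dt = rhs(x) by Picard iteration.
      import numpy as np

      def picard_solve(kernel, rhs, nodes, weights, iters=200):
          A = kernel(nodes[:, None], nodes[None, :]) * weights[None, :]
          mu = rhs(nodes)
          for _ in range(iters):
              mu = rhs(nodes) + A @ mu      # successive approximation step
          return mu

      # Gauss-Legendre nodes lie strictly inside the interval (no endpoint hits)
      nodes, weights = np.polynomial.legendre.leggauss(64)
      mu = picard_solve(lambda x, t: 0.3 * np.exp(-(x - t) ** 2),
                        lambda x: np.cos(np.pi * x), nodes, weights)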

  4. Toward lattice fractional vector calculus

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2014-09-01

    An analog of fractional vector calculus for physical lattice models is suggested. We use an approach based on the models of three-dimensional lattices with long-range inter-particle interactions. The lattice analogs of fractional partial derivatives are represented by kernels of lattice long-range interactions, where the Fourier series transformations of these kernels have a power-law form with respect to wave vector components. In the continuum limit, these lattice partial derivatives give derivatives of non-integer order with respect to coordinates. In the three-dimensional description of the non-local continuum, the fractional differential operators have the form of fractional partial derivatives of the Riesz type. As examples of the applications of the suggested lattice fractional vector calculus, we give lattice models with long-range interactions for the fractional Maxwell equations of non-local continuous media and for the fractional generalization of the Mindlin and Aifantis continuum models of gradient elasticity.

  5. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper truncated hierarchical B-splines (THB-splines) are coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides a genuinely meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and shows promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  6. A Nonrigid Kernel-Based Framework for 2D-3D Pose Estimation and 2D Image Segmentation

    PubMed Central

    Sandhu, Romeil; Dambreville, Samuel; Yezzi, Anthony; Tannenbaum, Allen

    2013-01-01

    In this work, we present a nonrigid approach to jointly solving the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks that couple both pose estimation and segmentation assume that one has exact knowledge of the 3D object. However, under nonideal conditions, this assumption may be violated if only a general class to which a given shape belongs is given (e.g., cars, boats, or planes). Thus, we propose to solve the 2D-3D pose estimation and 2D image segmentation via nonlinear manifold learning of 3D embedded shapes for a general class of objects or deformations for which one may not be able to associate a skeleton model. Thus, the novelty of our method is threefold: First, we present and derive a gradient flow for the task of nonrigid pose estimation and segmentation. Second, due to the possible nonlinear structures of one’s training set, we evolve the preimage obtained through kernel PCA for the task of shape analysis. Third, we show that the derivation for shape weights is general. This allows us to use various kernels, as well as other statistical learning methodologies, with only minimal changes needing to be made to the overall shape evolution scheme. In contrast with other techniques, we approach the nonrigid problem, which is an infinite-dimensional task, with a finite-dimensional optimization scheme. More importantly, we do not explicitly need to know the interaction between various shapes such as that needed for skeleton models as this is done implicitly through shape learning. We provide experimental results on several challenging pose estimation and segmentation scenarios. PMID:20733218

  7. Agile convolutional neural network for pulmonary nodule classification using CT images.

    PubMed

    Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei

    2018-04-01

    To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to overcome the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
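
    The "hybrid" idea (LeNet-like depth, AlexNet-like training choices) can be sketched as follows; the layer sizes, the 7x7 kernels, and the input patch size are our assumptions, while the learning rate 0.005, batch size 32, dropout, and Gaussian initialization come from the abstract:

      # Hypothetical hybrid CNN sketch (PyTorch); not the paper's exact model.
      import torch
      import torch.nn as nn

      class AgileCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(          # LeNet-like two-conv layout
                  nn.Conv2d(1, 6, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(6, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(), nn.Dropout(0.5),      # AlexNet-like dropout
                  nn.LazyLinear(2),                   # benign vs. malignant
              )

          def forward(self, x):                       # x: (batch, 1, 32, 32) assumed
              return self.classifier(self.features(x))

      model = AgileCNN()
      opt = torch.optim.SGD(model.parameters(), lr=0.005)   # lr from the abstract
      # batch size 32 and Gaussian weight init would be applied in the training loop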

  8. Principal Dynamic Mode Analysis of the Hodgkin–Huxley Equations

    PubMed Central

    Eikenberry, Steffen E.; Marmarelis, Vasilis Z.

    2015-01-01

    We develop an autoregressive model framework based on the concept of Principal Dynamic Modes (PDMs) for the process of action potential (AP) generation in the excitable neuronal membrane described by the Hodgkin–Huxley (H–H) equations. The model's exogenous input is injected current, and whenever the membrane potential output exceeds a specified threshold, it is fed back as a second input. The PDMs are estimated from the previously developed Nonlinear Autoregressive Volterra (NARV) model, and represent an efficient functional basis for Volterra kernel expansion. The PDM-based model admits a modular representation, consisting of the forward and feedback PDM bases as linear filterbanks for the exogenous and autoregressive inputs, respectively, whose outputs are then fed to a static nonlinearity composed of polynomials operating on the PDM outputs and cross-terms of pair-products of PDM outputs. A two-step procedure for model reduction is performed: first, influential subsets of the forward and feedback PDM bases are identified and selected as the reduced PDM bases. Second, the terms of the static nonlinearity are pruned. The first step reduces model complexity from a total of 65 coefficients to 27, while the second further reduces the model coefficients to only eight. It is demonstrated that the performance cost of model reduction in terms of out-of-sample prediction accuracy is minimal. Unlike the full model, the eight coefficient pruned model can be easily visualized to reveal the essential system components, and thus the data-derived PDM model can yield insight into the underlying system structure and function. PMID:25630480

  9. Local uncontrollability for affine control systems with jumps

    NASA Astrophysics Data System (ADS)

    Treanţă, Savin

    2017-09-01

    This paper investigates affine control systems with jumps for which the ideal If(g1, …, gm) generated by the drift vector field f in the Lie algebra L(f, g1, …, gm) can be imbedded as the kernel of a linear first-order partial differential equation. This leads to uncontrollable affine control systems with jumps for which the corresponding reachable sets are included in explicitly described differentiable manifolds.

  10. Modelling, Information, Processing, and Control

    DTIC Science & Technology

    1989-01-15

    [Report documentation page garbled by OCR. Legible fragments indicate that funds supported graduate research assistants and short-term consultants and visitors, and cite two references: one in SIAM J. Control and Optimization, 34 (1986), pp. 1276-1308, and D. L. Russell, "A Floquet Decomposition for Volterra Equations with Periodic Kernel and a Transform".]

  11. Electromagnetics. Volume 1, Number 4, October-December 1981.

    DTIC Science & Technology

    1981-01-01

    [Abstract fragments recovered from OCR:] ... Integral equations have been cast in approximate numerical form by the moment method (MoM). ... introduced the eigenmode expansion method to find more properties of the SEM [3.4]. One defines eigenvalues and eigenmodes for the integral operator (kernel) ... exterior surface of the system. Mechanisms that play a role in the penetration are (1) diffusion through metal skins, (2) field leakage through ...

  12. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  13. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    PubMed

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply reduces to a convex programming problem for which a closed-form solution is guaranteed. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, yielding the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  14. Encoding Dissimilarity Data for Statistical Model Building.

    PubMed

    Wahba, Grace

    2010-12-01

    We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba (2005), A framework for kernel regularization with application to protein clustering, Proceedings of the National Academy of Sciences 102, 12332-12337; G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar (2009), Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models, Proceedings of the National Academy of Sciences 106, 8128-8133; and F. Lu, Y. Lin and G. Wahba, Robust manifold unfolding with kernel regularization, TR 1008, Department of Statistics, University of Wisconsin-Madison.

  15. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa

    2017-09-01

    A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of the kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects from big noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, the random kernel sinks for fast kernel matrix approximation and the two-stage algorithm for optimizing initial pure endmembers are utilized to improve its computational efficiency in realistic implementations of RKADA, respectively. The optimization equation of RKADA is solved by using the block coordinate descend scheme and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed to make comparisons with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all the six methods in terms of spectral angle distance (SAD) and root-mean-square-error (RMSE). Moreover, the RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with smaller spectral differences. Therefore, the RKADA could be an alternative for pure endmember extraction from hyperspectral images.

  16. Resumming double logarithms in the QCD evolution of color dipoles

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-05-01

    The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.

  17. A solution to coupled Dyson-Schwinger equations for gluons and ghosts in Landau gauge.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Smekal, L.; Alkofer, R.; Hauck, A.

    1998-07-20

    A truncation scheme for the Dyson-Schwinger equations of QCD in Landau gauge is presented which implements the Slavnov-Taylor identities for the 3-point vertex functions. Neglecting contributions from 4-point correlations such as the 4-gluon vertex function and irreducible scattering kernels, a closed system of equations for the propagators is obtained. For the pure gauge theory without quarks, this system of equations for the propagators of gluons and ghosts is solved in an approximation which allows for an analytic discussion of its solutions in the infrared: the gluon propagator is shown to vanish for small spacelike momenta whereas the ghost propagator is found to be infrared enhanced. The running coupling of the non-perturbative subtraction scheme approaches an infrared stable fixed point at a critical value of the coupling, alpha_c ≈ 9.5. The gluon propagator is shown to have no Lehmann representation. The results for the propagators obtained here compare favorably with recent lattice calculations.

  18. An analytical method for the inverse Cauchy problem of the Lamé equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lamé equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
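
    Schematically (in our notation, with K φ_n = λ_n φ_n for the separable kernel), the Lavrentiev step replaces each ill-posed first-kind equation by a nearby second-kind one whose solution is available termwise:

      K u = f
      \;\longrightarrow\;
      (\alpha I + K)\, u_\alpha = f,
      \qquad
      u_\alpha = \sum_{n} \frac{\langle f, \phi_n \rangle}{\alpha + \lambda_n}\, \phi_n,

    with one regularization parameter per Cauchy subproblem (three in total here).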

  19. TIME-DOMAIN METHODS FOR DIFFUSIVE TRANSPORT IN SOFT MATTER

    PubMed Central

    Fricks, John; Yao, Lingxing; Elston, Timothy C.; Forest, M. Gregory

    2015-01-01

    Passive microrheology [12] utilizes measurements of noisy, entropic fluctuations (i.e., diffusive properties) of micron-scale spheres in soft matter to infer bulk frequency-dependent loss and storage moduli. Here, we are concerned exclusively with diffusion of Brownian particles in viscoelastic media, for which the Mason-Weitz theoretical-experimental protocol is ideal, and the more challenging inference of bulk viscoelastic moduli is decoupled. The diffusive theory begins with a generalized Langevin equation (GLE) with a memory drag law specified by a kernel [7, 16, 22, 23]. We start with a discrete formulation of the GLE as an autoregressive stochastic process governing microbead paths measured by particle tracking. For the inverse problem (recovery of the memory kernel from experimental data) we apply time series analysis (maximum likelihood estimators via the Kalman filter) directly to bead position data, an alternative to formulas based on mean-squared displacement statistics in frequency space. For direct modeling, we present statistically exact GLE algorithms for individual particle paths as well as statistical correlations for displacement and velocity. Our time-domain methods rest upon a generalization of well-known results for a single-mode exponential kernel [1, 7, 22, 23] to an arbitrary M-mode exponential series, for which the GLE is transformed to a vector Ornstein-Uhlenbeck process. PMID:26412904
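
    The reduction used for the direct algorithms can be stated in a line or two of algebra (our notation): an M-mode exponential memory kernel turns the GLE into a finite-dimensional Ornstein-Uhlenbeck system by introducing one auxiliary variable per mode.

      m\,\dot v(t) = -\int_0^t \gamma(t-s)\,v(s)\,ds + F(t),
      \qquad \gamma(t) = \sum_{k=1}^{M} c_k\, e^{-\lambda_k t};

      z_k(t) = \int_0^t e^{-\lambda_k (t-s)}\, v(s)\,ds
      \;\Rightarrow\; \dot z_k = v - \lambda_k z_k,
      \qquad m\,\dot v = -\sum_{k=1}^{M} c_k\, z_k + F,

    so the state (x, v, z_1, ..., z_M) evolves as a linear (vector) Ornstein-Uhlenbeck process.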

  20. Online polarimetry of the Nuclotron internal deuteron and proton beams

    NASA Astrophysics Data System (ADS)

    Isupov, A. Yu

    2017-12-01

    The spin studies at Nuclotron require fast and precise determination of the deuteron and proton beam polarization. For these purposes, a new powerful VME-based data acquisition (DAQ) system has been designed for the Deuteron Spin Structure setup placed at the Nuclotron Internal Target Station. The DAQ system is built using the netgraph-based data acquisition and processing framework ngdp. The software dealing with the VME hardware is a set of netgraph nodes in the form of loadable kernel modules, and thus runs in the operating-system kernel context. The implementation-specific nodes and the user-context utilities are described. Representing online events by ROOT classes allows us to generalize the code for histogram filling and polarization calculations. The DAQ system was successfully used during the 53rd and 54th Nuclotron runs, and its suitability for online polarimetry has been demonstrated.

  1. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2017-07-01

    This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform, however two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
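
    The vote-then-density mechanism can be illustrated with axis-direction votes cast by random point pairs and a scipy KDE in place of a Hough accumulator (a toy stand-in: the paper votes in the full cylinder parameter space, with its own bandwidth selection):

      # Illustrative continuous-space voting: pair directions + KDE maxima.
      import numpy as np
      from scipy.stats import gaussian_kde

      def strongest_axis(points, n_pairs=5000, seed=0):
          rng = np.random.default_rng(seed)
          i = rng.integers(0, len(points), n_pairs)
          j = rng.integers(0, len(points), n_pairs)
          d = points[j] - points[i]            # chords between surface points
          norms = np.linalg.norm(d, axis=1)
          mask = norms > 1e-9
          d = d[mask] / norms[mask][:, None]
          d[d[:, 2] < 0] *= -1                 # +/-d denote the same axis direction
          kde = gaussian_kde(d.T)              # continuous vote density, no binning
          return d[np.argmax(kde(d.T))]        # densest direction vote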

  2. Analogy between the Navier-Stokes equations and Maxwell's equations: Application to turbulence

    NASA Astrophysics Data System (ADS)

    Marmanis, Haralambos

    1998-06-01

    A new theory of turbulence is initiated, based on the analogy between electromagnetism and turbulent hydrodynamics, for the purpose of describing the dynamical behavior of averaged flow quantities in incompressible fluid flows of high Reynolds numbers. The starting point is the recognition that the vorticity (w = ∇×u) and the Lamb vector (l = w×u) should be taken as the kernel of a dynamical theory of turbulence. The governing equations for these fields can be obtained from the Navier-Stokes equations, which underlie the whole evolution. Then whatever parts are not explicitly expressed as functions of w or l only are gathered and treated as source terms. This is done by introducing the concepts of turbulent charge and turbulent current. Thus we are led to a closed set of linear equations for the averaged field quantities. The premise is that the sources introduced earlier will be apt for modeling, in the sense that their distribution will depend only on the geometry and the total energetics of the flow. The dynamics described in the preceding manner is what we call metafluid dynamics.
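
    The analogy can be made concrete by rewriting the Navier-Stokes equations in Lamb form and taking their curl and divergence (a hedged sketch in our notation; sign conventions for the turbulent charge vary):

      \partial_t u + l = -\nabla\big(p + \tfrac{1}{2}|u|^2\big) + \nu\,\nabla^2 u,

      \partial_t w + \nabla \times l = \nu\,\nabla^2 w
      \qquad \text{(curl: the Faraday-law analogue)},

      \nabla \cdot l = -\nabla^2\big(p + \tfrac{1}{2}|u|^2\big)
      \qquad \text{(divergence: the Gauss-law analogue, whose right side plays the role of the turbulent charge)}.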

  3. A fast and well-conditioned spectral method for singular integral equations

    NASA Astrophysics Data System (ADS)

    Slevinsky, Richard Mikael; Olver, Sheehan

    2017-03-01

    We develop a spectral method for solving univariate singular integral equations over unions of intervals by utilizing Chebyshev and ultraspherical polynomials to reformulate the equations as almost-banded infinite-dimensional systems. This is accomplished by utilizing low rank approximations for sparse representations of the bivariate kernels. The resulting system can be solved in O(m^2 n) operations using an adaptive QR factorization, where m is the bandwidth and n is the optimal number of unknowns needed to resolve the true solution. The complexity is reduced to O(mn) operations by pre-caching the QR factorization when the same operator is used for multiple right-hand sides. Stability is proved by showing that the resulting linear operator can be diagonally preconditioned to be a compact perturbation of the identity. Applications considered include the Faraday cage, and acoustic scattering for the Helmholtz and gravity Helmholtz equations, including spectrally accurate numerical evaluation of the far- and near-field solution. The Julia software package SingularIntegralEquations.jl implements our method with a convenient, user-friendly interface.

  4. Dusty Pair Plasma—Wave Propagation and Diffusive Transition of Oscillations

    NASA Astrophysics Data System (ADS)

    Atamaniuk, Barbara; Turski, Andrzej J.

    2011-11-01

    The crucial point of the paper is the relation between equilibrium distributions of plasma species and the type of propagation or diffusive transition of plasma response to a disturbance. The paper contains a unified treatment of disturbance propagation (transport) in the linearized Vlasov electron-positron and fullerene pair plasmas containing charged dust impurities, based on the space-time convolution integral equations. Electron-positron-dust/ion (e-p-d/i) plasmas are rather widespread in nature. Space-time responses of multi-component linearized Vlasov plasmas on the basis of multiple integral equations are invoked. An initial-value problem for Vlasov-Poisson/Ampère equations is reduced to the one multiple integral equation and the solution is expressed in terms of forcing function and its space-time convolution with the resolvent kernel. The forcing function is responsible for the initial disturbance and the resolvent is responsible for the equilibrium velocity distributions of plasma species. By use of resolvent equations, time-reversibility, space-reflexivity and the other symmetries are revealed. The symmetries carry on physical properties of Vlasov pair plasmas, e.g., conservation laws. Properly choosing equilibrium distributions for dusty pair plasmas, we can reduce the resolvent equation to: (i) the undamped dispersive wave equations, (ii) and diffusive transport equations of oscillations.

  5. A Framework For Dynamic Subversion

    DTIC Science & Technology

    2003-06-01

    [Abstract fragments recovered from OCR:] ... informal methods. These methods examine the security requirements and the security specification, also called the Formal Top Level Specification, and its ... may not always be invoked due to its possible deactivation by errant or malicious code. Further, the RVM, if no separation exists between the kernel ... that this thesis focused on, is the means by which the dynamic portion of the artifice finds space to operate, is loaded, and is relocated in its ...

  6. WE-EF-207-01: FEATURED PRESENTATION and BEST IN PHYSICS (IMAGING): Task-Driven Imaging for Cone-Beam CT in Interventional Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gang, G; Stayman, J; Ouadah, S

    2015-06-15

    Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction and non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm where the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions and the coefficients are then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)

  7. Reversible exciplex formation followed by charge separation.

    PubMed

    Petrova, M V; Burshtein, A I

    2008-12-25

    The reversible exciplex formation followed by its decomposition into an ion pair is considered, taking into account the subsequent geminate and bulk ion recombination to the triplet and singlet products (in excited and ground states). The integral kinetic equations are derived for all state populations, assuming that the spin conversion is performed by the simplest incoherent (rate) mechanism. When the forward and backward electron transfer, as well as all the dissociation/association reactions of the heavy particles, are contact, the kernels of the integral equations are specified and expressed through numerous reaction constants and the characteristics of encounter diffusion. The solutions of these equations are used to specify the quantum yields of the excited-state and exciplex fluorescence induced by pulsed or stationary pumping. In the former case, the yields of the free ions and triplet products are also found, while in the latter case their stationary concentrations are obtained.

  8. Interaction between a circular inclusion and an arbitrarily oriented crack

    NASA Technical Reports Server (NTRS)

    Erdogan, F.; Gupta, G. D.; Ratwani, M.

    1975-01-01

    The plane interaction problem for a circular elastic inclusion embedded in an elastic matrix which contains an arbitrarily oriented crack is considered. Using the existing solutions for the edge dislocations as Green's functions, first the general problem of a through crack in the form of an arbitrary smooth arc located in the matrix in the vicinity of the inclusion is formulated. The integral equations for the line crack are then obtained as a system of singular integral equations with simple Cauchy kernels. The singular behavior of the stresses around the crack tips is examined and the expressions for the stress-intensity factors representing the strength of the stress singularities are obtained in terms of the asymptotic values of the density functions of the integral equations. The problem is solved for various typical crack orientations and the corresponding stress-intensity factors are given.

  9. An extension to the Chahine method of inverting the radiative transfer equation. [application to ozone distribution in atmosphere

    NASA Technical Reports Server (NTRS)

    Twomey, S.; Herman, B.; Rabinoff, R.

    1977-01-01

    An extension of the Chahine relaxation method (1970) for inverting the radiative transfer equation is presented. The extended method is superior to the original in that it takes into account, in a realistic manner, the shape of the kernel function, and its extension to nonlinear systems is much more straightforward. A comparison of the new method with a matrix method due to Twomey (1965), in a problem involving inference of the vertical distribution of ozone from spectroscopic measurements in the near ultraviolet, indicates that the new method is stable with errors in the input data of up to 4%, whereas the matrix method breaks down at these levels. The problem of non-uniqueness of the solution, which is a property of the system of equations rather than of any particular algorithm for solving them, remains, although it takes on slightly different forms for the two algorithms.

  10. Test particle propagation in magnetostatic turbulence. 2: The local approximation method

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.

    1976-01-01

    An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.

  11. Nonlocal equation for the superconducting gap parameter

    NASA Astrophysics Data System (ADS)

    Simonucci, S.; Strinati, G. Calvanese

    2017-08-01

    The properties of a nonlocal (integral) equation for the superconducting gap parameter are considered in detail. The equation is obtained by a coarse-graining procedure applied to the Bogoliubov-de Gennes (BdG) equations over the whole coupling-versus-temperature phase diagram associated with the superfluid phase. It is found that the limiting size of the coarse-graining procedure, which is dictated by the range of the kernel of this integral equation, corresponds to the size of the Cooper pairs over the whole coupling-versus-temperature phase diagram up to the critical temperature, even when Cooper pairs turn into composite bosons on the BEC side of the BCS-BEC crossover. A practical method is further implemented to solve this integral equation numerically in an efficient way, based on a novel algorithm for calculating the Fourier transforms. Application of this method to the case of an isolated vortex, throughout the BCS-BEC crossover and for all temperatures in the superfluid phase, helps clarify the nature of the length scales associated with a single vortex and the kinds of details that are in practice disposed of by the coarse-graining procedure on the BdG equations.

  12. Some operational tools for solving fractional and higher integer order differential equations: A survey on their mutual relations

    NASA Astrophysics Data System (ADS)

    Kiryakova, Virginia S.

    2012-11-01

    The Laplace Transform (LT) serves as a basis of the Operational Calculus (OC), widely explored by engineers and applied scientists in solving mathematical models for their practical needs. This transform is closely related to the exponential and trigonometric functions (exp, cos, sin) and to the classical differentiation and integration operators, reducing them to simple algebraic operations. Thus, the classical LT and the OC provide a useful tool for handling differential equations and systems with constant coefficients. Several generalizations of the LT have been introduced to allow solving, in a similar way, differential equations with variable coefficients and of higher integer orders, as well as of fractional (arbitrary non-integer) orders. Note that fractional-order mathematical models have recently been widely used to better describe various systems and phenomena of the real world. This paper briefly surveys some of our results on classes of such integral transforms, which can be obtained from the LT by means of "transmutations" that are operators of the generalized fractional calculus (GFC). On the list of these Laplace-type integral transforms, we consider the Borel-Dzrbashjan, Meijer, Krätzel, Obrechkoff, generalized Obrechkoff (multi-index Borel-Dzrbashjan) transforms, etc. All of them are G- and H-integral transforms of convolutional type, having as kernels Meijer's G- or Fox's H-functions. Besides, some special functions (also being G- and H-functions), among them the generalized Bessel-type and Mittag-Leffler (M-L) type functions, generate Gel'fond-Leontiev (G-L) operators of generalized differentiation and integration, which also happen to be operators of the GFC. Our integral transforms have operational properties analogous to those of the LT - they algebrize the G-L generalized integrations and differentiations, and thus can serve for solving wide classes of differential equations with variable coefficients of arbitrary, including non-integer, order. Throughout the survey, we illustrate the parallels in the relationships: Laplace-type integral transforms - special functions as kernels - operators of generalized integration and differentiation generated by special functions - special functions as solutions of related differential equations. The role of the so-called Special Functions of Fractional Calculus is emphasized.

  13. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace, obtained via an extremely efficient parametric method, is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. It is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
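
    As a minimal caricature of the hybrid strategy (all values and function names below are hypothetical, not the authors' implementation), one can pair analytic Gaussian conditionals in one variable with a kernel density estimate in another, in Python:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100                              # small ensemble, as in the abstract
      x = rng.standard_normal(n)           # "low-dimensional" ensemble members
      mu = 2.0 * np.tanh(x)                # analytic conditional means mu_i = mu(x_i)
      sig = 0.5                            # analytic conditional spread
      h = 1.06 * x.std() * n ** (-0.2)     # Silverman bandwidth for the KDE part

      def gauss(z, m, s):
          return np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

      def hybrid_pdf(xq, yq):
          # p(x, y) ~ (1/n) * sum_i K_h(xq - x_i) * N(yq; mu_i, sig^2):
          # nonparametric kernel density in x, parametric Gaussians in y
          return float(np.mean(gauss(xq, x, h) * gauss(yq, mu, sig)))

      print(hybrid_pdf(0.0, 0.0))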

  14. Modeling and Simulation With Operational Databases to Enable Dynamic Situation Assessment & Prediction

    DTIC Science & Technology

    2010-11-01

    subsections discuss the design of the simulations. 3.12.1 Lanchester5D Simulation A Lanchester simulation was developed to conduct performance...benchmarks using the WarpIV Kernel and HyperWarpSpeed. The Lanchester simulation contains a user-definable number of grid cells in which blue and red...forces engage in battle using Lanchester equations. Having a user-definable number of grid cells enables the simulation to be stressed with high entity
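
    For orientation, the Lanchester dynamics named in the snippet reduce, within a single grid cell, to a pair of coupled attrition ODEs. The sketch below (invented coefficients; unrelated to the WarpIV Kernel code) integrates the aimed-fire form in Python:

      from scipy.integrate import solve_ivp

      a, b = 0.05, 0.04                  # attrition coefficients (hypothetical)

      def lanchester(t, y):
          B, R = y                       # blue and red force levels
          return [-a * R, -b * B]        # each side's losses scale with the other's size

      sol = solve_ivp(lanchester, (0.0, 50.0), [1000.0, 900.0])
      print(f"after 50 time units: blue={sol.y[0, -1]:.0f}, red={sol.y[1, -1]:.0f}")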

  15. Removal of gadolinium-based contrast agents: adsorption on activated carbon.

    PubMed

    Elizalde-González, María P; García-Díaz, Esmeralda; González-Perea, Mario; Mattusch, Jürgen

    2017-03-01

    Three carbon samples were employed in this work: a commercial carbon (1690 m² g⁻¹), an activated carbon prepared from guava seeds (637 m² g⁻¹), and an activated carbon prepared from avocado kernel (1068 m² g⁻¹). They were used to study the adsorption of the following gadolinium-based contrast agents (GBCAs): gadoterate meglumine (Dotarem®), gadopentetate dimeglumine (Magnevist®), and gadoxetate disodium (Primovist®). The activation conditions with H₃PO₄ were optimized using a Taguchi methodology to obtain mesoporous materials. The best removal efficiency per square meter in a batch system, in aqueous solution and in model urine, was achieved by the avocado kernel carbon, in which mesoporosity prevails over microporosity. The kinetic adsorption curves were described by a pseudo-second-order equation, and the adsorption isotherms in the concentration range 0.5-6 mM fit the Freundlich equation. The chemical characterization of the surfaces shows that materials with a greater amount of phenolic functional groups adsorb the GBCAs better. Adsorption strongly depends on the pH due to the combination of the following factors: the protonated forms of the contrast agent and the carbon surface charge. The tested carbon samples were able to adsorb 70-90% of the GBCAs in aqueous solution and less in model urine. This research proposes a method for the elimination of GBCAs from patient urine before its discharge into wastewater.
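
    The two fitted laws mentioned above are easy to reproduce on synthetic data; the sketch below (made-up measurements, not the paper's data) fits the integrated pseudo-second-order kinetic law and the Freundlich isotherm with SciPy:

      import numpy as np
      from scipy.optimize import curve_fit

      def pso(t, qe, k2):
          # integrated pseudo-second-order law: q(t) = k2*qe^2*t / (1 + k2*qe*t)
          return k2 * qe**2 * t / (1.0 + k2 * qe * t)

      def freundlich(c, kf, n):
          return kf * c ** (1.0 / n)     # q_eq = Kf * C^(1/n)

      t = np.array([5, 10, 20, 40, 80, 160.0])      # contact time (min)
      q = np.array([12, 19, 27, 33, 37, 39.0])      # uptake (mg/g), synthetic
      (qe, k2), _ = curve_fit(pso, t, q, p0=(40.0, 0.01))

      c = np.array([0.5, 1, 2, 3, 4, 6.0])          # equilibrium conc. (mM)
      qeq = np.array([15, 21, 30, 36, 41, 49.0])    # uptake (mg/g), synthetic
      (kf, n), _ = curve_fit(freundlich, c, qeq, p0=(20.0, 2.0))
      print(f"qe={qe:.1f} mg/g, k2={k2:.3g}; Kf={kf:.1f}, 1/n={1/n:.2f}")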

  16. Congested Aggregation via Newtonian Interaction

    NASA Astrophysics Data System (ADS)

    Craig, Katy; Kim, Inwon; Yao, Yao

    2018-01-01

    We consider a congested aggregation model that describes the evolution of a density through the competing effects of nonlocal Newtonian attraction and a hard height constraint. This provides a counterpoint to existing literature on repulsive-attractive nonlocal interaction models, where the repulsive effects instead arise from an interaction kernel or the addition of diffusion. We formulate our model as the Wasserstein gradient flow of an interaction energy, with a penalization to enforce the constraint on the height of the density. From this perspective, the problem can be seen as a singular limit of the Keller-Segel equation with degenerate diffusion. Two key properties distinguish our problem from previous work on height constrained equations: nonconvexity of the interaction kernel (which places the model outside the scope of classical gradient flow theory) and nonlocal dependence of the velocity field on the density (which causes the problem to lack a comparison principle). To overcome these obstacles, we combine recent results on gradient flows of nonconvex energies with viscosity solution theory. We characterize the dynamics of patch solutions in terms of a Hele-Shaw type free boundary problem and, using this characterization, show that in two dimensions patch solutions converge to a characteristic function of a disk in the long-time limit, with an explicit rate on the decay of the energy. We believe that a key contribution of the present work is our blended approach, combining energy methods with viscosity solution theory.

  17. An analytical theory of a scattering of radio waves on meteoric ionization - II. Solution of the integro-differential equation in case of backscatter

    NASA Astrophysics Data System (ADS)

    Pecina, P.

    2016-12-01

    The integro-differential equation for the polarization vector P inside the meteor trail, representing the analytical solution of the set of Maxwell equations, is solved for the case of backscattering of radio waves on meteoric ionization. The transversal and longitudinal dimensions of a typical meteor trail are small in comparison to the distances to both transmitter and receiver and so the phase factor appearing in the kernel of the integral equation is large and rapidly changing. This allows us to use the method of stationary phase to obtain an approximate solution of the integral equation for the scattered field and for the corresponding generalized radar equation. The final solution is obtained by expanding it into the complete set of Bessel functions, which results in solving a system of linear algebraic equations for the coefficients of the expansion. The time behaviour of the meteor echoes is then obtained using the generalized radar equation. Examples are given for values of the electron density spanning a range from underdense meteor echoes to overdense meteor echoes. We show that the time behaviour of overdense meteor echoes using this method is very different from the one obtained using purely numerical solutions of the Maxwell equations. Our results are in much better agreement with the observations performed e.g. by the Ondřejov radar.

  18. An Efficient Method Coupling Kernel Principal Component Analysis with Adjoint-Based Optimal Control and Its Goal-Oriented Extensions

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.

    2016-12-01

    The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on methods of machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, then, one may accelerate the convergence of the method. To this end, we present a formulation of Weighted PCA combined with a gradient-based method using automatic differentiation to iteratively re-weight observations concurrent with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.
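
    The core loop of such an approach can be caricatured in a few lines: project training realizations onto a kernel principal component basis, then optimize the reduced coefficients with L-BFGS against a (here toy, non-adjoint) misfit. Everything below is an illustrative sketch, not the authors' code:

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(1)
      fields = rng.standard_normal((200, 50))   # training realizations of a parameter field
      kpca = KernelPCA(n_components=5, kernel="rbf", fit_inverse_transform=True)
      kpca.fit(fields)

      target = fields[0]                        # pretend "truth" to recover

      def misfit(z):
          # map reduced coefficients back to physical space and compare
          field = kpca.inverse_transform(z.reshape(1, -1))[0]
          return np.sum((field - target) ** 2)

      res = minimize(misfit, x0=np.zeros(5), method="L-BFGS-B")
      print(res.fun)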

  19. Hardware, Languages, and Architectures for Defense Against Hostile Operating Systems

    DTIC Science & Technology

    2015-05-14

    Executive Service Directorate (0704-0188). Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to...in privileged system services . ExpressOS requires only a modest annotation burden (annotations were about 3% of code of the kernel), modest performance...INT benchmarks . In addition to enabling the development of an architecture- neutral instrumentation framework, our approach can take advantage of the

  20. Efficient use of unlabeled data for protein sequence classification: a comparative study.

    PubMed

    Kuksa, Pavel; Huang, Pai-Hsi; Pavlovic, Vladimir

    2009-04-29

    Recent studies in computational primary protein sequence analysis have leveraged the power of unlabeled data. For example, predictive models based on string kernels trained on sequences known to belong to particular folds or superfamilies, the so-called labeled data set, can attain significantly improved accuracy if this data is supplemented with protein sequences that lack any class tags - the unlabeled data. In this study, we present a principled and biologically motivated computational framework that more effectively exploits the unlabeled data by only using the sequence regions that are more likely to be biologically relevant for better prediction accuracy. As overly represented sequences in large uncurated databases may bias the estimation of computational models that rely on unlabeled data, we also propose a method to remove this bias and improve the performance of the resulting classifiers. Combined with state-of-the-art string kernels, our proposed computational framework achieves very accurate semi-supervised protein remote fold and homology detection on three large unlabeled databases. It outperforms current state-of-the-art methods and exhibits a significant reduction in running time. The unlabeled sequences used under the semi-supervised setting resemble unpolished gemstones: used as-is, they may carry unnecessary features and hence compromise classification accuracy, but once cut and polished, they improve the accuracy of the classifiers considerably.
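
    The simplest member of the string-kernel family such work builds on is the spectrum (k-mer) kernel; a minimal sketch (toy sequences, not the authors' optimized implementation):

      from collections import Counter

      def spectrum_kernel(s, t, k=3):
          # inner product of the k-mer count vectors of the two sequences
          cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
          ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
          return sum(cs[w] * ct[w] for w in cs.keys() & ct.keys())

      print(spectrum_kernel("MKVLAAGL", "MKVLGAGL", k=3))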

  1. Testing in Microbiome-Profiling Studies with MiRKAT, the Microbiome Regression-Based Kernel Association Test

    PubMed Central

    Zhao, Ni; Chen, Jun; Carroll, Ian M.; Ringel-Kulka, Tamar; Epstein, Michael P.; Zhou, Hua; Zhou, Jin J.; Ringel, Yehuda; Li, Hongzhe; Wu, Michael C.

    2015-01-01

    High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Distance-based analysis is a popular strategy for evaluating the overall association between microbiome diversity and outcome, wherein the phylogenetic distance between individuals’ microbiome profiles is computed and tested for association via permutation. Despite their practical popularity, distance-based approaches suffer from important challenges, especially in selecting the best distance and extending the methods to alternative outcomes, such as survival outcomes. We propose the microbiome regression-based kernel association test (MiRKAT), which directly regresses the outcome on the microbiome profiles via the semi-parametric kernel machine regression framework. MiRKAT allows for easy covariate adjustment and extension to alternative outcomes while non-parametrically modeling the microbiome through a kernel that incorporates phylogenetic distance. It uses a variance-component score statistic to test for the association with analytical p value calculation. The model also allows simultaneous examination of multiple distances, alleviating the problem of choosing the best distance. Our simulations demonstrated that MiRKAT provides correctly controlled type I error and adequate power in detecting overall association. “Optimal” MiRKAT, which considers multiple candidate distances, is robust in that it suffers from little power loss in comparison to when the best distance is used and can achieve tremendous power gain in comparison to when a poor distance is chosen. Finally, we applied MiRKAT to real microbiome datasets to show that microbial communities are associated with smoking and with fecal protease levels after confounders are controlled for. PMID:25957468
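
    The flavor of such a kernel-machine score test can be sketched as follows (toy data; a permutation p-value stands in for MiRKAT's analytical mixture-of-chi-squares calculation, and a Gaussian kernel stands in for a phylogeny-aware one such as UniFrac):

      import numpy as np

      rng = np.random.default_rng(2)
      n = 60
      X = np.column_stack([np.ones(n), rng.standard_normal(n)])   # covariates
      profiles = rng.random((n, 20))                               # toy "microbiome" profiles
      d2 = ((profiles[:, None, :] - profiles[None, :, :]) ** 2).sum(-1)
      K = np.exp(-d2 / d2.mean())                                  # similarity kernel

      y = X @ np.array([1.0, -0.3]) + rng.standard_normal(n)
      H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix of the null model
      r = y - H @ y                            # residuals under the null

      Q = r @ K @ r                            # variance-component score statistic
      perms = [(rp := rng.permutation(r)) @ K @ rp for _ in range(999)]
      p = (1 + sum(q >= Q for q in perms)) / 1000.0
      print(f"Q = {Q:.1f}, permutation p = {p:.3f}")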

  2. Nonlinear association criterion, nonlinear Granger causality and related issues with applications to neuroimage studies.

    PubMed

    Tao, Chenyang; Feng, Jianfeng

    2016-03-15

    Quantifying associations in neuroscience (and many other scientific disciplines) is often challenged by high dimensionality, nonlinearity and noisy observations. Many classic methods have either poor power or poor scalability on data sets of the same or different scales, such as genetic, physiological and image data. Based on the framework of reproducing kernel Hilbert spaces, we proposed a new nonlinear association criterion (NAC) with an efficient numerical algorithm and p-value approximation scheme. We also presented mathematical justification that links the proposed method to related methods such as kernel generalized variance, kernel canonical correlation analysis and the Hilbert-Schmidt independence criterion. NAC allows the detection of association between arbitrary input domains as long as a characteristic kernel is defined. A MATLAB package was provided to facilitate applications. Extensive simulation examples and four real-world neuroscience examples, including functional MRI causality, calcium imaging, and imaging genetic studies on autism [Brain, 138(5):1382-1393 (2015)] and alcohol addiction [PNAS, 112(30):E4085-E4093 (2015)], are used to benchmark NAC. It demonstrates superior performance over the existing procedures we tested and also yields biologically significant results for the real-world examples. NAC beats its linear counterparts when nonlinearity is present in the data. It also shows more robustness against different experimental setups compared with its nonlinear counterparts. In this work we presented a new and robust statistical approach, NAC, for measuring associations. It could serve as an interesting alternative to the existing methods for datasets where nonlinearity and other confounding factors are present. Copyright © 2016 Elsevier B.V. All rights reserved.
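
    One of the related criteria the paper connects to, the Hilbert-Schmidt independence criterion (HSIC), fits in a few lines; the sketch below (Gaussian kernels, fixed bandwidths, toy data) contrasts a dependent and an independent pair:

      import numpy as np

      def hsic(x, y, sx=1.0, sy=1.0):
          n = len(x)
          K = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * sx**2))
          L = np.exp(-np.subtract.outer(y, y) ** 2 / (2 * sy**2))
          H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
          return np.trace(K @ H @ L @ H) / (n - 1) ** 2

      rng = np.random.default_rng(3)
      x = rng.standard_normal(500)
      print(hsic(x, x**2))                      # nonlinear dependence: large value
      print(hsic(x, rng.standard_normal(500)))  # independence: near zero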

  3. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arises from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computation is performed to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants, are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants.

  4. Optimized Quasi-Interpolators for Image Reconstruction.

    PubMed

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.
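
    The kernel-plus-digital-filter pairing at the heart of generalized sampling can be illustrated with SciPy's cubic B-spline machinery (this shows the principle only, not the authors' optimized quasi-interpolators):

      import numpy as np
      from scipy import ndimage

      img = np.random.default_rng(4).random((64, 64))

      # digital filter step: convert samples to B-spline coefficients
      coeffs = ndimage.spline_filter(img, order=3)
      shifted = ndimage.shift(coeffs, (0.5, 0.5), order=3, prefilter=False)

      # same kernel applied naively, without the digital filter
      naive = ndimage.shift(img, (0.5, 0.5), order=3, prefilter=False)
      print(np.abs(shifted - naive).max())   # the prefilter visibly matters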

  5. Lake eutrophication and environmental change: A viability framework for resilience, vulnerability and adaptive capacity

    NASA Astrophysics Data System (ADS)

    Mathias, Jean-Denis; Rougé, Charles; Deffuant, Guillaume

    2013-04-01

    We present a simple stochastic model of lake eutrophication to demonstrate how the mathematical framework of viability theory fosters operational definitions of resilience, vulnerability and adaptive capacity, and then helps in understanding how one should respond to environmental changes. The model represents the phosphorus dynamics: high concentrations trigger a regime change from an oligotrophic to a eutrophic state, causing ecological as well as economic losses, for instance from tourism. Phosphorus comes from agricultural inputs upstream of the lake, and we consider a stochastic input. We consider the system made up of both the lake and its upstream region, and explore how to maintain the desirable ecological and economic properties of this system. In the viability framework, we translate these desirable properties into state constraints, then examine how, given the dynamics of the model and the available policy options, the properties can be kept. The set of states for which there exists a policy keeping the properties is called the viability kernel; a toy computation of such a kernel is sketched below. We extend this framework to both major perturbations and long-term environmental changes. In our model, since the phosphorus inputs and outputs of the lake depend on rainfall, we focus on extreme rainfall events and long-term changes in the rainfall regime. These can be described as changes in the state of the system and may displace it outside the viability kernel. The system's response can then be described using the concepts of resilience, vulnerability and adaptive capacity. Resilience is the capacity to recover by getting back to the viability kernel, where the dynamics keep the system safe; in this work we assume it to be the first objective of management. Computed for a given trajectory, vulnerability is a measure of the consequence of violating a property. We propose a family of functions from which cost functions and other vulnerability indicators can be derived for any trajectory. There can be several vulnerability functions, representing for instance social, economic or ecological vulnerability, each reflecting the violation of the associated property, but these functions ultimately need to be aggregated into a single indicator. Due to the stochastic nature of the system, there is a range of possible trajectories, and statistics can be derived from the probability distribution of the vulnerability of the trajectories. Dynamic programming methods can then yield the policies which, among the available policies, minimize a given vulnerability statistic. Thus, this viability framework gives an indication of both the possible consequences of a hazard or an environmental change and the policies that can mitigate or avert it. It also makes it possible to assess the benefits of extending the set of available policy options, and we define adaptive capacity as the reduction in a given vulnerability statistic due to the introduction of new policy options.
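
    A deterministic caricature of the viability-kernel computation follows (toy lake model and thresholds; the paper's stochastic setting and policy set are much richer). A state is kept if some admissible phosphorus input keeps the lake below the eutrophication threshold at the next step:

      import numpy as np

      s, r, m, q = 0.8, 1.0, 1.0, 8.0               # lake dynamics parameters (toy)
      P_grid = np.linspace(0.0, 2.0, 201)           # phosphorus state grid
      L_opts = np.linspace(0.0, 0.4, 21)            # admissible input policies
      dt, P_max = 0.1, 1.0                          # time step; constraint P <= P_max

      viable = P_grid <= P_max                      # start from the constraint set
      for _ in range(200):                          # fixed-point iteration
          keep = np.zeros_like(viable)
          for j, P in enumerate(P_grid):
              if not viable[j]:
                  continue
              nxt = P + dt * (L_opts - s * P + r * P**q / (m**q + P**q))
              idx = np.clip(np.searchsorted(P_grid, nxt), 0, len(P_grid) - 1)
              keep[j] = np.any(viable[idx] & (nxt <= P_max))
          if np.array_equal(keep, viable):
              break
          viable = keep
      print(f"viability kernel ~ P in [0, {P_grid[viable].max():.2f}]")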

  6. Investigating Experimental Effects within the Framework of Structural Equation Modeling: An Example with Effects on Both Error Scores and Reaction Times

    ERIC Educational Resources Information Center

    Schweizer, Karl

    2008-01-01

    Structural equation modeling provides the framework for investigating experimental effects on the basis of variances and covariances in repeated measurements. A special type of confirmatory factor analysis as part of this framework enables the appropriate representation of the experimental effect and the separation of experimental and…

  7. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.

  8. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE PAGES

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul; ...

    2017-12-20

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
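
    A minimal pyomo.dae usage sketch (a single ODE dx/dt = -x, discretized by backward Euler; model details and solver choice are up to the user):

      from pyomo.environ import ConcreteModel, Constraint, TransformationFactory, Var
      from pyomo.dae import ContinuousSet, DerivativeVar

      m = ConcreteModel()
      m.t = ContinuousSet(bounds=(0, 10))
      m.x = Var(m.t)
      m.dxdt = DerivativeVar(m.x, wrt=m.t)

      m.ode = Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t])
      m.x[0].fix(1.0)

      # automatic transformation to a finite-dimensional algebraic problem
      TransformationFactory('dae.finite_difference').apply_to(
          m, nfe=50, scheme='BACKWARD')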

  9. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.

  10. A Systematic Kernel Function Procedure for Determining Aerodynamic Forces on Oscillating or Steady Finite Wings at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Watkins, Charles E.; Woolston, Donald S.; Cunningham, Herbert J.

    1959-01-01

    Details are given of a numerical solution of the integral equation which relates oscillatory or steady lift and downwash distributions in subsonic flow. The procedure has been programmed for the IBM 704 electronic data processing machine and yields the pressure distribution and some of its integrated properties, for a given Mach number and frequency and for several modes of oscillation, in 3 to 4 minutes. Results of several applications are presented.

  11. A new approach to approximating the linear quadratic optimal control law for hereditary systems with control delays

    NASA Technical Reports Server (NTRS)

    Milman, M. H.

    1985-01-01

    A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.

  12. Transactions of the Army Conference on Applied Mathematics and Computing (2nd) Held at Washington, DC on 22-25 May 1984

    DTIC Science & Technology

    1985-02-01

    Here Q denotes the midplane of the plate (assumed to be Lipschitzian) with a smooth boundary, and H(Q) and H(Q) are the Hilbert spaces of...using a reproducing kernel Hilbert space approach, Weinert [8,9] et al. developed a structural correspondence between spline interpolation and linear...597 A Mesh Moving Technique for Time Dependent Partial Differential Equations in Two Space Dimensions, David C. Arney and Joseph

  13. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    PubMed

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however; rather, they were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may make it possible to recover relevant aspects of the nonlinear dynamics underlying observed neuronal time series and to link these directly to computational properties.
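
    The latent dynamics of a PLRNN of the general form used here, z_t = A z_{t-1} + W max(z_{t-1}, 0) + h + noise, are easy to simulate; the sketch below uses made-up parameters (the estimation via EM is, of course, the hard part the paper addresses):

      import numpy as np

      rng = np.random.default_rng(5)
      d = 3
      A = 0.9 * np.eye(d)                      # diagonal linear part
      W = 0.2 * rng.standard_normal((d, d))    # piecewise-linear coupling
      np.fill_diagonal(W, 0.0)
      h = 0.1 * np.ones(d)

      z = np.zeros(d)
      traj = []
      for t in range(1000):
          z = A @ z + W @ np.maximum(z, 0.0) + h + 0.01 * rng.standard_normal(d)
          traj.append(z.copy())
      traj = np.asarray(traj)   # observations would be a noisy readout of traj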

  14. Dispersal of Engineered Male Aedes aegypti Mosquitoes.

    PubMed

    Winskill, Peter; Carvalho, Danilo O; Capurro, Margareth L; Alphey, Luke; Donnelly, Christl A; McKemey, Andrew R

    2015-11-01

    Aedes aegypti, the principal vector of dengue fever, have been genetically engineered for use in a sterile insect control programme. To improve our understanding of the dispersal ecology of mosquitoes and to inform appropriate release strategies of 'genetically sterile' male Aedes aegypti detailed knowledge of the dispersal ability of the released insects is needed. The dispersal ability of released 'genetically sterile' male Aedes aegypti at a field site in Brazil has been estimated. Dispersal kernels embedded within a generalized linear model framework were used to analyse data collected from three large scale mark release recapture studies. The methodology has been applied to previously published dispersal data to compare the dispersal ability of 'genetically sterile' male Aedes aegypti in contrasting environments. We parameterised dispersal kernels and estimated the mean distance travelled for insects in Brazil: 52.8 m (95% CI: 49.9 m, 56.8 m) and Malaysia: 58.0 m (95% CI: 51.1 m, 71.0 m). Our results provide specific, detailed estimates of the dispersal characteristics of released 'genetically sterile' male Aedes aegypti in the field. The comparative analysis indicates that despite differing environments and recapture rates, key features of the insects' dispersal kernels are conserved across the two studies. The results can be used to inform both risk assessments and release programmes using 'genetically sterile' male Aedes aegypti.
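
    As an illustration of the kernel-fitting step (synthetic recaptures, not the Brazil or Malaysia data), one can fit an exponential distance kernel to trap counts and report the implied mean distance travelled; for a two-dimensional exponential kernel this mean works out to 2*alpha:

      import numpy as np
      from scipy.optimize import curve_fit

      def expected_catch(r, A, alpha):
          return A * np.exp(-r / alpha)             # exponential distance kernel

      r = np.array([25, 50, 75, 100, 150, 200.0])   # trap distances (m)
      n = np.array([41, 22, 12, 7, 2, 1.0])         # recaptured males (synthetic)
      (A, alpha), _ = curve_fit(expected_catch, r, n, p0=(50.0, 60.0))

      print(f"alpha = {alpha:.1f} m, mean distance ~ {2 * alpha:.1f} m")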

  15. Stress-intensity factors for a thick-walled cylinder containing an annular imbedded or external or internal surface crack

    NASA Technical Reports Server (NTRS)

    Erdol, R.; Erdogan, F.

    1976-01-01

    The elastostatic axisymmetric problem for a long thick-walled cylinder containing a ring-shaped internal or edge crack is considered. Using the standard transform technique, the problem is formulated in terms of an integral equation whose dominant part has a simple Cauchy kernel for the internal crack and a generalized Cauchy kernel for the edge crack. As examples, the uniform axial load and the steady-state thermal stress problems have been solved and the related stress intensity factors have been calculated. Among other findings, the results show that in the cylinder under uniform axial stress containing an internal crack, the stress intensity factor at the inner tip is always greater than that at the outer tip for equal net ligament thicknesses. In the cylinder with an edge crack under a state of thermal stress, the stress intensity factor is a decreasing function of the crack depth, tending to zero as the crack depth approaches the wall thickness.

  16. Exact combinatorial approach to finite coagulating systems

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr

    2018-02-01

    This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
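
    A finite constant-kernel system of the kind treated here is straightforward to simulate directly, which is one way to check the combinatorial formulas; a minimal Marcus-Lushnikov style Monte Carlo with monodisperse initial conditions:

      import numpy as np

      rng = np.random.default_rng(6)

      def simulate(N=100):
          clusters = [1] * N                     # monodisperse initial condition
          while len(clusters) > N // 2:
              # constant kernel: every pair is equally likely to merge
              i, j = rng.choice(len(clusters), size=2, replace=False)
              clusters[i] += clusters[j]
              clusters.pop(j)
          return sorted(clusters, reverse=True)  # configuration after N/2 events

      print(simulate())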

  17. Learning SVM in Kreĭn Spaces.

    PubMed

    Loosli, Gaelle; Canu, Stephane; Ong, Cheng Soon

    2016-06-01

    This paper presents a theoretical foundation for an SVM solver in Kreĭn spaces. Up to now, all methods have been based either on matrix correction, on non-convex minimization, or on feature-space embedding. Here we justify and evaluate a solution that uses the original (indefinite) similarity measure in the original Kreĭn space. This solution is the result of a stabilization procedure. We establish the correspondence between the stabilization problem (which has to be solved) and a classical SVM based on minimization (which is easy to solve), and provide simple equations to go from one to the other in both directions. This link between the stabilization and minimization problems is the key to obtaining a solution in the original Kreĭn space. Using KSVM, one can solve SVMs with kernels that are usually troublesome (large negative eigenvalues or large numbers of negative eigenvalues). We present experiments showing that our algorithm KSVM outperforms all previously proposed approaches for dealing with indefinite matrices in SVM-like kernel methods.
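
    For contrast, the spectrum-flip matrix correction that the paper argues against is a two-liner; the sketch below builds an indefinite tanh "kernel" and flips its negative eigenvalues (KSVM itself instead keeps the original Kreĭn-space similarity):

      import numpy as np

      rng = np.random.default_rng(7)
      X = rng.standard_normal((30, 2))
      K = np.tanh(X @ X.T + 1.0)            # a classic source of indefiniteness

      w, V = np.linalg.eigh(K)
      print("negative eigenvalues:", int((w < 0).sum()))
      K_flip = (V * np.abs(w)) @ V.T        # spectrum-flip correction baseline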

  18. The complex variable reproducing kernel particle method for bending problems of thin plates on elastic foundations

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.

    2018-07-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for solving the bending problems of isotropic thin plates on elastic foundations is presented. In CVRKPM, a one-dimensional basis function is used to obtain the shape function of a two-dimensional problem. CVRKPM is used to form the approximation function of the deflection of a thin plate resting on an elastic foundation, the Galerkin weak form of thin plates on an elastic foundation is employed to obtain the discretized system equations, the penalty method is used to apply the essential boundary conditions, and the Winkler and Pasternak foundation models are used to represent the interface pressure between the plate and the foundation. The corresponding formulae of CVRKPM for thin plates on elastic foundations are then presented in detail. Several numerical examples are given to discuss the efficiency and accuracy of CVRKPM, and the corresponding advantages of the present method are shown.

  19. Distributed delays in a hybrid model of tumor-immune system interplay.

    PubMed

    Caravagna, Giulio; Graudenzi, Alex; d'Onofrio, Alberto

    2013-02-01

    A tumor is kinetically characterized by the presence of multiple spatio-temporal scales in which its cells interplay with, for instance, endothelial cells or immune system effectors, exchanging various chemical signals. By its nature, tumor growth is an ideal object of hybrid modeling, where discrete stochastic processes model low-number entities and mean-field equations model abundant chemical signals. Thus, we follow this approach to model tumor cells, effector cells and Interleukin-2, in order to capture the immune surveillance effect. We present a hybrid model with a generic delay kernel accounting for the fact that, due to many complex phenomena such as chemical transportation and cellular differentiation, the tumor-induced recruitment of effectors exhibits a lag period. The model is a Stochastic Hybrid Automaton whose semantics is a piecewise deterministic Markov process in which a two-dimensional stochastic process is interlinked with a multi-dimensional mean-field system. We instantiate the model with the two well-known weak and strong delay kernels and perform simulations by using an algorithm to generate trajectories of this process. Via simulations and parametric sensitivity analysis techniques, we (i) relate tumor mass growth to the two kernels, (ii) measure the strength of the immune surveillance in terms of the probability distribution of the eradication times, and (iii) prove, in the oscillatory regime, the existence of a stochastic bifurcation resulting in delay-induced tumor eradication.
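
    The two kernels lend themselves to the classic "linear chain trick", which converts the distributed delay into one or two auxiliary ODEs; a toy scalar sketch (made-up dynamics and parameters, not the hybrid automaton itself):

      from scipy.integrate import solve_ivp

      a = 0.5   # kernel rate parameter

      def weak(t, y):
          # z(t) = int_0^t a*exp(-a*(t-s)) x(s) ds satisfies z' = a*(x - z)
          x, z = y
          return [0.2 * x * (1 - x) - 0.3 * z, a * (x - z)]

      def strong(t, y):
          # chaining twice yields the strong kernel a^2 * t * exp(-a*t)
          x, z1, z2 = y
          return [0.2 * x * (1 - x) - 0.3 * z2, a * (x - z1), a * (z1 - z2)]

      print(solve_ivp(weak, (0, 100), [0.1, 0.0]).y[0, -1],
            solve_ivp(strong, (0, 100), [0.1, 0.0, 0.0]).y[0, -1])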

  20. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  1. Persistence in a Two-Dimensional Moving-Habitat Model.

    PubMed

    Phillips, Austin; Kot, Mark

    2015-11-01

    Environmental changes are forcing many species to track suitable conditions or face extinction. In this study, we use a two-dimensional integrodifference equation to analyze whether a population can track a habitat that is moving due to climate change. We model habitat as a simple rectangle. Our model quickly leads to an eigenvalue problem that determines whether the population persists or declines. After surveying techniques to solve the eigenvalue problem, we highlight three findings that impact conservation efforts such as reserve design and species risk assessment. First, while other models focus on habitat length (parallel to the direction of habitat movement), we show that ignoring habitat width (perpendicular to habitat movement) can lead to overestimates of persistence. Dispersal barriers and hostile landscapes that constrain habitat width greatly decrease the population's ability to track its habitat. Second, for some long-distance dispersal kernels, increasing habitat length improves persistence without limit; for other kernels, increasing length is of limited help and has diminishing returns. Third, it is not always best to orient the long side of the habitat in the direction of climate change. Evidence suggests that the kurtosis of the dispersal kernel determines whether it is best to have a long, wide, or square habitat. In particular, populations with platykurtic dispersal benefit more from a wide habitat, while those with leptokurtic dispersal benefit more from a long habitat. We apply our model to the Rocky Mountain Apollo butterfly (Parnassius smintheus).
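
    A one-dimensional analogue of the persistence criterion makes the eigenvalue problem concrete: discretize the next-generation operator for a habitat shifting at speed c and check whether its dominant eigenvalue exceeds 1. All parameter values below are illustrative:

      import numpy as np

      L_hab, c, R0, sigma = 10.0, 1.0, 1.5, 2.0
      x = np.linspace(0.0, L_hab, 200)
      dx = x[1] - x[0]

      def k(u):   # Gaussian dispersal kernel
          return np.exp(-u**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

      # habitat-following frame: n_{t+1}(x) = R0 * int_0^L k(x + c - y) n_t(y) dy
      Kmat = R0 * k(np.subtract.outer(x + c, x)) * dx
      lam = np.max(np.abs(np.linalg.eigvals(Kmat)))
      print("persists" if lam > 1 else "declines", f"(lambda = {lam:.3f})")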

  2. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, Juan A.; Chen, Qingshan; Ringler, Todd

    Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy–mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen–Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.

  4. Schrödinger problem, Lévy processes, and noise in relativistic quantum mechanics

    NASA Astrophysics Data System (ADS)

    Garbaczewski, Piotr; Klauder, John R.; Olkiewicz, Robert

    1995-05-01

    The main purpose of the paper is an essentially probabilistic analysis of relativistic quantum mechanics. It is based on the assumption that whenever probability distributions arise, there exists a stochastic process that is either responsible for the temporal evolution of a given measure or preserves the measure in the stationary case. Our departure point is the so-called Schrödinger problem of probabilistic evolution, which provides for a unique Markov stochastic interpolation between any given pair of boundary probability densities for a process covering a fixed, finite duration of time, provided we have decided a priori what kind of primordial dynamical semigroup transition mechanism is involved. In the nonrelativistic theory, including quantum mechanics, Feynman-Kac-like kernels are the building blocks for suitable transition probability densities of the process. In the standard ``free'' case (Feynman-Kac potential equal to zero) the familiar Wiener noise is recovered. In the framework of the Schrödinger problem, the ``free noise'' can also be extended to any infinitely divisible probability law, as covered by the Lévy-Khintchine formula. Since the relativistic Hamiltonians ||∇|| and √-Δ+m2 -m are known to generate such laws, we focus on them for the analysis of probabilistic phenomena, which are shown to be associated with the relativistic wave (D'Alembert) and matter-wave (Klein-Gordon) equations, respectively. We show that such stochastic processes exist and are spatial jump processes. In general, in the presence of external potentials, they do not share the Markov property, except for stationary situations. A concrete example of the pseudodifferential Cauchy-Schrödinger evolution is analyzed in detail. The relativistic covariance of related wave equations is exploited to demonstrate how the associated stochastic jump processes comply with the principles of special relativity.

  5. Computational helioseismology in the frequency domain: acoustic waves in axisymmetric solar models with flows

    NASA Astrophysics Data System (ADS)

    Gizon, Laurent; Barucq, Hélène; Duruflé, Marc; Hanson, Chris S.; Leguèbe, Michael; Birch, Aaron C.; Chabassier, Juliette; Fournier, Damien; Hohage, Thorsten; Papini, Emanuele

    2017-04-01

    Context. Local helioseismology has so far relied on semi-analytical methods to compute the spatial sensitivity of wave travel times to perturbations in the solar interior. These methods are cumbersome and lack flexibility. Aims: Here we propose a convenient framework for numerically solving the forward problem of time-distance helioseismology in the frequency domain. The fundamental quantity to be computed is the cross-covariance of the seismic wavefield. Methods: We choose sources of wave excitation that enable us to relate the cross-covariance of the oscillations to the Green's function in a straightforward manner. We illustrate the method by considering the 3D acoustic wave equation in an axisymmetric reference solar model, ignoring the effects of gravity on the waves. The symmetry of the background model around the rotation axis implies that the Green's function can be written as a sum of longitudinal Fourier modes, leading to a set of independent 2D problems. We use a high-order finite-element method to solve the 2D wave equation in frequency space. The computation is embarrassingly parallel, with each frequency and each azimuthal order solved independently on a computer cluster. Results: We compute travel-time sensitivity kernels in spherical geometry for flows, sound speed, and density perturbations under the first Born approximation. Convergence tests show that travel times can be computed with a numerical precision better than one millisecond, as required by the most precise travel-time measurements. Conclusions: The method presented here is computationally efficient and will be used to interpret travel-time measurements in order to infer, e.g., the large-scale meridional flow in the solar convection zone. It allows the implementation of (full-waveform) iterative inversions, whereby the axisymmetric background model is updated at each iteration.
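
    The "embarrassingly parallel" structure is easy to convey in code. The sketch below is a strong simplification: a 1D finite-difference Helmholtz operator with uniform sound speed stands in for the paper's axisymmetric high-order finite-element solver, the small imaginary part of the frequency is a generic attenuation term, and all names and values are hypothetical.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, length = 400, 1.0                 # grid points, domain size (arbitrary)
        dx = length / (n - 1)
        c = np.full(n, 1.0)                  # sound-speed profile (uniform stand-in)

        def helmholtz_green(omega, src=n // 2, damping=0.01):
            """One independent solve: (omega^2/c^2 + d2/dx2) G = -delta(x - x_src)."""
            w = omega * (1 + 1j * damping)   # attenuation keeps the system regular
            lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
            A = (sp.diags((w / c) ** 2) + lap).tocsc()
            rhs = np.zeros(n, dtype=complex)
            rhs[src] = -1.0 / dx
            return spla.spsolve(A, rhs)

        # Frequencies decouple, so this loop distributes trivially over a cluster.
        omegas = 2 * np.pi * np.linspace(2.0, 5.0, 16)   # hypothetical frequency grid
        greens = [helmholtz_green(w) for w in omegas]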

  6. A research framework for pharmacovigilance in health social media: Identification and evaluation of patient adverse drug event reports.

    PubMed

    Liu, Xiao; Chen, Hsinchun

    2015-12-01

    Social media offer insights into patients' medical problems such as drug side effects and treatment failures. Patient reports of adverse drug events from social media have great potential to improve the current practice of pharmacovigilance. However, extracting patient adverse drug event reports from social media continues to be an important challenge for health informatics research. In this study, we develop a research framework with advanced natural language processing techniques for integrated, high-performance extraction of patient-reported adverse drug events. The framework consists of medical entity extraction for recognizing patient discussions of drugs and events, adverse drug event extraction with a shortest-dependency-path-kernel based statistical learning method and semantic filtering with information from medical knowledge bases, and report source classification to tease out noise. To evaluate the proposed framework, a series of experiments were conducted on a test bed of postings from major diabetes and heart disease forums in the United States. The results reveal that each component of the framework significantly contributes to its overall effectiveness. Our framework significantly outperforms prior work. Published by Elsevier Inc.
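
    As a rough illustration of the kernel component, the sketch below implements a minimal shortest-dependency-path kernel in the product-of-common-features style of Bunescu and Mooney, on which this line of work commonly builds; the function name, feature sets, and toy paths are invented for illustration and are not the paper's implementation.

        def sdp_kernel(path_x, path_y):
            """Product over positions of the number of shared features; 0 if lengths differ."""
            if len(path_x) != len(path_y):
                return 0
            k = 1
            for fx, fy in zip(path_x, path_y):
                common = len(set(fx) & set(fy))
                if common == 0:
                    return 0
                k *= common
            return k

        # Toy dependency paths between a drug mention and an event mention,
        # with word / POS / entity-type features per node (all hypothetical).
        x = [{"lipitor", "NN", "DRUG"}, {"cause", "VB"}, {"pain", "NN", "EVENT"}]
        y = [{"zocor", "NN", "DRUG"}, {"cause", "VB"}, {"cramp", "NN", "EVENT"}]
        print(sdp_kernel(x, y))   # 2 * 2 * 2 = 8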

  7. A solution to coupled Dyson–Schwinger equations for gluons and ghosts in Landau gauge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Smekal, L.; Hauck, A.; Alkofer, R.

    1998-07-01

    A truncation scheme for the Dyson–Schwinger equations of QCD in Landau gauge is presented which implements the Slavnov–Taylor identities for the 3-point vertex functions. Neglecting contributions from 4-point correlations such as the 4-gluon vertex function and irreducible scattering kernels, a closed system of equations for the propagators is obtained. For the pure gauge theory without quarks this system of equations for the propagators of gluons and ghosts is solved in an approximation which allows for an analytic discussion of its solutions in the infrared: The gluon propagator is shown to vanish for small spacelike momenta whereas the ghost propagator is found to be infrared enhanced. The running coupling of the non-perturbative subtraction scheme approaches an infrared stable fixed point at a critical value of the coupling, α_c ≈ 9.5. The gluon propagator is shown to have no Lehmann representation. The results for the propagators obtained here compare favorably with recent lattice calculations. © 1998 Academic Press, Inc.

  8. Space-time domain solutions of the wave equation by a non-singular boundary integral method and Fourier transform.

    PubMed

    Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C

    2017-08-01

    The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
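
    The frequency-synthesis step lends itself to a compact sketch. The boundary-integral solve itself is not reproduced; a toy delay-and-decay transfer function stands in for the per-frequency scattering solution, so only the FFT bookkeeping reflects the workflow described above.

        import numpy as np

        nt, dt = 1024, 1e-3
        t = np.arange(nt) * dt
        pulse = np.exp(-((t - 0.1) / 0.01) ** 2)          # incident Gaussian pulse
        omega = 2 * np.pi * np.fft.rfftfreq(nt, dt)

        # Toy per-frequency response: a delayed, low-pass-filtered echo.
        H = np.exp(-1j * omega * 0.2) / (1 + 0.05 * omega ** 2)

        # One inverse FFT converts the frequency-domain solution to the time domain.
        response = np.fft.irfft(np.fft.rfft(pulse) * H, n=nt)

        # Re-using H for a different incident pulse costs only one more FFT pair,
        # which is the efficiency noted in the abstract.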

  9. L2-norm multiple kernel learning and its application to biomedical data fusion

    PubMed Central

    2010-01-01

    Background This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, which is different from the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources. Results We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem with the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large-scale data set processing. Conclusions This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid a "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to the performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has performance comparable to the conventional SVM MKL algorithms. Moreover, large scale numerical experiments indicate that when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL. Availability The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html. PMID:20529363
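
    A minimal sketch of the non-sparse idea, under simplifying assumptions: instead of the paper's semi-infinite-programming LSSVM solver, kernel weights are taken proportional to a kernel-target alignment score and normalized to unit L2 norm, which is enough to show that every data source keeps a nonzero weight rather than a winner-takes-all allocation.

        import numpy as np

        def combine_l2(grams, y):
            """Non-sparse combination: weights >= 0 with unit L2 norm."""
            yy = np.outer(y, y)
            align = np.array([max((K * yy).sum(), 0.0) for K in grams])
            w = align / np.linalg.norm(align)        # ||w||_2 = 1
            return w, sum(wi * K for wi, K in zip(w, grams))

        rng = np.random.default_rng(0)
        y = np.sign(rng.standard_normal(20))
        X1 = rng.standard_normal((20, 5))            # two heterogeneous sources
        X2 = rng.standard_normal((20, 3))
        grams = [X1 @ X1.T, X2 @ X2.T]
        w, K = combine_l2(grams, y)
        print(w)                                     # both weights stay nonzero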

  10. A Semi-supervised Heat Kernel Pagerank MBO Algorithm for Data Classification

    DTIC Science & Technology

    2016-07-01

    financial predictions, etc. and is finding growing use in text mining studies. In this paper, we present an efficient algorithm for classification of high...video data, set of images, hyperspectral data, medical data, text data, etc. Moreover, the framework provides a way to analyze data whose different...also be incorporated. For text classification, one can use tfidf (term frequency inverse document frequency) to form feature vectors for each document

  11. Techniques for Exploiting Unlabeled Data

    DTIC Science & Technology

    2008-10-01

    Moore, Arjit Singh, Jure Leskovec, Stano Funiak, Andreas Krause, Gaurav Veda, John Langford, R. Ravi, Peter Lee, Srinath Sridhar, Virginia Vassilevska...information. However, in the past few decades for many tasks the supply of information has outpaced our ability to effectively utilize it. For example in...function which contains kernel functions as a sub-class and show that effective learning can be done in this framework. Although the work in this area is

  12. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high-performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  13. A numerical solution for a variable-order reaction-diffusion model by using fractional derivatives with non-local and non-singular kernel

    NASA Astrophysics Data System (ADS)

    Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Torres, L.; Escobar-Jiménez, R. F.

    2018-02-01

    A reaction-diffusion system can be represented by the Gray-Scott model. The reaction-diffusion dynamics are described by a pair of time- and space-dependent partial differential equations (PDEs). In this paper, a generalization of the Gray-Scott model using variable-order fractional differential equations is proposed. The variable orders were set as smooth functions bounded in (0, 1], and, specifically, the Liouville-Caputo and the Atangana-Baleanu-Caputo fractional derivatives were used to express the time differentiation. In order to find a numerical solution of the proposed model, the finite difference method was applied together with the Adams method. The simulation results showed the chaotic behavior of the proposed model when different variable orders are applied.
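
    For orientation, the sketch below integrates the classical integer-order Gray-Scott system, the limiting case being generalized. A variable-order Liouville-Caputo or Atangana-Baleanu derivative would replace the explicit Euler step with an Adams-type scheme carrying a weighted sum over the stored solution history; that memory machinery is omitted, and the parameters are generic pattern-forming values rather than the paper's.

        import numpy as np

        n, du, dv, F, k, dt = 128, 0.16, 0.08, 0.035, 0.065, 1.0
        u = np.ones((n, n))
        v = np.zeros((n, n))
        u[60:68, 60:68] = 0.5                 # local perturbation seeds patterns
        v[60:68, 60:68] = 0.25

        def lap(a):                           # periodic 5-point Laplacian
            return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                    np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

        for _ in range(1000):                 # explicit Euler time stepping
            uvv = u * v * v
            u += dt * (du * lap(u) - uvv + F * (1 - u))
            v += dt * (dv * lap(v) + uvv - (F + k) * v)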

  14. Iterative discrete ordinates solution of the equation for surface-reflected radiance

    NASA Astrophysics Data System (ADS)

    Radkevich, Alexander

    2017-11-01

    This paper presents a new method of numerical solution of the integral equation for the radiance reflected from an anisotropic surface. The equation relates the radiance at the surface level to the BRDF and to solutions of the standard radiative transfer problems for a slab with no reflection on its surfaces. It is also shown that the kernel of the equation satisfies the condition for the existence of a unique solution and for the convergence of the successive approximations to that solution. The developed method features two basic steps: discretization on a 2D quadrature, and solving the resulting system of algebraic equations with the successive over-relaxation method based on the Gauss-Seidel iterative process. The numerical examples presented show good agreement between the surface-reflected radiance obtained with DISORT and with the proposed method. An analysis of the contributions of the direct and diffuse (but not yet reflected) parts of the downward radiance to the total solution is performed. Together, they represent a very good initial guess for the iterative process, which ensures fast convergence. Numerical evidence is given that the fastest convergence occurs with a relaxation parameter of 1 (no relaxation). An integral equation for the BRDF is derived as an inversion of the original equation, and its potential for BRDF retrievals is analyzed. The approach is found not to be viable, as the BRDF equation turns out to be an ill-posed problem and requires knowledge of the surface-reflected radiance on the entire domain of both Sun and viewing zenith angles.
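
    The solver core can be sketched compactly. After discretization the problem takes the fixed-point form x = b + Kx; the loop below performs Gauss-Seidel sweeps blended with a relaxation parameter, so relax = 1.0 reproduces the plain Gauss-Seidel iteration reported above as fastest. The kernel matrix here is a generic contraction, not one derived from a physical BRDF.

        import numpy as np

        def sor(K, b, relax=1.0, tol=1e-10, max_iter=500):
            """Successive over-relaxation for x = b + K x."""
            n = len(b)
            x = b.copy()                        # cheap initial guess
            for _ in range(max_iter):
                x_prev = x.copy()
                for i in range(n):
                    s = b[i] + K[i] @ x - K[i, i] * x[i]
                    gs = s / (1.0 - K[i, i])    # exact Gauss-Seidel update
                    x[i] = (1 - relax) * x[i] + relax * gs
                if np.max(np.abs(x - x_prev)) < tol:
                    break
            return x

        rng = np.random.default_rng(1)
        K = 0.4 * rng.random((50, 50)) / 50     # spectral radius well below 1
        b = rng.random(50)
        x = sor(K, b)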

  15. Data-driven discovery of partial differential equations.

    PubMed

    Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2017-04-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
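
    A small end-to-end sketch on the simplest case: data are synthesized from the heat equation u_t = 0.1 u_xx, a library of candidate terms is assembled from finite-difference derivatives, and a sequential thresholded least-squares loop (a plain stand-in for the ridge-regularized STRidge used in the paper) recovers the single active term.

        import numpy as np

        # Synthesize data from u_t = nu * u_xx on a periodic grid.
        nx, nt, dx, dt, nu = 128, 400, 0.1, 0.01, 0.1
        u = np.zeros((nt, nx))
        u[0] = np.exp(-((np.arange(nx) * dx - 6.4) ** 2))
        for m in range(nt - 1):
            uxx = (np.roll(u[m], 1) + np.roll(u[m], -1) - 2 * u[m]) / dx**2
            u[m + 1] = u[m] + dt * nu * uxx

        # Candidate library [1, u, u_x, u_xx, u*u_x] from finite differences.
        ux = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)
        uxx = (np.roll(u, -1, 1) + np.roll(u, 1, 1) - 2 * u) / dx**2
        ut = np.gradient(u, dt, axis=0)
        Theta = np.stack([np.ones_like(u), u, ux, uxx, u * ux], -1).reshape(-1, 5)
        b = ut.reshape(-1)

        # Sequential thresholded least squares.
        xi = np.linalg.lstsq(Theta, b, rcond=None)[0]
        for _ in range(10):
            small = np.abs(xi) < 0.01
            xi[small] = 0.0
            xi[~small] = np.linalg.lstsq(Theta[:, ~small], b, rcond=None)[0]
        print(xi)   # expect ~0.1 on the u_xx column, 0 elsewhere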

  16. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  17. Consistent P1 Analysis of Aqueous Uranium-235 Critical Assemblies

    NASA Technical Reports Server (NTRS)

    Fieno, Daniel

    1961-01-01

    The lethargy-dependent equations of the consistent P1 approximation to the Boltzmann transport equation for slowing down neutrons have been used as the basis of an IBM 704 computer program. Some of the effects included are (1) linearly anisotropic center-of-mass elastic scattering, (2) heavy-element inelastic scattering based on the evaporation model of the nucleus, and (3) optional variation of the buckling with lethargy. The microscopic cross-section data developed for this program covered 473 lethargy points from lethargy u = 0 (10 MeV) to u = 19.8 (0.025 eV). The value of the fission neutron age in water calculated here is 26.5 square centimeters; this value is to be compared with the recent experimental value given as 27.86 square centimeters. The Fourier transform of the slowing-down kernel for water to indium resonance energy calculated here compared well with the Fourier transform of the kernel for water as measured by Hill, Roberts, and Fitch. This method of calculation has been applied to uranyl fluoride–water solution critical assemblies. Theoretical results established for both unreflected and fully reflected critical assemblies have been compared with available experimental data. The theoretical buckling curve derived as a function of the hydrogen to uranium-235 atom concentration for an energy-independent extrapolation distance was successful in predicting the critical heights of various unreflected cylindrical assemblies. The critical dimensions of fully water-reflected cylindrical assemblies were reasonably well predicted using the theoretical buckling curve and reflector savings for equivalent spherical assemblies.

  18. Integration of Network Topological and Connectivity Properties for Neuroimaging Classification

    PubMed Central

    Jie, Biao; Gao, Wei; Wang, Qian; Wee, Chong-Yaw

    2014-01-01

    Rapid advances in neuroimaging techniques have provided an efficient and noninvasive way for exploring the structural and functional connectivity of the human brain. Quantitative measurements of abnormalities of brain connectivity in patients with neurodegenerative diseases, such as mild cognitive impairment (MCI) and Alzheimer’s disease (AD), have also been widely reported, especially at a group level. Recently, machine learning techniques have been applied to the study of AD and MCI, i.e., to identify individuals with AD/MCI from the healthy controls (HCs). However, most existing methods focus on using only a single property of a connectivity network, although multiple network properties, such as local connectivity and global topological properties, can potentially be used. In this paper, employing a multikernel-based approach, we propose a novel connectivity-based framework to integrate multiple properties of a connectivity network and improve classification performance. Specifically, two different types of kernels (i.e., a vector-based kernel and a graph kernel) are used to quantify two different yet complementary properties of the network, i.e., local connectivity and global topological properties. Then, the multikernel learning (MKL) technique is adopted to fuse these heterogeneous kernels for neuroimaging classification. We test the performance of our proposed method on two different data sets. First, we test it on the functional connectivity networks of 12 MCI and 25 HC subjects. The results show that our method achieves significant performance improvement over those using only one type of network property. Specifically, our method achieves a classification accuracy of 91.9%, which is 10.8% better than those of single network-property-based methods. Then, we test our method for gender classification on a large set of functional connectivity networks with 133 infants scanned at birth, 1 year, and 2 years, also demonstrating very promising results. PMID:24108708

  19. Efficient approach to include molecular polarizations using charge and atom dipole response kernels to calculate free energy gradients in the QM/MM scheme.

    PubMed

    Asada, Toshio; Ando, Kanta; Sakurai, Koji; Koseki, Shiro; Nagaoka, Masataka

    2015-10-28

    An efficient approach to evaluate free energy gradients (FEGs) within the quantum mechanical/molecular mechanical (QM/MM) framework has been proposed to clarify reaction processes on the free energy surface (FES) in molecular assemblies. The method is based on response kernel approximations, denoted the charge and atom dipole response kernel (CDRK) model, that explicitly include induced atomic dipoles. The CDRK model was able to reproduce polarization effects both for electrostatic interactions between the QM and MM regions and for internal energies in the QM region obtained by conventional QM/MM methods. In contrast to charge response kernel (CRK) models, CDRK models can be applied to various kinds of molecules, even linear or planar molecules, without using imaginary interaction sites. Use of the CDRK model enabled us to obtain FEGs on QM atoms in significantly reduced computational time. It was also clearly demonstrated that the time development of QM forces of the solvated propylene carbonate radical cation (PC˙(+)) provided reliable results for a 1 ns molecular dynamics (MD) simulation, quantitatively in good agreement with expensive QM/MM results. Using the FEG and nudged elastic band (NEB) methods, we found two optimized reaction paths on the FES for decomposition reactions that generate CO2 molecules from PC˙(+); this reaction is known as one of the degradation mechanisms in the lithium-ion battery. Both reaction paths proceed through an identical intermediate structure whose molecular dipole moment is larger than that of the reactant and which is therefore stabilized in the solvent, which has a high relative dielectric constant. Thus, in order to prevent the decomposition reactions, PC˙(+) should be modified to have a smaller dipole moment along the two reaction paths.

  20. Invited Review. Combustion instability in spray-guided stratified-charge engines. A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fansler, Todd D.; Reuss, D. L.; Sick, V.

    2015-02-02

    Our article reviews systematic research on combustion instabilities (principally rare, random misfires and partial burns) in spray-guided stratified-charge (SGSC) engines operated at part load with highly stratified fuel–air–residual mixtures. Results from high-speed optical imaging diagnostics and numerical simulation provide a conceptual framework and quantify the sensitivity of ignition and flame propagation to strong, cyclically varying temporal and spatial gradients in the flow field and in the fuel–air–residual distribution. For SGSC engines using multi-hole injectors, spark stretching and locally rich ignition are beneficial. Moreover, combustion instability is dominated by convective flow fluctuations that impede motion of the spark or flame kernel toward the bulk of the fuel, coupled with low flame speeds due to locally lean mixtures surrounding the kernel. In SGSC engines using outwardly opening piezo-electric injectors, ignition and early flame growth are strongly influenced by the spray's characteristic recirculation vortex. For both injection systems, the spray and the intake/compression-generated flow field influence each other. Factors underlying the benefits of multi-pulse injection are identified. Finally, some unresolved questions include (1) the extent to which piezo-SGSC misfires are caused by failure to form a flame kernel rather than by flame-kernel extinction (as in multi-hole SGSC engines); (2) the relative contributions of partially premixed flame propagation and mixing-controlled combustion under the exceptionally late-injection conditions that permit SGSC operation on E85-like fuels with very low NOx and soot emissions; and (3) the effects of flow-field variability on later combustion, where fuel–air–residual mixing within the piston bowl becomes important.

  1. Almost analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2017-11-01

    We present a new, almost analytical approach to solving the matrix eigenvalue problem, or the integral equation, in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of solving this matrix eigenvalue problem purely numerically, which may suffer from computational inaccuracy for large data sets, we first consider a pair of integral and differential equations related to the so-called prolate spheroidal wave functions (PSWF). For the PSWF differential equation, the pairs of eigenvectors (PSWFs) and eigenvalues can be obtained from a relatively small number of analytical Legendre functions. Then, the eigenvalues in the PSWF integral equation are expressed in terms of functional values of the PSWFs and the eigenvalues of the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data: ordinary irregular waves and rogue waves. We found that the present almost analytical method is better than both the conventional data-independent Fourier representation and the conventional direct numerical K-L representation in terms of accuracy and computational cost. This work was supported by the National Research Foundation of Korea (NRF). (NRF-2017R1D1A1B03028299).
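
    For contrast with the PSWF route, the purely numerical K-L representation being improved on fits in a few lines: the K-L modes are the eigenvectors of the sampled covariance matrix. The squared-exponential covariance below is an arbitrary stand-in for an ocean-wave covariance, chosen only to keep the sketch self-contained.

        import numpy as np

        n, dt = 512, 0.1
        t = np.arange(n) * dt
        C = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)   # toy covariance

        evals, evecs = np.linalg.eigh(C)           # matrix K-L eigenproblem
        order = np.argsort(evals)[::-1]
        evals, evecs = evals[order], evecs[:, order]

        m = np.searchsorted(np.cumsum(evals) / evals.sum(), 0.99) + 1
        print(f"{m} K-L modes capture 99% of the variance")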

  2. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  3. Enhanced Data Representation by Kernel Metric Learning for Dementia Diagnosis

    PubMed Central

    Cárdenas-Peña, David; Collazos-Huertas, Diego; Castellanos-Dominguez, German

    2017-01-01

    Alzheimer's disease (AD) is the kind of dementia that affects the most people around the world. Therefore, early identification supporting effective treatment is required to increase the quality of life of a large number of patients. Recently, computer-aided diagnosis tools for dementia using Magnetic Resonance Imaging scans have been successfully proposed to discriminate between patients with AD, mild cognitive impairment, and healthy controls. Most of the attention has been given to clinical data provided by initiatives such as the ADNI, supporting reliable research on intervention, prevention, and treatment of AD. Therefore, there is a need to improve the performance of classification machines. In this paper, we propose a kernel framework for learning metrics that enhances conventional machines and supports the diagnosis of dementia. Our framework aims at building discriminative spaces through the maximization of the centered kernel alignment function, improving the discrimination of the three considered neurological classes. The proposed metric learning performance is evaluated on the widely known ADNI database using three supervised classification machines (k-nn, SVM and NNs) for multi-class and bi-class scenarios from structural MRIs. Specifically, 286 AD patients, 379 MCI patients, and 231 healthy controls from the ADNI collection are used for development and validation of our proposed metric learning framework. For the experimental validation, we split the data into two subsets: 30% of subjects used as a blindfolded assessment and 70% employed for parameter tuning. In the preprocessing stage, a total of 310 morphological measurements are automatically extracted from each structural MRI scan by the FreeSurfer software package and concatenated to build an input feature matrix. The obtained test performance results show that including supervised metric learning improves the compared baseline classifiers in both scenarios. In the multi-class scenario, we achieve the best performance (accuracy 60.1%) for a pretrained 1-layered NN, and we obtain measures over 90% on average for the HC vs. AD task. From the machine learning point of view, our proposal enhances classifier performance by building spaces with better class separability. From the clinical application point of view, our enhancement results in a more balanced performance in each class than the compared approaches from the CADDementia challenge, increasing the sensitivity for pathological groups and the specificity for healthy controls. PMID:28798659
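
    The objective being maximized can be stated in a few lines. The sketch computes the centered kernel alignment between a candidate kernel and the ideal label kernel y y^T; the optimization over metric parameters and the real MRI features are omitted, with random stand-ins of the stated dimensionality (310 features, three classes).

        import numpy as np

        def cka(K, L):
            """Centered kernel alignment between two Gram matrices."""
            n = K.shape[0]
            H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
            Kc, Lc = H @ K @ H, H @ L @ H
            return (Kc * Lc).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 310))           # stand-in morphometric features
        y = rng.integers(0, 3, 60)                   # stand-in AD / MCI / HC labels
        L = (y[:, None] == y[None, :]).astype(float) # ideal target kernel
        K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1) / 310)
        print(cka(K, L))   # the quantity maximized with respect to the metric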

  4. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  5. Global existence and exponential decay of the solution for a viscoelastic wave equation with a delay

    NASA Astrophysics Data System (ADS)

    Dai, Qiuyi; Yang, Zhifeng

    2014-10-01

    In this paper, we consider an initial-boundary value problem for a viscoelastic wave equation with a delay term in the interior feedback. Namely, we study this equation together with initial-boundary conditions of Dirichlet type in Ω × (0, +∞) and prove that, for arbitrary real numbers μ1 and μ2, the above-mentioned problem has a unique global solution under suitable assumptions on the kernel g. This improves on the results of the previous literature, such as Nicaise and Pignotti (SIAM J. Control Optim 45:1561-1585, 2006) and Kirane and Said-Houari (Z. Angew. Math. Phys. 62:1065-1082, 2011), by removing the restriction imposed on μ1 and μ2. Furthermore, we also obtain an exponential decay result for the energy of the concerned problem in the case μ1 = 0, which solves an open problem proposed by Kirane and Said-Houari (Z. Angew. Math. Phys. 62:1065-1082, 2011).

  6. Research on the hot deformation behavior of a Fe-Ni-Cr alloy (800H) at temperatures above 1000 °C

    NASA Astrophysics Data System (ADS)

    Cao, Yu; Di, Hongshuang

    2015-10-01

    Considering the pinning effect of fine carbides on grain boundaries, hot compression tests were performed above the dissolution temperature of Cr23C6 to investigate the hot deformation behavior of a Fe-Ni-Cr alloy (800H). The results show that the single peak stress associated with dynamic recrystallization (DRX) became more distinct at higher temperature and lower strain rate. The process of DRX was thoroughly stimulated when the alloy was deformed above 1000 °C. Constitutive equations for hot deformation were established by regression analysis of the conventional hyperbolic sine equation. The relationships between the Zener-Hollomon parameter (Z) and the characteristic points of the flow curves were established using the power-law relation. Furthermore, kernel average misorientation (KAM) and grain orientation spread (GOS) were used to map the distribution of local misorientation and estimate the fraction of DRX, respectively. The critical strain and peak strain were used to predict the kinetics of DRX with the Avrami-type equation.
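
    The regression behind such constitutive equations reduces to one linear least-squares fit once the hyperbolic sine law Z = (strain rate) * exp(Q/RT) = A * sinh(alpha*sigma)^n is linearized in logarithms. The sketch below demonstrates the fit on synthetic data generated from assumed values of A, n, Q, and alpha; none of these numbers are the paper's 800H results.

        import numpy as np

        R, alpha = 8.314, 0.012                      # J/(mol K); 1/MPa (assumed)
        A_true, n_true, Q_true = 1e13, 4.5, 4.0e5    # assumed ground truth

        T = np.repeat([1273.0, 1323.0, 1373.0], 4)   # temperatures, K
        rate = np.tile([0.01, 0.1, 1.0, 10.0], 3)    # strain rates, 1/s
        Z = rate * np.exp(Q_true / (R * T))
        sigma = np.arcsinh((Z / A_true) ** (1 / n_true)) / alpha   # invert the law

        # ln(rate) = ln A - Q/(R T) + n * ln sinh(alpha * sigma)
        M = np.column_stack([np.ones_like(T), -1 / (R * T),
                             np.log(np.sinh(alpha * sigma))])
        lnA, Q, n = np.linalg.lstsq(M, np.log(rate), rcond=None)[0]
        print(lnA, Q, n)                             # recovers the assumed values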

  7. Stability of the Superconducting d-Wave Pairing Toward the Intersite Coulomb Repulsion in CuO_2 Plane

    NASA Astrophysics Data System (ADS)

    Val'kov, V. V.; Dzebisashvili, D. M.; Korovushkin, M. M.; Barabanov, A. F.

    2018-06-01

    Taking into account the real crystalline structure of the CuO_2 plane and the strong spin-fermion coupling, we study the influence of the intersite Coulomb repulsion between holes on the Cooper instability of the spin-polaron quasiparticles in cuprate superconductors. The analysis shows that only the superconducting d-wave pairing is implemented in the whole region of doping, whereas solutions of the self-consistent equations for the s-wave pairing are absent. It is shown that the intersite Coulomb interaction V_1 between holes located at the nearest oxygen ions does not affect the d-wave pairing, because its Fourier transform V_q vanishes in the kernel of the corresponding integral equation. The intersite Coulomb interaction V_2 between quasiparticles located at the next-nearest oxygen ions does not vanish in the integral equations; nevertheless, it is shown that the d-wave pairing is robust toward this interaction for physically reasonable values of V_2.

  8. Geometric scaling behavior of the scattering amplitude for DIS with nuclei

    NASA Astrophysics Data System (ADS)

    Kormilitzin, Andrey; Levin, Eugene; Tapia, Sebastian

    2011-12-01

    The main question that we answer in this paper is whether the initial condition can influence the geometric scaling behavior of the amplitude for DIS at high energy. We rewrite the non-linear Balitsky-Kovchegov equation in a form which is useful for treating the interaction with nuclei. Using the simplified BFKL kernel, we find the analytical solution to this equation with the initial condition given by the McLerran-Venugopalan formula. This solution does not show the geometric scaling behavior of the amplitude deep inside the saturation region. On the other hand, the BFKL Pomeron calculus with the initial condition at x = 1/(mR) given by the solution to the Balitsky-Kovchegov equation leads to the geometric scaling behavior. The McLerran-Venugopalan formula is the natural initial condition for the Color Glass Condensate (CGC) approach. Therefore, our result offers a possibility to check experimentally which of the two approaches, CGC or BFKL Pomeron calculus, is more satisfactory.

  9. Stability of the Superconducting d-Wave Pairing Toward the Intersite Coulomb Repulsion in CuO_2 Plane

    NASA Astrophysics Data System (ADS)

    Val'kov, V. V.; Dzebisashvili, D. M.; Korovushkin, M. M.; Barabanov, A. F.

    2018-03-01

    Taking into account the real crystalline structure of the CuO_2 plane and the strong spin-fermion coupling, we study the influence of the intersite Coulomb repulsion between holes on the Cooper instability of the spin-polaron quasiparticles in cuprate superconductors. The analysis shows that only the superconducting d-wave pairing is implemented in the whole region of doping, whereas solutions of the self-consistent equations for the s-wave pairing are absent. It is shown that the intersite Coulomb interaction V_1 between holes located at the nearest oxygen ions does not affect the d-wave pairing, because its Fourier transform V_q vanishes in the kernel of the corresponding integral equation. The intersite Coulomb interaction V_2 between quasiparticles located at the next-nearest oxygen ions does not vanish in the integral equations; nevertheless, it is shown that the d-wave pairing is robust toward this interaction for physically reasonable values of V_2.

  10. A discontinuous Galerkin method for nonlinear parabolic equations and gradient flow problems with interaction potentials

    NASA Astrophysics Data System (ADS)

    Sun, Zheng; Carrillo, José A.; Shu, Chi-Wang

    2018-01-01

    We consider a class of time-dependent second order partial differential equations governed by a decaying entropy. The solution usually corresponds to a density distribution, hence positivity (non-negativity) is expected. This class of problems covers important cases such as Fokker-Planck type equations and aggregation models, which have been studied intensively in the past decades. In this paper, we design a high order discontinuous Galerkin method for such problems. If the interaction potential is not involved, or the interaction is defined by a smooth kernel, our semi-discrete scheme admits an entropy inequality on the discrete level. Furthermore, by applying the positivity-preserving limiter, our fully discretized scheme produces non-negative solutions for all cases under a time step constraint. Our method also applies to two dimensional problems on Cartesian meshes. Numerical examples are given to confirm the high order accuracy for smooth test cases and to demonstrate the effectiveness for preserving long time asymptotics.

  11. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
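
    The extension problem itself is easy to state in code. The sketch below is not the paper's hyper-RKHS construction: it simply regresses the columns of a given (synthetic) nonparametric Gram matrix onto a base Gaussian kernel with ridge regularization, so that the learned similarity can be evaluated at out-of-sample points, conveying the flavor of turning a kernel matrix into a kernel function.

        import numpy as np

        def gauss(A, B, s=1.0):
            d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * s * s))

        rng = np.random.default_rng(0)
        X = rng.standard_normal((40, 2))             # training points
        K = np.tanh(gauss(X, X))                     # synthetic "nonparametric" Gram

        # Fit C so that gauss(X, X) @ C approximates K, then extend.
        G = gauss(X, X)
        C = np.linalg.solve(G + 1e-6 * np.eye(40), K)    # ridge-regularized fit
        X_new = rng.standard_normal((5, 2))
        K_new = gauss(X_new, X) @ C                  # extended kernel rows (5 x 40)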

  12. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  13. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width, and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.

  14. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width, and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed. PMID:27070143

  15. Prognostic residual mean flow in an ocean general circulation model and its relation to prognostic Eulerian mean flow

    DOE PAGES

    Saenz, Juan A.; Chen, Qingshan; Ringler, Todd

    2015-05-19

    Recent work has shown that taking the thickness-weighted average (TWA) of the Boussinesq equations in buoyancy coordinates results in exact equations governing the prognostic residual mean flow where eddy–mean flow interactions appear in the horizontal momentum equations as the divergence of the Eliassen–Palm flux tensor (EPFT). It has been proposed that, given the mathematical tractability of the TWA equations, the physical interpretation of the EPFT, and its relation to potential vorticity fluxes, the TWA is an appropriate framework for modeling ocean circulation with parameterized eddies. The authors test the feasibility of this proposition and investigate the connections between the TWA framework and the conventional framework used in models, where Eulerian mean flow prognostic variables are solved for. Using the TWA framework as a starting point, this study explores the well-known connections between vertical transfer of horizontal momentum by eddy form drag and eddy overturning by the bolus velocity, used by Greatbatch and Lamb and Gent and McWilliams to parameterize eddies. After implementing the TWA framework in an ocean general circulation model, we verify our analysis by comparing the flows in an idealized Southern Ocean configuration simulated using the TWA and conventional frameworks with the same mesoscale eddy parameterization.

  16. Kernel-Based Approximate Dynamic Programming Using Bellman Residual Elimination

    DTIC Science & Technology

    2010-02-01

    framework is the ability to utilize stochastic system models, thereby allowing the system to make sound decisions even if there is randomness in the system ...approximate policy when a system model is unavailable. We present theoretical analysis of all BRE algorithms proving convergence to the optimal policy in...policies based on MDPs is that there may be parameters of the system model that are poorly known and/or vary with time as the system operates. System

  17. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  18. Efficient use of unlabeled data for protein sequence classification: a comparative study

    PubMed Central

    Kuksa, Pavel; Huang, Pai-Hsi; Pavlovic, Vladimir

    2009-01-01

    Background Recent studies in computational primary protein sequence analysis have leveraged the power of unlabeled data. For example, predictive models based on string kernels trained on sequences known to belong to particular folds or superfamilies, the so-called labeled data set, can attain significantly improved accuracy if this data is supplemented with protein sequences that lack any class tags–the unlabeled data. In this study, we present a principled and biologically motivated computational framework that more effectively exploits the unlabeled data by only using the sequence regions that are more likely to be biologically relevant for better prediction accuracy. As overly-represented sequences in large uncurated databases may bias the estimation of computational models that rely on unlabeled data, we also propose a method to remove this bias and improve performance of the resulting classifiers. Results Combined with state-of-the-art string kernels, our proposed computational framework achieves very accurate semi-supervised protein remote fold and homology detection on three large unlabeled databases. It outperforms current state-of-the-art methods and exhibits significant reduction in running time. Conclusion The unlabeled sequences used under the semi-supervised setting resemble the unpolished gemstones; when used as-is, they may carry unnecessary features and hence compromise the classification accuracy but once cut and polished, they improve the accuracy of the classifiers considerably. PMID:19426450
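
    For readers unfamiliar with string kernels, the simplest member of the family is the spectrum (k-mer count) kernel sketched below; the mismatch and profile kernels used in this literature, and the unlabeled-data reweighting proposed above, add machinery on top of it. The toy sequences are invented.

        from collections import Counter

        def spectrum_kernel(s, t, k=3):
            """Inner product of the k-mer count vectors of two sequences."""
            cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
            ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
            return sum(cs[m] * ct[m] for m in cs.keys() & ct.keys())

        print(spectrum_kernel("MKVLAAGVLA", "MKVIAAGLLA"))   # shared 3-mers: MKV, AAG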

  19. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  20. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
