Sample records for weakly singular kernels

  1. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  2. Nonlinear vibrations and dynamic stability of viscoelastic orthotropic rectangular plates

    NASA Astrophysics Data System (ADS)

    Eshmatov, B. Kh.

    2007-03-01

    This paper describes the analyses of the nonlinear vibrations and dynamic stability of viscoelastic orthotropic plates. The models are based on the Kirchhoff-Love (K.L.) hypothesis and Reissner-Mindlin (R.M.) generalized theory (with the incorporation of shear deformation and rotatory inertia) in geometrically nonlinear statements. It provides justification for the choice of the weakly singular Koltunov-Rzhanitsyn type kernel, with three rheological parameters. In addition, the implication of each relaxation kernel parameter has been studied. To solve problems of viscoelastic systems with weakly singular kernels of relaxation, a numerical method has been used, based on quadrature formulae. With a combination of the Bubnov-Galerkin and the presented method, problems of nonlinear vibrations and dynamic stability in viscoelastic orthotropic rectangular plates have been solved, according to the K.L. and R.M. hypotheses. A comparison of the results obtained via these theories is also presented. In all problems, the convergence of the Bubnov-Galerkin method has been investigated. The implications of material viscoelasticity on vibration and dynamic stability are presented graphically.

  3. Method of mechanical quadratures for solving singular integral equations of various types

    NASA Astrophysics Data System (ADS)

    Sahakyan, A. V.; Amirjanyan, H. A.

    2018-04-01

    The method of mechanical quadratures is proposed as a common approach intended for solving the integral equations defined on finite intervals and containing Cauchy-type singular integrals. This method can be used to solve singular integral equations of the first and second kind, equations with generalized kernel, weakly singular equations, and integro-differential equations. The quadrature rules for several different integrals represented through the same coefficients are presented. This allows one to reduce the integral equations containing integrals of different types to a system of linear algebraic equations.
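
    The quadrature rules themselves are not reproduced in the abstract. As an illustrative sketch (the classical Erdogan-Gupta form of the mechanical quadrature, not necessarily the authors' coefficients), the dominant Cauchy-type integral can be discretized at special collocation points where the rule becomes exact for polynomial densities:

```python
import math

def cauchy_pv_gauss_chebyshev(f, n):
    """Gauss-Chebyshev mechanical quadrature for the dominant Cauchy
    principal-value integral
        (1/pi) PV int_{-1}^{1} f(t) / ((t - x) sqrt(1 - t^2)) dt,
    evaluated at the collocation points x_j = cos(j*pi/n) (zeros of
    U_{n-1}) using the nodes t_k = cos((2k-1)*pi/(2n)) (zeros of T_n)."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    colloc = [math.cos(j * math.pi / n) for j in range(1, n)]
    vals = [sum(f(t) / (t - x) for t in nodes) / n for x in colloc]
    return colloc, vals

# For f(t) = T_1(t) = t the exact value is U_0(x) = 1 at every x.
xs, vs = cauchy_pv_gauss_chebyshev(lambda t: t, 8)
print(max(abs(v - 1.0) for v in vs))  # essentially zero: exact for polynomial f
```

    Because the nodes and collocation points interlace, no denominator vanishes, which is what lets the same coefficients serve several integral types at once.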

  4. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel, derived from a recent technique for numerically evaluating the exact thin wire kernel. It should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel itself, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the gradient of the wire kernel must be integrated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  5. On the solution of integral equations with a generalized cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    In this paper a certain class of singular integral equations that may arise from mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that, in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement, and consequently the kernel has strong singularities of the form (t-x)^{-2} and x^{n-2}(t+x)^{-n} (n ≥ 2, 0 < x, t < b). Complex function theory is used to determine the fundamental function of the problem for the general case, and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.

  6. Analysis of Drude model using fractional derivatives without singular kernels

    NASA Astrophysics Data System (ADS)

    Jiménez, Leonardo Martínez; García, J. Juan Rosales; Contreras, Abraham Ortega; Baleanu, Dumitru

    2017-11-01

    We report a study exploring the fractional Drude model in the time domain, using fractional derivatives without singular kernels, namely the Caputo-Fabrizio (CF) derivative and a fractional derivative with a stretched Mittag-Leffler kernel. It is shown that the velocity and current density of electrons moving through a metal depend on both the time and the fractional order 0 < γ ≤ 1. Due to the non-singular fractional kernels, it is possible to consider complete memory effects in the model, which appear neither in the ordinary model nor in the fractional Drude model with the Caputo fractional derivative. A comparison is also made between these two representations of the fractional derivatives, revealing a considerable difference when γ < 0.8.
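
    As background for the abstract above, here is a minimal numerical sketch of the Caputo-Fabrizio operator with a non-singular exponential kernel. The normalization M(γ) = 1 and the trapezoidal discretization are illustrative assumptions; the check uses the closed form for f(t) = t:

```python
import math

def cf_derivative(fprime, t, gamma, n=2000, M=1.0):
    """Caputo-Fabrizio fractional derivative with the non-singular
    exponential kernel, computed by the trapezoidal rule:
      D^gamma f(t) = M/(1-gamma) *
                     int_0^t f'(tau) exp(-gamma (t-tau)/(1-gamma)) dtau,
    with the normalization function M taken as 1 here."""
    h = t / n
    a = gamma / (1.0 - gamma)
    def integrand(tau):
        return fprime(tau) * math.exp(-a * (t - tau))
    s = 0.5 * (integrand(0.0) + integrand(t))
    s += sum(integrand(k * h) for k in range(1, n))
    return M / (1.0 - gamma) * h * s

# For f(t) = t the closed form is (1/gamma) * (1 - exp(-gamma t/(1-gamma))).
gamma, t = 0.5, 1.0
exact = (1.0 / gamma) * (1.0 - math.exp(-gamma * t / (1.0 - gamma)))
print(abs(cf_derivative(lambda tau: 1.0, t, gamma) - exact))  # O(h^2), tiny
```

    Note the contrast with the Caputo kernel (t-τ)^{-γ}: the exponential kernel is bounded at τ = t, so no special quadrature near the endpoint is needed.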

  7. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^{-m}, m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations in which the kernel contains, in addition to the dominant term (t-x)^{-m}, terms that become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
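
    The formulas themselves are not given in the abstract. As an illustration of the Hadamard finite-part interpretation for m = 2 (a sketch, not the authors' formulas), one can subtract the local behavior of the density at the singular point and integrate the bounded remainder; the node count and test function are arbitrary choices:

```python
import math

def hadamard_fp(f, fprime, x, n=1000):
    """Hadamard finite-part integral  FP int_{-1}^{1} f(t)/(t-x)^2 dt,
    -1 < x < 1, by subtracting the singular behavior of f at x:
      f(t)/(t-x)^2 = [f(t) - f(x) - f'(x)(t-x)]/(t-x)^2   (bounded)
                     + f(x)/(t-x)^2 + f'(x)/(t-x),
    using FP int dt/(t-x)^2 = -2/(1-x^2) and
          PV int dt/(t-x)   = log((1-x)/(1+x))."""
    h = 2.0 / n
    s = 0.0
    for k in range(n):  # midpoint rule on the regularized part
        t = -1.0 + (k + 0.5) * h
        s += (f(t) - f(x) - fprime(x) * (t - x)) / (t - x) ** 2
    return h * s + f(x) * (-2.0 / (1.0 - x * x)) \
                 + fprime(x) * math.log((1.0 - x) / (1.0 + x))

# Closed form for f(t) = t^3:
#   FP int t^3/(t-x)^2 dt = 4x + 3x^2 log((1-x)/(1+x)) - 2x^3/(1-x^2)
x = 0.3
exact = 4 * x + 3 * x * x * math.log((1 - x) / (1 + x)) - 2 * x ** 3 / (1 - x * x)
print(abs(hadamard_fp(lambda t: t ** 3, lambda t: 3 * t * t, x) - exact))
```

    The subtraction leaves a regular integrand, so any standard rule converges; only the two extracted integrals carry the finite-part and principal-value interpretations.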

  8. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1985-01-01

    In this paper some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^{-m}, m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations in which the kernel contains, in addition to the dominant term (t-x)^{-m}, terms that become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.

  9. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^{-m}, m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations in which the kernel contains, in addition to the dominant term (t-x)^{-m}, terms that become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.

  10. Boundary-element modelling of dynamics in external poroviscoelastic problems

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Litvinchuk, S. Yu; Ipatov, A. A.; Petrov, A. N.

    2018-04-01

    A problem of a spherical cavity in porous media is considered. The porous media are assumed to be isotropic poroelastic or isotropic poroviscoelastic. The poroviscoelastic formulation is treated as a combination of Biot's theory of poroelasticity and the elastic-viscoelastic correspondence principle. Viscoelastic models such as Kelvin–Voigt, the standard linear solid, and a model with a weakly singular kernel are considered. The boundary fields are studied with the help of the boundary element method, using the direct approach. The numerical scheme is based on the collocation method, a regularized boundary integral equation, and the Radau time-stepping scheme.

  11. Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions

    NASA Astrophysics Data System (ADS)

    El-Kalla, I. L.; Al-Bugami, A. M.

    2010-11-01

    In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE), with a generalized singular kernel, are discussed and proved in the space L2(Ω)×C(0,T). The Fredholm integral term (FIT) is considered in position, while the Volterra integral term (VIT) is considered in time. Using a numerical technique, we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by two different methods: the Toeplitz matrix method and the product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: the Carleman function, the logarithmic form, the Cauchy kernel, and the Hilbert kernel.
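
    The abstract names the Toeplitz matrix and product Nyström methods without detail. As a hedged sketch of the underlying Nyström idea on a smooth (non-singular) test kernel — for singular kernels such as those above, the trapezoidal weights would be replaced by product-integration weights that absorb the singular factor:

```python
import numpy as np

def nystrom_fredholm2(kernel, f, a, b, n):
    """Nystrom (quadrature) method for the Fredholm equation of the
    second kind  u(x) - int_a^b K(x,t) u(t) dt = f(x),
    using the composite trapezoidal rule; returns nodes and u(nodes)."""
    x = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)
    w[0] *= 0.5
    w[-1] *= 0.5
    # Collocate at the quadrature nodes: (I - K W) u = f
    A = np.eye(n + 1) - kernel(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, f(x))

# Smooth test kernel K(x,t) = x*t on [0,1]; with f(x) = 2x/3 the exact
# solution is u(x) = x, so the error below is pure quadrature error.
x, u = nystrom_fredholm2(lambda x, t: x * t, lambda x: 2 * x / 3, 0.0, 1.0, 100)
print(np.max(np.abs(u - x)))  # O(h^2)
```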

  12. Analysis of the cable equation with non-local and non-singular kernel fractional derivative

    NASA Astrophysics Data System (ADS)

    Karaagac, Berat

    2018-02-01

    Recently a new concept of differentiation was introduced in the literature where the kernel was converted from non-local singular to non-local and non-singular. One of the great advantages of this new kernel is its ability to portray fading memory and also well defined memory of the system under investigation. In this paper the cable equation which is used to develop mathematical models of signal decay in submarine or underwater telegraphic cables will be analysed using the Atangana-Baleanu fractional derivative due to the ability of the new fractional derivative to describe non-local fading memory. The existence and uniqueness of the more generalized model is presented in detail via the fixed point theorem. A new numerical scheme is used to solve the new equation. In addition, stability, convergence and numerical simulations are presented.

  13. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  14. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  15. FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition

    NASA Astrophysics Data System (ADS)

    Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.

    2015-10-01

    In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with a Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities, while it also captures the singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent a multispectral palmprint image, which is then modeled by Kernel Associative Memories. Finally, the recognition scheme is thoroughly tested with the benchmark multispectral palmprint database CASIA; a Bayesian classifier is used for recognition. The experimental results exhibit the robustness of the proposed system under different wavelengths of palm images.

  16. Singularity analysis based on wavelet transform of fractal measures for identifying geochemical anomaly in mineral exploration

    NASA Astrophysics Data System (ADS)

    Chen, Guoxiong; Cheng, Qiuming

    2016-02-01

    Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA is validated in a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving-averaging method, the novel LSA using WMD better identified weak geochemical anomalies associated with mineralization in the covered area.

  17. Modeling electro-magneto-hydrodynamic thermo-fluidic transport of biofluids with new trend of fractional derivative without singular kernel

    NASA Astrophysics Data System (ADS)

    Abdulhameed, M.; Vieru, D.; Roslan, R.

    2017-10-01

    This paper investigates the electro-magneto-hydrodynamic flow of non-Newtonian biofluids, with heat transfer, through a cylindrical microchannel. The fluid is acted on by an arbitrary time-dependent pressure gradient, an external electric field and an external magnetic field. The governing equations are formulated as fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivative without singular kernel. The usefulness of fractional calculus for studying fluid flows or heat and mass transfer phenomena has been proven, and several experimental measurements have led to the conclusion that, in such problems, models described by fractional differential equations are more suitable. The most common time-fractional derivative used in continuum mechanics is the Caputo derivative. However, two disadvantages appear when this derivative is used: first, its kernel is a singular function and, secondly, the analytical solutions of the problem are expressed through generalized functions (Mittag-Leffler, Lorenzo-Hartley, Robotnov, etc.) which, in general, are not well suited to numerical calculations. The new Caputo-Fabrizio time-fractional derivative, without singular kernel, is more suitable for solving various theoretical and practical problems involving fractional differential equations. Using the Caputo-Fabrizio derivative, calculations are simpler and the obtained solutions are expressed in terms of elementary functions. Analytical solutions for the biofluid velocity and thermal transport are obtained by means of the Laplace and finite Hankel transforms. The influence of the fractional parameter, the Eckert number and the Joule heating parameter on the biofluid velocity and thermal transport is analyzed numerically and presented graphically. This can be important in biochip technology, making this analysis technique effective for controlling bioliquid samples of nanovolumes in microfluidic devices used for biological analysis and medical diagnosis.

  18. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and by employing a non-linear inversion involving a scattering model-based kernel.

  19. Oscillatory singular integrals and harmonic analysis on nilpotent groups

    PubMed Central

    Ricci, F.; Stein, E. M.

    1986-01-01

    Several related classes of operators on nilpotent Lie groups are considered. These operators involve the following features: (i) oscillatory factors that are exponentials of imaginary polynomials, (ii) convolutions with singular kernels supported on lower-dimensional submanifolds, (iii) validity in the general context not requiring the existence of dilations that are automorphisms. PMID:16593640

  20. New numerical approximation of fractional derivative with non-local and non-singular kernel: Application to chaotic models

    NASA Astrophysics Data System (ADS)

    Toufik, Mekkaoui; Atangana, Abdon

    2017-10-01

    Recently a new concept of fractional differentiation with non-local and non-singular kernel was introduced in order to overcome the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme has been developed for the newly established fractional differentiation, and its error analysis is presented in general. The new numerical scheme was applied to solve linear and non-linear fractional differential equations. This method does not need a predictor-corrector to be efficient. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges toward the exact solution very rapidly.

  1. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  2. Chaotic processes using the two-parameter derivative with non-singular and non-local kernel: Basic theory and applications

    NASA Astrophysics Data System (ADS)

    Doungmo Goufo, Emile Franc

    2016-08-01

    After the issues of singularity and locality were recently addressed in mathematical modelling, another question regarding the description of natural phenomena was raised: how influential is the second parameter β of the two-parameter Mittag-Leffler function Eα,β(z), z ∈ ℂ? To answer this question, we generalize the newly introduced one-parameter derivative with non-singular and non-local kernel [A. Atangana and I. Koca, Chaos, Solitons Fractals 89, 447 (2016); A. Atangana and D. Baleanu (e-print)] by developing a similar two-parameter derivative with non-singular and non-local kernel based on Eα,β(z). We exploit the Agarwal/Erdelyi higher transcendental functions together with their Laplace transforms to establish explicit expressions of the Laplace transforms of the two-parameter derivatives, which are necessary for solving the related fractional differential equations. An explicit expression of the associated two-parameter fractional integral is also established. Concrete applications are given for the atmospheric convection process using the Lorenz non-linear simple system. An existence result for the model is provided and a numerical scheme is established. As expected, solutions exhibit chaotic behavior for α less than 0.55, and this chaos is not interrupted by the impact of β. Rather, this second parameter seems to indirectly squeeze and rotate the solutions, giving an impression of twisting; the whole graph seems to have completely changed its orientation toward a particular direction. This observation clearly shows the substantial impact of the second parameter of Eα,β(z), certainly opening new doors to modeling with two-parameter derivatives.
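
    For reference, the two-parameter Mittag-Leffler function Eα,β(z) that drives this construction can be evaluated directly from its defining series. A minimal sketch, adequate only for moderate |z| (the truncation length 60 is an illustrative choice):

```python
import math

def mittag_leffler(alpha, beta, z, terms=60):
    """Two-parameter Mittag-Leffler function
       E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta),
    by direct series summation (fine for moderate |z|; large |z|
    needs asymptotic or integral representations instead)."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

# Classical special cases recover elementary functions:
print(mittag_leffler(1, 1, 1.0))    # E_{1,1}(z) = e^z        -> e
print(mittag_leffler(2, 1, -1.0))   # E_{2,1}(-z^2) = cos z   -> cos(1)
print(mittag_leffler(1, 2, 1.0))    # E_{1,2}(z) = (e^z-1)/z  -> e - 1
```

    Varying β in these special cases already shifts the kernel's short-time weighting, which is the effect the abstract investigates for the two-parameter derivative.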

  3. Chaotic processes using the two-parameter derivative with non-singular and non-local kernel: Basic theory and applications.

    PubMed

    Doungmo Goufo, Emile Franc

    2016-08-01

    After the issues of singularity and locality were recently addressed in mathematical modelling, another question regarding the description of natural phenomena was raised: how influential is the second parameter β of the two-parameter Mittag-Leffler function Eα,β(z), z ∈ ℂ? To answer this question, we generalize the newly introduced one-parameter derivative with non-singular and non-local kernel [A. Atangana and I. Koca, Chaos, Solitons Fractals 89, 447 (2016); A. Atangana and D. Baleanu (e-print)] by developing a similar two-parameter derivative with non-singular and non-local kernel based on Eα,β(z). We exploit the Agarwal/Erdelyi higher transcendental functions together with their Laplace transforms to establish explicit expressions of the Laplace transforms of the two-parameter derivatives, which are necessary for solving the related fractional differential equations. An explicit expression of the associated two-parameter fractional integral is also established. Concrete applications are given for the atmospheric convection process using the Lorenz non-linear simple system. An existence result for the model is provided and a numerical scheme is established. As expected, solutions exhibit chaotic behavior for α less than 0.55, and this chaos is not interrupted by the impact of β. Rather, this second parameter seems to indirectly squeeze and rotate the solutions, giving an impression of twisting; the whole graph seems to have completely changed its orientation toward a particular direction. This observation clearly shows the substantial impact of the second parameter of Eα,β(z), certainly opening new doors to modeling with two-parameter derivatives.

  4. Solution of two-body relativistic bound state equations with confining plus Coulomb interactions

    NASA Technical Reports Server (NTRS)

    Maung, Khin Maung; Kahana, David E.; Norbury, John W.

    1992-01-01

    Studies of meson spectroscopy have often employed a nonrelativistic Coulomb plus Linear Confining potential in position space. However, because the quarks in mesons move at an appreciable fraction of the speed of light, it is necessary to use a relativistic treatment of the bound state problem. Such a treatment is most easily carried out in momentum space. However, the position space Linear and Coulomb potentials lead to singular kernels in momentum space. Using a subtraction procedure we show how to remove these singularities exactly and thereby solve the Schroedinger equation in momentum space for all partial waves. Furthermore, we generalize the Linear and Coulomb potentials to relativistic kernels in four dimensional momentum space. Again we use a subtraction procedure to remove the relativistic singularities exactly for all partial waves. This enables us to solve three dimensional reductions of the Bethe-Salpeter equation. We solve six such equations for Coulomb plus Confining interactions for all partial waves.

  5. The gravitational potential of axially symmetric bodies from a regularized green kernel

    NASA Astrophysics Data System (ADS)

    Trova, A.; Huré, J.-M.; Hersant, F.

    2011-12-01

    The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculation but also for generating approximations, in particular for geometrically thin discs and rings.

  6. The resolvent of singular integral equations. [of kernel functions in mixed boundary value problems

    NASA Technical Reports Server (NTRS)

    Williams, M. H.

    1977-01-01

    The investigation reported is concerned with the construction of the resolvent for any given kernel function. In problems with ill-behaved inhomogeneous terms, as, for instance, in the aerodynamic problem of flow over a flapped airfoil, direct numerical methods become very difficult. A solution method by resolvent that can be employed in such problems is described.

  7. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

    2007-01-01

    Recently, significant progress has been made in the handling of singular and nearly singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and the handling of higher-order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, the Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leave the remaining terms analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handle both.

  8. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
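
    The Euler-Maclaurin argument sketched above rests on the fact that, for a smooth periodic integrand, every correction term of the trapezoidal rule vanishes, so the rule converges faster than any power of the step. A minimal stdlib-only illustration with a test integrand whose exact value, 2πI₀(1), is computed from the Bessel series (the integrand choice is ours, not from the paper):

```python
import math

def trapezoid_periodic(g, n):
    """Composite trapezoidal rule over one period [0, 2*pi]. For a smooth
    periodic integrand the endpoint terms coincide, and the Euler-Maclaurin
    corrections all vanish, giving spectral (super-algebraic) accuracy."""
    h = 2 * math.pi / n
    return h * sum(g(k * h) for k in range(n))

# int_0^{2pi} exp(cos t) dt = 2*pi*I_0(1), with I_0(1) = sum (1/4)^k/(k!)^2.
i0 = sum((0.25 ** k) / math.factorial(k) ** 2 for k in range(20))
exact = 2 * math.pi * i0
for n in (4, 8, 16):
    err = abs(trapezoid_periodic(lambda t: math.exp(math.cos(t)), n) - exact)
    print(n, err)  # error drops super-algebraically as n grows
```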

  9. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

    Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. Firstly, the strength of the SVD-based step is that it exploits the target's global information to obtain a background estimate of an infrared image. A dim target is enhanced by subtracting the continually updated background estimate from the original image. Secondly, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the drift (excursion) problem. The GCF is adopted to preserve edges and suppress noise in the base sample of the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated with a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.

  10. A kernel function method for computing steady and oscillatory supersonic aerodynamics with interference.

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1973-01-01

    The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions is developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral with a 3/2-power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.

  11. Oscillatory supersonic kernel function method for interfering surfaces

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1974-01-01

    In the method presented in this paper, a collocation technique is used with the nonplanar supersonic kernel function to solve multiple lifting surface problems with interference in steady or oscillatory flow. The pressure functions used are based on conical flow theory solutions and provide faster solution convergence than is possible with conventional functions. In the application of the nonplanar supersonic kernel function, an improper integral of a 3/2 power singularity along the Mach hyperbola is described and treated. The method is compared with other theories and experiment for two wing-tail configurations in steady and oscillatory flow.

  12. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To cope with this problem effectively, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
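
    The singularity-avoidance point is generic: when a scatter matrix is rank deficient, an explicit inverse is meaningless, but a least-squares (pseudoinverse) solve still works. A small illustration with assumed toy dimensions, not the SKMFA algorithm itself:

```python
import numpy as np

# Toy scatter matrix: more features than samples -> rank deficient,
# so np.linalg.inv(S) would be numerically meaningless.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 10))       # 5 samples, 10 features (assumed sizes)
S = X.T @ X                            # 10x10, rank <= 5 -> singular
b = S @ rng.standard_normal(10)        # right-hand side in the range of S

# A least-squares / pseudoinverse solve sidesteps the explicit inverse.
w, *_ = np.linalg.lstsq(S, b, rcond=None)
print(np.allclose(S @ w, b))           # True: a valid solution without inv()
```

    Methods like SKMFA exploit the same idea at the level of their eigenproblem formulation, so small-sample-size data never forces an ill-defined inversion.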

  13. On the Kernel function of the integral equation relating lift and downwash distributions of oscillating wings in supersonic flow

    NASA Technical Reports Server (NTRS)

    Watkins, Charles E; Berman, Julian H

    1956-01-01

    This report treats the Kernel function of the integral equation that relates a known or prescribed downwash distribution to an unknown lift distribution for harmonically oscillating wings in supersonic flow. The treatment is essentially an extension to supersonic flow of the treatment given in NACA report 1234 for subsonic flow. For the supersonic case the Kernel function is derived by use of a suitable form of acoustic doublet potential which employs a cutoff or Heaviside unit function. The Kernel functions are reduced to forms that can be accurately evaluated by considering the functions in two parts: a part in which the singularities are isolated and analytically expressed, and a nonsingular part which can be tabulated.

  14. Wavelets on the Group SO(3) and the Sphere S3

    NASA Astrophysics Data System (ADS)

    Bernstein, Swanhild

    2007-09-01

    The construction of wavelets relies on translations and dilations, which are naturally given in R. On the sphere, translations can be regarded as rotations, but it is difficult to say what dilations are. For the 2-dimensional sphere there exist two different approaches to wavelets that are worth considering. The first concept goes back to Freeden and collaborators [2], who define wavelets by means of kernels of spherical singular integrals. The other concept, developed by Antoine, Vandergheynst and coworkers [3], is a purely group-theoretical approach that defines dilations as dilations in the tangent plane. Surprisingly, both concepts coincide for zonal functions. We will define wavelets on the 3-dimensional sphere by means of kernels of singular integrals and demonstrate that the wavelets constructed by Antoine and Vandergheynst for zonal functions meet our definition.

  15. Variation and oscillation for the multilinear singular integrals satisfying Hörmander type conditions.

    PubMed

    Xia, Yinhong

    2018-01-01

    Suppose that the kernel K satisfies a certain Hörmander type condition. Let b be a function satisfying [Formula: see text] for [Formula: see text], and let [Formula: see text] be a family of multilinear singular integral operators, i.e., [Formula: see text] The main purpose of this paper is to establish the weighted [Formula: see text]-boundedness of the variation operator and the oscillation operator for [Formula: see text].

  16. On the solution of integral equations with a generalized Cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    A certain class of singular integral equations that may arise from mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that, in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement, and consequently the kernel has strong singularities of the form (t-x)^(-2), x^(n-2)(t+x)^(-n) (n >= 2, 0 < x, t < b). Complex function theory is used to determine the fundamental function of the problem for the general case, and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.

  17. Regularity gradient estimates for weak solutions of singular quasi-linear parabolic equations

    NASA Astrophysics Data System (ADS)

    Phan, Tuoc

    2017-12-01

    This paper studies the Sobolev regularity for weak solutions of a class of singular quasi-linear parabolic problems of the form u_t - div[A(x, t, u, ∇u)] = div[F] with homogeneous Dirichlet boundary conditions over bounded spatial domains. Our main focus is on the case that the vector coefficients A are discontinuous and singular in the (x, t)-variables, and dependent on the solution u. Global and interior weighted W^(1,p)(Ω_T, ω)-regularity estimates are established for weak solutions of these equations, where ω is a weight function in some Muckenhoupt class of weights. The results obtained are new even for linear equations, and for ω = 1, because of the singularity of the coefficients in the (x, t)-variables.

  18. From Newton's Law to the Linear Boltzmann Equation Without Cut-Off

    NASA Astrophysics Data System (ADS)

    Ayi, Nathalie

    2017-03-01

    We provide a rigorous derivation of the linear Boltzmann equation without cut-off starting from a system of particles interacting via a potential with infinite range as the number of particles N goes to infinity under the Boltzmann-Grad scaling. More precisely, we describe the motion of a tagged particle in a gas close to global equilibrium. The main difficulty in our context is that, due to the infinite range of the potential, a non-integrable singularity appears in the angular collision kernel, so that Lanford's strategy alone is no longer valid. Our proof then relies on a combination of Lanford's strategy, of tools developed recently by Bodineau, Gallagher and Saint-Raymond to study the collision process, and of new duality arguments to study the additional terms associated with the long-range interaction, leading to some explicit weak estimates.

  19. A novel equivalent definition of Caputo fractional derivative without singular kernel and superconvergent analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zhengguang; Li, Xiaoli

    2018-05-01

    In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation removes the singular kernel, making the integral calculation more efficient. Furthermore, this definition remains valid when α is a positive integer. Besides, the T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For the numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
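
    For context, the baseline rate O(τ^(2-α)) quoted above is that of the standard L1 discretization of the classical Caputo derivative. A compact sketch of that baseline (not the paper's transformed scheme) is:

```python
import math
import numpy as np

def caputo_l1(u, tau, alpha):
    # Standard L1 scheme for the Caputo derivative of order alpha in (0,1):
    # piecewise-linear interpolation of u under the singular kernel,
    # with accuracy O(tau^(2-alpha)).
    n = len(u) - 1
    c = 1.0 / (math.gamma(2.0 - alpha) * tau**alpha)
    du = np.diff(u)
    out = np.zeros(n + 1)
    for k in range(1, n + 1):
        j = np.arange(k)
        w = (k - j)**(1.0 - alpha) - (k - j - 1)**(1.0 - alpha)
        out[k] = c * np.sum(w * du[:k])
    return out
```

    As a sanity check, for u(t) = t^2 the exact Caputo derivative is 2 t^(2-α) / Γ(3-α), which the scheme reproduces to the expected order as τ shrinks.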

  20. Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.

    2006-01-01

    In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the subtriangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient, i.e., tangent to the plane containing the source element.

  1. Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization

    PubMed Central

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

    When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination: a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated where the difference among subclasses belonging to different persons is maximized and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in the calculation: using PCA preprocessing, and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply kernel PCA to each single modality before performing SDA, while in KSDA-GSVD, we directly perform kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and that KSDA-GSVD achieves the best recognition performance. PMID:22778600

  3. Gravitational lensing by rotating naked singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyulchev, Galin N.; Yazadjiev, Stoytcho S.; Institut fuer Theoretische Physik, Universitaet Goettingen, Friedrich-Hund-Platz 1, D-37077 Goettingen

    We model massive compact objects in galactic nuclei as stationary, axially symmetric naked singularities in the Einstein-massless scalar field theory and study the resulting gravitational lensing. In the weak deflection limit we study analytically the position of the two weak field images, the corresponding signed and absolute magnifications as well as the centroid up to post-Newtonian order. We show that there are static post-Newtonian corrections to the signed magnification and their sum as well as to the critical curves, which are functions of the scalar charge. The shift of the critical curves as a function of the lens angular momentum is found, and it is shown that they decrease slightly for the weakly naked and vastly for the strongly naked singularities with the increase of the scalar charge. The pointlike caustics drift away from the optical axis and do not depend on the scalar charge. In the strong deflection limit approximation, we compute numerically the position of the relativistic images and their separability for weakly naked singularities. All of the lensing quantities are compared to particular cases as Schwarzschild and Kerr black holes as well as Janis-Newman-Winicour naked singularities.

  4. Interface with weakly singular points always scatter

    NASA Astrophysics Data System (ADS)

    Li, Long; Hu, Guanghui; Yang, Jiansheng

    2018-07-01

    Assume that a bounded scatterer is embedded into an infinite homogeneous isotropic background medium in two dimensions. The refractive index function is supposed to be piecewise constant. If the scattering interface contains a weakly singular point, we prove that the scattered field cannot vanish identically. This implies the absence of non-scattering energies for piecewise analytic interfaces with one singular point. Local uniqueness is obtained for shape identification problems in inverse medium scattering with a single far-field pattern.

  5. Weak solutions of the three-dimensional vorticity equation with vortex singularities

    NASA Technical Reports Server (NTRS)

    Winckelmans, G.; Leonard, A.

    1988-01-01

    The extension of the concept of vortex singularities, developed by Saffman and Meiron (1986) for the case of two-dimensional point vortices in an incompressible vortical flow, to the three-dimensional case of vortex sticks (vortons) is investigated analytically. The derivation of the governing equations is explained, and it is demonstrated that the formulation obtained conserves total vorticity and is a weak solution of the vorticity equation, making it an appropriate means for representing three-dimensional vortical flows with limited numbers of vortex singularities.

  6. On the solution of integral equations with a generalized Cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    A numerical technique is developed analytically to solve a class of singular integral equations occurring in mixed boundary-value problems for nonhomogeneous elastic media with discontinuities. The approach of Kaya and Erdogan (1987) is extended to treat equations with generalized Cauchy kernels, reformulating the boundary-value problems in terms of potentials as the unknown functions. The numerical implementation of the solution is discussed, and results for an epoxy-Al plate with a crack terminating at the interface and loading normal to the crack are presented in tables.

  7. Multidimensional NMR inversion without Kronecker products: Multilinear inversion

    NASA Astrophysics Data System (ADS)

    Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos

    2016-08-01

    Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
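
    The minimization idea can be sketched for a two-dimensional separable toy kernel, where the forward map K1 @ F @ K2.T and its adjoint replace any Kronecker product. The matrix names, sizes and the plain projected-gradient loop below are illustrative assumptions; the paper's framework also covers non-separable kernels and general regularization terms:

```python
import numpy as np

def multilinear_inversion(K1, K2, D, lam=1e-8, iters=5000):
    # Projected gradient descent for
    #   min_{F >= 0} 0.5*||K1 @ F @ K2.T - D||^2 + 0.5*lam*||F||^2,
    # evaluated with plain matrix products; the Kronecker product
    # kron(K2, K1) is never formed, so memory use stays small.
    F = np.zeros((K1.shape[1], K2.shape[1]))
    # Step size below the Lipschitz bound of the gradient.
    step = 1.0 / (np.linalg.norm(K1, 2)**2 * np.linalg.norm(K2, 2)**2 + lam)
    for _ in range(iters):
        R = K1 @ F @ K2.T - D                        # residual in data space
        F = np.maximum(F - step * (K1.T @ R @ K2 + lam * F), 0.0)
    return F
```

    Only the cost and its first derivative are needed, which mirrors the abstract's point that the method is easy to implement and extend with extra constraints.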

  8. A nonlinear quality-related fault detection approach based on modified kernel partial least squares.

    PubMed

    Jiao, Jianfang; Zhao, Ning; Wang, Guang; Yin, Shen

    2017-01-01

    In this paper, a new nonlinear quality-related fault detection method is proposed based on kernel partial least squares (KPLS) model. To deal with the nonlinear characteristics among process variables, the proposed method maps these original variables into feature space in which the linear relationship between kernel matrix and output matrix is realized by means of KPLS. Then the kernel matrix is decomposed into two orthogonal parts by singular value decomposition (SVD) and the statistics for each part are determined appropriately for the purpose of quality-related fault detection. Compared with relevant existing nonlinear approaches, the proposed method has the advantages of simple diagnosis logic and stable performance. A widely used literature example and an industrial process are used for the performance evaluation for the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Unsteady free convection flow of viscous fluids with analytical results by employing time-fractional Caputo-Fabrizio derivative (without singular kernel)

    NASA Astrophysics Data System (ADS)

    Ali Shah, Nehad; Mahsud, Yasir; Ali Zafar, Azhar

    2017-10-01

    This article introduces a theoretical study of the unsteady free convection flow of an incompressible viscous fluid. The fluid flows near an isothermal vertical plate. The plate has a translational motion with time-dependent velocity. The equations governing the fluid flow are expressed as fractional differential equations by using a newly defined time-fractional Caputo-Fabrizio derivative without singular kernel. Explicit solutions for velocity, temperature and solute concentration are obtained by applying the Laplace transform technique. As the fractional parameter approaches one, solutions for the ordinary fluid model are extracted from the general solutions of the fractional model. The results show that, for the fractional model, the obtained solutions for velocity, temperature and concentration exhibit a stationary jump discontinuity across the plane at t = 0, while the solutions are continuous functions in the case of the ordinary model. Finally, numerical results for the flow features at small times are illustrated through graphs for various pertinent parameters.
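
    The defining feature of the Caputo-Fabrizio operator, its bounded exponential kernel, makes direct quadrature straightforward, unlike the classical singular case. A sketch assuming the normalization M(α) = 1 (not the article's Laplace-transform solution procedure):

```python
import numpy as np

def caputo_fabrizio(df, t, alpha, n=4000):
    # Caputo-Fabrizio derivative with normalization M(alpha) = 1 assumed:
    #   D^alpha f(t) = 1/(1-alpha) * int_0^t f'(s) exp(-alpha*(t-s)/(1-alpha)) ds.
    # The exponential kernel stays bounded at s = t, so a plain trapezoidal
    # rule suffices; df is the classical derivative f'.
    s = np.linspace(0.0, t, n + 1)
    g = df(s) * np.exp(-alpha * (t - s) / (1.0 - alpha))
    h = t / n
    return h * (np.sum(g) - 0.5 * (g[0] + g[-1])) / (1.0 - alpha)

# For f(t) = t the closed form is (1 - exp(-alpha*t/(1-alpha))) / alpha.
alpha, t = 0.7, 2.0
numeric = caputo_fabrizio(lambda s: np.ones_like(s), t, alpha)
exact = (1.0 - np.exp(-alpha * t / (1.0 - alpha))) / alpha
```

    The f(t) = t closed form follows from integrating the exponential kernel directly, and it also exhibits the α → 1 limit recovering the ordinary derivative f'(t) = 1.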

  10. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    PubMed

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  11. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has been attracting considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, it is shown that the singular values mainly reflect the energy of the decomposed SCs; therefore, traditional SVD denoising approaches are essentially energy-based, which tend to highlight the high-energy regular components in the measured signal while ignoring the weak features caused by early faults. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels, rather than energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC in the reconstruction of the denoised signal. In this way, some weak but informative SCs can be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated by both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interferences.
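
    The mechanics of the approach (Hankel decomposition into SCs, information-based ranking, truncated linear weighting) can be sketched as follows. The `score` argument is a caller-supplied stand-in for the paper's periodic-modulation-intensity index, which is not reproduced here; the energy score used in the usage note is only a placeholder:

```python
import numpy as np

def diag_average(H):
    # Invert the Hankel map by averaging each anti-diagonal back to a signal.
    m, n = H.shape
    return np.array([H[::-1].diagonal(k).mean() for k in range(-(m - 1), n)])

def rsvd_denoise(x, m, score, keep=3):
    # Reweighted-SVD sketch: split the signal into singular components (SCs)
    # via an m-row Hankel matrix, rank them by `score`, and recombine the top
    # `keep` SCs with a truncated linear weighting.
    n = len(x) - m + 1
    H = np.lib.stride_tricks.sliding_window_view(x, n)   # m x n Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    sigs = [diag_average(s[i] * np.outer(U[:, i], Vt[i])) for i in range(len(s))]
    order = np.argsort([-score(c) for c in sigs])
    w = 1.0 - np.arange(keep) / keep                     # truncated linear weights
    return sum(w[r] * sigs[i] for r, i in enumerate(order[:keep]))
```

    Usage might look like `rsvd_denoise(x, 16, score=lambda c: float(np.sum(c * c)), keep=3)`; replacing the energy score with a periodicity-aware index is exactly the paper's point.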

  12. Exotic singularities and spatially curved loop quantum cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Parampreet; Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, Ontario N2L 2Y5; Vidotto, Francesca

    2011-03-15

    We investigate the occurrence of various exotic spacelike singularities in the past and the future evolution of k = ±1 Friedmann-Robertson-Walker models in loop quantum cosmology, using a sufficiently general phenomenological model for the equation of state. We highlight the nontrivial role played by the intrinsic curvature for these singularities and the new physics which emerges at the Planck scale. We show that quantum gravity effects generically resolve all strong curvature singularities, including big rip and big freeze singularities. The weak singularities, which include sudden and big brake singularities, are ignored by quantum gravity when spatial curvature is negative, as was previously found for the spatially flat model. Interestingly, for the spatially closed model there exist cases where weak singularities may be resolved when they occur in the past evolution. The spatially closed model exhibits another novel feature: for a particular class of equation of state, this model also exhibits an additional physical branch in loop quantum cosmology, a baby universe separated from the parent branch. Our analysis generalizes previous results obtained on the resolution of strong curvature singularities in flat models to isotropic spacetimes with nonzero spatial curvature.

  13. On the box-counting dimension of the potential singular set for suitable weak solutions to the 3D Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Wang, Yanqing; Wu, Gang

    2017-05-01

    In this paper, we are concerned with the upper box-counting dimension of the set of possible singular points in the space-time of suitable weak solutions to the 3D Navier-Stokes equations. By taking full advantage of the pressure Π in terms of …

  14. Partial regularity of weak solutions to a PDE system with cubic nonlinearity

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Xu, Xiangsheng

    2018-04-01

    In this paper we investigate regularity properties of weak solutions to a PDE system that arises in the study of biological transport networks. The system consists of a possibly singular elliptic equation for the scalar pressure of the underlying biological network coupled to a diffusion equation for the conductance vector of the network. There are several different types of nonlinearities in the system. Of particular mathematical interest is a term that is a polynomial function of solutions and their partial derivatives and this polynomial function has degree three. That is, the system contains a cubic nonlinearity. Only weak solutions to the system have been shown to exist. The regularity theory for the system remains fundamentally incomplete. In particular, it is not known whether or not weak solutions develop singularities. In this paper we obtain a partial regularity theorem, which gives an estimate for the parabolic Hausdorff dimension of the set of possible singular points.

  15. Steady/unsteady aerodynamic analysis of wings at subsonic, sonic and supersonic Mach numbers using a 3D panel method

    NASA Astrophysics Data System (ADS)

    Cho, Jeonghyun; Han, Cheolheui; Cho, Leesang; Cho, Jinsoo

    2003-08-01

    This paper treats the kernel function of an integral equation that relates a known or prescribed upwash distribution to an unknown lift distribution for a finite wing. The pressure kernel functions of the singular integral equation are summarized for all speed ranges in the Laplace transform domain. The sonic kernel function has been reduced to a form which can be conveniently evaluated as a finite limit from both the subsonic and supersonic sides as the Mach number tends to one. Several examples are solved, including rectangular wings, swept wings, a supersonic transport wing and a harmonically oscillating wing. Present results are given with other numerical data, showing continuous results through the unit Mach number. Computed results are in good agreement with other numerical results.

  16. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by combining non-adaptive, data-independent random projections with nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projection is a fast, non-adaptive dimensionality-reduction technique in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
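    The random-projection step described above can be sketched in a few lines. This is a hypothetical illustration of a Johnson-Lindenstrauss-style projection, not the authors' CKRL code; the function name and the dimensions are invented:

```python
import numpy as np

def random_project(features, k, seed=0):
    """Project high-dimensional feature vectors onto a random
    k-dimensional subspace (Johnson-Lindenstrauss style)."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    # Spherically random directions: Gaussian entries, scaled so that
    # squared norms are preserved in expectation.
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return features @ R

# 1000 samples with 500-dimensional features compressed to 50 dimensions.
X = np.random.default_rng(1).standard_normal((1000, 500))
Y = random_project(X, 50)
print(Y.shape)  # (1000, 50)
```

    Policy iteration would then run on the 50-dimensional projected features instead of the original 500-dimensional ones.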

  17. Contact and crack problems for an elastic wedge. [stress concentration in elastic half spaces

    NASA Technical Reports Server (NTRS)

    Erdogan, F.; Gupta, G. D.

    1974-01-01

    The contact and the crack problems for an elastic wedge of arbitrary angle are considered. The problem is reduced to a singular integral equation which, in the general case, may have a generalized Cauchy kernel. The singularities under the stamp as well as at the wedge apex are studied, and the relevant stress intensity factors are defined. The problem is solved for various wedge geometries and loading conditions. The results may be applicable to certain foundation problems and to crack problems in symmetrically loaded wedges in which cracks initiate from the apex.

  18. Finite conformal quantum gravity and spacetime singularities

    NASA Astrophysics Data System (ADS)

    Modesto, Leonardo; Rachwał, Lesław

    2017-12-01

    We show that a class of finite quantum non-local gravitational theories is conformally invariant at classical as well as at quantum level. This is actually a range of conformal anomaly-free theories in the spontaneously broken phase of the Weyl symmetry. At classical level we show how the Weyl conformal invariance is able to tame all the spacetime singularities that plague not only Einstein gravity, but also local and weakly non-local higher derivative theories. The latter statement is proved by a singularity theorem that applies to a large class of weakly non-local theories. Therefore, we are entitled to look for a solution of the spacetime singularity puzzle in a missed symmetry of nature, namely the Weyl conformal symmetry. Following the seminal paper by Narlikar and Kembhavi, we provide an explicit construction of singularity-free black hole exact solutions in a class of conformally invariant theories.

  19. Space-time domain solutions of the wave equation by a non-singular boundary integral method and Fourier transform.

    PubMed

    Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C

    2017-08-01

    The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.

  20. Heat kernel for the elliptic system of linear elasticity with boundary conditions

    NASA Astrophysics Data System (ADS)

    Taylor, Justin; Kim, Seick; Brown, Russell

    2014-10-01

    We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where the Dirichlet boundary condition is specified, then we show that the heat kernel has a Gaussian bound. As an application, we construct the Green's function for the elliptic mixed problem in such a domain.

  1. The Cucker-Smale Equation: Singular Communication Weight, Measure-Valued Solutions and Weak-Atomic Uniqueness

    NASA Astrophysics Data System (ADS)

    Mucha, Piotr B.; Peszek, Jan

    2018-01-01

    The Cucker-Smale flocking model belongs to a wide class of kinetic models that describe the collective motion of interacting particles exhibiting some specific tendency, e.g. to aggregate, flock or disperse. This paper examines the kinetic Cucker-Smale equation with a singular communication weight. Given a compactly supported measure as an initial datum, we construct a global-in-time weak measure-valued solution in the space C_weak(0, ∞; ℳ). The solution is defined as a mean-field limit of the empirical distributions of particles whose dynamics are governed by the Cucker-Smale particle system. The communication weight studied is ψ(s) = |s|^{-α} with α ∈ (0, 1/2). This range of singularity admits the sticking of characteristics/trajectories. The second result concerns the weak-atomic uniqueness property, stating that a weak solution initiated by a finite sum of atoms, i.e. Dirac deltas of the form m_i δ_{x_i} ⊗ δ_{v_i}, preserves its atomic structure. Hence such solutions coincide with the unique solutions of the system of ODEs associated with the Cucker-Smale particle system.
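    As a hypothetical numerical illustration (not taken from the paper), the underlying Cucker-Smale particle system with the singular weight ψ(s) = |s|^{-α} can be integrated directly in 1D; the particle positions, velocities, α and the time step below are invented for the sketch:

```python
import numpy as np

def cucker_smale_step(x, v, alpha, dt):
    """One explicit Euler step of the 1D Cucker-Smale particle system
    dv_i/dt = (1/n) sum_j psi(|x_j - x_i|) (v_j - v_i),
    with singular communication weight psi(s) = |s|^(-alpha)."""
    n = len(x)
    dist = np.abs(x[:, None] - x[None, :]) + np.eye(n)  # dummy 1s on diagonal
    psi = dist ** (-alpha)
    np.fill_diagonal(psi, 0.0)                          # no self-interaction
    dv = (psi * (v[None, :] - v[:, None])).sum(axis=1) / n
    return x + dt * v, v + dt * dv

# Three particles; the symmetric weights conserve total momentum while
# the velocity spread contracts (flocking).
x = np.array([0.0, 1.0, 2.0])
v = np.array([0.5, 0.0, -0.5])
for _ in range(2000):
    x, v = cucker_smale_step(x, v, alpha=0.25, dt=0.01)
print(v.max() - v.min())  # near 0: velocities have aligned
```

    With α ∈ (0, 1/2) the weight blows up only mildly as particles approach, which is the regime in which the paper allows characteristics to stick.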

  2. Weak variations of Lipschitz graphs and stability of phase boundaries

    NASA Astrophysics Data System (ADS)

    Grabovsky, Yury; Kucher, Vladislav A.; Truskinovsky, Lev

    2011-03-01

    In the case of Lipschitz extremals of vectorial variational problems, an important class of strong variations originates from smooth deformations of the corresponding non-smooth graphs. These seemingly singular variations, which can be viewed as combinations of weak inner and outer variations, produce directions of differentiability of the functional and lead to singularity-centered necessary conditions on strong local minima: an equality, arising from stationarity, and an inequality, implying configurational stability of the singularity set. To illustrate the underlying coupling between inner and outer variations, we study in detail the case of smooth surfaces of gradient discontinuity representing, for instance, martensitic phase boundaries in non-linear elasticity.

  3. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    PubMed

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with no-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularity; study of the application of reproducing kernels is therefore advantageous. The objective is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with existing methods. A two-dimensional reproducing kernel function in space is constructed and applied to computing the solution of the two-dimensional cardiac tissue model, by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages, such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.
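    The paper's two-dimensional kernel is not reproduced in the abstract. As a hypothetical one-dimensional illustration of the reproducing kernel idea, the kernel K(x, y) = min(x, y), which reproduces the Sobolev-type space of H¹[0,1] functions vanishing at 0, can be used to interpolate scattered data by solving a Gram system:

```python
import numpy as np

def rk_interpolate(nodes, values, K):
    """Interpolate by a combination sum_j c_j K(x, x_j); the
    coefficients solve the Gram (kernel) system G c = values."""
    G = K(nodes[:, None], nodes[None, :])
    c = np.linalg.solve(G, values)
    return lambda x: K(x[:, None], nodes[None, :]) @ c

# Reproducing kernel of {f in H^1[0,1] : f(0) = 0}: K(x, y) = min(x, y).
K = np.minimum
nodes = np.linspace(0.05, 1.0, 12)       # scattered nodes, no mesh needed
f = lambda x: np.sin(3 * x) * x
interp = rk_interpolate(nodes, f(nodes), K)

xs = np.linspace(0.05, 1.0, 200)
err = np.max(np.abs(interp(xs) - f(xs)))
```

    The interpolant is exact at the nodes by construction, and the nodes can be placed arbitrarily, which mirrors the meshfree flexibility claimed in the abstract.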

  4. On nonsingular potentials of Cox-Thompson inversion scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmai, Tamas; Apagyi, Barnabas

    2010-02-15

    We establish a condition for obtaining nonsingular potentials using the Cox-Thompson inverse scattering method with one phase shift. The anomalous singularities of the potentials are avoided by maintaining unique solutions of the underlying Regge-Newton integral equation for the transformation kernel. As a by-product, new inequality sequences of zeros of Bessel functions are discovered.

  5. Recent Results on Singularity Strengths

    NASA Astrophysics Data System (ADS)

    Nolan, Brien

    2002-12-01

    In this contribution, we review some recent results on strengths of singularities. In a space-time (M, g), let γ: [τ0, 0) → M be an incomplete, inextendible causal geodesic, affinely parametrised by τ, with tangent k. Let J(τ1) denote the set of Jacobi fields along γ that are orthogonal to γ and vanish at time τ1 ≥ τ0; that is, ξ ∈ J(τ1) iff D²ξ^a = -R_{bcd}{}^a k^b k^d ξ^c, g_{ab} ξ^a k^b = 0, and ξ(τ1) = 0. Let V_{τ1}(τ) be the volume element defined by a full set of independent elements of J(τ1) (2-dimensional for null geodesics, 3-dimensional for time-like), and set V_{τ1} := ‖V_{τ1}‖. Definition (Tipler 1977): γ terminates in a gravitationally strong singularity if for all 0 > τ1 ≥ τ0, lim inf_{τ→0⁻} V_{τ1}(τ) = 0, and in a gravitationally weak one if lim inf_{τ→0⁻} V_{τ1}(τ) > 0. The interpretation is that at a strong singularity, an extended body, e.g. a gravitational wave detector, is crushed to zero volume by the singularity. Tipler's definition does not take account of the possibilities that (i) V → ∞, or (ii) V tends to a finite, non-zero value but with infinite stretching/crushing in orthogonal directions (a 'spaghettifying' singularity). Extended definition (Nolan 1999): the singularity is strong if either V → 0 or V → ∞, or if for every τ1 there is an element ξ of J(τ1) satisfying ‖ξ‖ → 0; otherwise it is weak. (Ori 2000): the singularity is 'deformationally strong' if either (i) it is Tipler-strong or (ii) for every τ1 there is an element ξ of J(τ1) satisfying ‖ξ‖ → ∞; otherwise it is deformationally weak.

  6. w-cosmological singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez-Jambrina, L.

    2010-12-15

    In this paper we characterize barotropic index singularities of homogeneous isotropic cosmological models [M. P. Dabrowski and T. Denkiewicz, Phys. Rev. D 79, 063521 (2009)]. They are shown to appear in cosmologies for which the scale factor is analytical, with a Taylor series in which the linear and quadratic terms are absent. Though the barotropic index of the perfect fluid is singular, the singularities are weak, as happens for other models for which the density and the pressure are regular.

  7. Interaction between a circular inclusion and an arbitrarily oriented crack

    NASA Technical Reports Server (NTRS)

    Erdogan, F.; Gupta, G. D.; Ratwani, M.

    1975-01-01

    The plane interaction problem for a circular elastic inclusion embedded in an elastic matrix which contains an arbitrarily oriented crack is considered. Using the existing solutions for the edge dislocations as Green's functions, first the general problem of a through crack in the form of an arbitrary smooth arc located in the matrix in the vicinity of the inclusion is formulated. The integral equations for the line crack are then obtained as a system of singular integral equations with simple Cauchy kernels. The singular behavior of the stresses around the crack tips is examined and the expressions for the stress-intensity factors representing the strength of the stress singularities are obtained in terms of the asymptotic values of the density functions of the integral equations. The problem is solved for various typical crack orientations and the corresponding stress-intensity factors are given.

  8. The Singular Set of Solutions to Non-Differentiable Elliptic Systems

    NASA Astrophysics Data System (ADS)

    Mingione, Giuseppe

    We estimate the Hausdorff dimension of the singular set of solutions to non-differentiable elliptic systems: if the vector fields a and b are Hölder continuous with respect to the variable x with exponent α, then the Hausdorff dimension of the singular set of any weak solution is at most n − 2α.

  9. Acute cyanide toxicity caused by apricot kernel ingestion.

    PubMed

    Suchard, J R; Wallace, K L; Gerkin, R D

    1998-12-01

    A 41-year-old woman ingested apricot kernels purchased at a health food store and became weak and dyspneic within 20 minutes. The patient was comatose and hypothermic on presentation but responded promptly to antidotal therapy for cyanide poisoning. She was later treated with a continuous thiosulfate infusion for persistent metabolic acidosis. This is the first reported case of cyanide toxicity from apricot kernel ingestion in the United States since 1979.

  10. Tachyon field in loop quantum cosmology: An example of traversable singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Lifang; Zhu Jianyang

    2009-06-15

    Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universe through a bounce in the high energy region. But LQC has an ambiguity about the quantization scheme. Recently, the authors in [Phys. Rev. D 77, 124008 (2008)] proposed a new quantization scheme. Similar to others, this new quantization scheme also replaces the big bang singularity with the quantum bounce. More interestingly, it introduces a quantum singularity, which is traversable. We investigate this novel dynamics quantitatively with a tachyon scalar field, which gives us a concrete example. Our result shows that our universe can evolve through the quantum singularity regularly, which is different from the classical big bang singularity. So this singularity is only a weak singularity.

  11. Green's functions for dislocations in bonded strips and related crack problems

    NASA Technical Reports Server (NTRS)

    Ballarini, R.; Luo, H. A.

    1990-01-01

    Green's functions are derived for the plane elastostatics problem of a dislocation in a bimaterial strip. Using these fundamental solutions as kernels, various problems involving cracks in a bimaterial strip are analyzed using singular integral equations. For each problem considered, stress intensity factors are calculated for several combinations of the parameters which describe loading, geometry and material mismatch.

  12. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve prediction accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. The filtered traffic flow data are then used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust.

  13. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve prediction accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. The filtered traffic flow data are then used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust. PMID:27551829
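    The SSA denoising step used in the pipeline above can be sketched generically: embed the series in a Hankel (trajectory) matrix, truncate its SVD, and diagonal-average back to a series. This is a minimal textbook sketch on synthetic data, not the authors' code; the window length and rank are invented:

```python
import numpy as np

def ssa_filter(series, window, rank):
    """Denoise a 1-D series via singular spectrum analysis:
    Hankel embedding -> truncated SVD -> diagonal averaging."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal averaging (Hankelisation) back to a series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

# A noisy sinusoid: a pure sinusoid has a rank-2 trajectory matrix,
# so rank-2 reconstruction recovers it while discarding most noise.
t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).standard_normal(200)
smooth = ssa_filter(noisy, window=40, rank=2)
```

    In the SSA-KELM pipeline, `smooth` rather than `noisy` would be fed to the learning machine.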

  14. Resummed memory kernels in generalized system-bath master equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  15. Numerical techniques in radiative heat transfer for general, scattering, plane-parallel media

    NASA Technical Reports Server (NTRS)

    Sharma, A.; Cogley, A. C.

    1982-01-01

    The study of radiative heat transfer with scattering usually leads to the solution of singular Fredholm integral equations. The present paper presents an accurate and efficient numerical method to solve certain integral equations that govern radiative equilibrium problems in plane-parallel geometry for both grey and nongrey, anisotropically scattering media. In particular, the nongrey problem is represented by a spectral integral of a system of nonlinear integral equations in space, which has not been solved previously. The numerical technique is constructed to handle this unique nongrey governing equation as well as the difficulties caused by singular kernels. Example problems are solved and the method's accuracy and computational speed are analyzed.

  16. Convergence of Weak Kähler-Ricci Flows on Minimal Models of Positive Kodaira Dimension

    NASA Astrophysics Data System (ADS)

    Eyssidieux, Philippe; Guedj, Vincent; Zeriahi, Ahmed

    2018-02-01

    Studying the behavior of the Kähler-Ricci flow on mildly singular varieties, one is naturally led to study weak solutions of degenerate parabolic complex Monge-Ampère equations. In this article, the third of a series on this subject, we study the long-term behavior of the normalized Kähler-Ricci flow on mildly singular varieties of positive Kodaira dimension, generalizing results of Song and Tian, who dealt with smooth minimal models.

  17. Time delay and magnification centroid due to gravitational lensing by black holes and naked singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virbhadra, K. S.; Keeton, C. R.; Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854

    We model the massive dark object at the center of the Galaxy as a Schwarzschild black hole as well as Janis-Newman-Winicour naked singularities, characterized by the mass and scalar charge parameters, and study gravitational lensing (particularly time delay, magnification centroid, and total magnification) by them. We find that the lensing features are qualitatively similar (though quantitatively different) for Schwarzschild black holes, weakly naked, and marginally strongly naked singularities. However, the lensing characteristics of strongly naked singularities are qualitatively very different from those due to Schwarzschild black holes. The images produced by Schwarzschild black hole lenses and weakly naked and marginally strongly naked singularity lenses always have positive time delays. On the other hand, strongly naked singularity lenses can give rise to images with positive, zero, or negative time delays. In particular, for a large angular source position the direct image (the outermost image on the same side as the source) due to strongly naked singularity lensing always has a negative time delay. We also found that the scalar field decreases the time delay and increases the total magnification of images; this result could have important implications for cosmology. As the Janis-Newman-Winicour metric also describes the exterior gravitational field of a scalar star, naked singularities as well as scalar star lenses, if these exist in nature, will serve as more efficient cosmic telescopes than regular gravitational lenses.

  18. Bose-Einstein condensation on a manifold with non-negative Ricci curvature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akant, Levent, E-mail: levent.akant@boun.edu.tr; Ertuğrul, Emine, E-mail: emine.ertugrul@boun.edu.tr; Tapramaz, Ferzan, E-mail: waskhez@gmail.com

    The Bose-Einstein condensation for an ideal Bose gas and for a dilute weakly interacting Bose gas in a manifold with non-negative Ricci curvature is investigated using the heat kernel and eigenvalue estimates of the Laplace operator. The main focus is on the nonrelativistic gas. However, the special relativistic ideal gas is also discussed. The thermodynamic limit of the heat kernel and eigenvalue estimates is taken and the results are used to derive bounds for the depletion coefficient. In the case of a weakly interacting gas, the Bogoliubov approximation is employed. The ground state is analyzed using heat kernel methods and finite size effects on the ground state energy are proposed. The justification of the c-number substitution on a manifold is given.

  19. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David B.

    2012-01-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
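    As a small generic illustration (not the paper's construction), the von Mises-Fisher kernel mentioned above has, on the unit 2-sphere, the density C(κ) exp(κ μ·x) with C(κ) = κ / (4π sinh κ); its normalization can be checked by quadrature. The concentration κ and mean direction μ below are invented:

```python
import numpy as np

def vmf_density(x, mu, kappa):
    """von Mises-Fisher density on the unit 2-sphere S^2."""
    norm = kappa / (4.0 * np.pi * np.sinh(kappa))
    return norm * np.exp(kappa * (x @ mu))

# Check that the density integrates to 1 over S^2 on a theta-phi grid
# (area element sin(theta) dtheta dphi).
kappa, mu = 5.0, np.array([0.0, 0.0, 1.0])
theta = np.linspace(0.0, np.pi, 400)
phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
T, P = np.meshgrid(theta, phi, indexing="ij")
pts = np.stack([np.sin(T) * np.cos(P),
                np.sin(T) * np.sin(P),
                np.cos(T)], axis=-1)
vals = vmf_density(pts.reshape(-1, 3), mu, kappa).reshape(T.shape)
integral = np.sum(vals * np.sin(T)) * (theta[1] - theta[0]) * (phi[1] - phi[0])
print(integral)  # ≈ 1.0
```

    A kernel mixture model then places such densities at the support points of the mixing distribution, with κ controlling the bandwidth.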

  20. Boundary-layer effects in composite laminates: Free-edge stress singularities, part 6

    NASA Technical Reports Server (NTRS)

    Wang, S. S.; Choi, I.

    1981-01-01

    A rigorous mathematical model was obtained for the boundary-layer free-edge stress singularity in angleplied and crossplied fiber composite laminates. The solution was obtained using a method consisting of complex-variable stress function potentials and eigenfunction expansions. The required order of the boundary-layer stress singularity is determined by solving the transcendental characteristic equation obtained from the homogeneous solution of the partial differential equations. Numerical results show that the boundary-layer stress singularity depends only upon the material elastic constants and the fiber orientation of the adjacent plies. For angleplied and crossplied laminates the singularity is in general weak.

  1. Signature of phase singularities in diffusive regimes in disordered waveguide lattices: interplay and qualitative analysis

    NASA Astrophysics Data System (ADS)

    Ghosh, Somnath

    2018-05-01

    The co-existence and interplay of mesoscopic light dynamics and singular optics in spatially random but temporally coherent disordered waveguide lattices are reported. Two CW light beams at 1.55 μm operating wavelength are launched as inputs to 1D waveguide lattices with controllable weak disorder in the refractive index profile. Direct observation of phase singularities in the speckle pattern along the propagation length is numerically demonstrated. The onset of such singular behavior and its relation to diffusive wave propagation are quantitatively analyzed for the first time.
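    Phase singularities in a computed speckle field are conventionally located by checking where the wrapped phase winds by ±2π around each grid plaquette. A minimal generic sketch (not the paper's code; the test field is an idealized single vortex):

```python
import numpy as np

def wrap(a):
    """Wrap phase differences into [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def vortex_charges(field):
    """Topological charge on each grid plaquette of a complex field:
    +/-1 where the phase winds by +/-2*pi (a phase singularity)."""
    ph = np.angle(field)
    # Wrapped phase differences around each elementary plaquette loop.
    d1 = wrap(ph[1:, :-1] - ph[:-1, :-1])
    d2 = wrap(ph[1:, 1:] - ph[1:, :-1])
    d3 = wrap(ph[:-1, 1:] - ph[1:, 1:])
    d4 = wrap(ph[:-1, :-1] - ph[:-1, 1:])
    return np.rint((d1 + d2 + d3 + d4) / (2.0 * np.pi)).astype(int)

# A field with a single optical vortex placed off the grid nodes:
y, x = np.mgrid[-1:1:64j, -1:1:64j]
field = (x - 0.013) + 1j * (y + 0.021)
charges = vortex_charges(field)
print(np.abs(charges).sum())  # exactly one singularity detected
```

    The same detector applied to a simulated speckle pattern in the disordered lattice would mark the vortex lines whose onset the paper analyzes.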

  2. Fracture and fatigue analysis of functionally graded and homogeneous materials using singular integral equation approach

    NASA Astrophysics Data System (ADS)

    Zhao, Huaqing

    There are two major objectives of this thesis work. One is to study theoretically the fracture and fatigue behavior of both homogeneous and functionally graded materials, with or without crack bridging. The other is to further develop the singular integral equation approach in solving mixed boundary value problems. The newly developed functionally graded materials (FGMs) have attracted considerable research interest as candidate materials for structural applications ranging from aerospace to automobile to manufacturing. From the mechanics viewpoint, the unique feature of FGMs is that their resistance to deformation, fracture and damage varies spatially. In order to guide the microstructure selection and the design and performance assessment of components made of functionally graded materials, in this thesis work, a series of theoretical studies has been carried out on the mode I stress intensity factors and crack opening displacements for FGMs with different combinations of geometry and material under various loading conditions, including: (1) a functionally graded layer under uniform strain, far field pure bending and far field axial loading, (2) a functionally graded coating on an infinite substrate under uniform strain, and (3) a functionally graded coating on a finite substrate under uniform strain, far field pure bending and far field axial loading. In solving crack problems in homogeneous and non-homogeneous materials, a very powerful singular integral equation (SIE) method has been developed since the 1960s by Erdogan and associates to solve mixed boundary value problems. However, some of the kernel functions developed earlier are incomplete and possibly erroneous. In this thesis work, mode I fracture problems in a homogeneous strip are reformulated and accurate singular Cauchy-type kernels are derived. Very good convergence rates and consistency with standard data are achieved.
Other kernel functions are subsequently developed for mode I fracture in functionally graded materials. This work provides a solid foundation for further applications of the singular integral equation approach to fracture and fatigue problems in advanced composites. The concept of crack bridging is a unifying theory for fracture at various length scales, from atomic cleavage to rupture of concrete structures. However, most of the previous studies are limited to small scale bridging analyses although large scale bridging conditions prevail in engineering materials. In this work, a large scale bridging analysis is included within the framework of singular integral equation approach. This allows us to study fracture, fatigue and toughening mechanisms in advanced materials with crack bridging. As an example, the fatigue crack growth of grain bridging ceramics is studied. With the advent of composite materials technology, more complex material microstructures are being introduced, and more mechanics issues such as inhomogeneity and nonlinearity come into play. Improved mathematical and numerical tools need to be developed to allow theoretical modeling of these materials. This thesis work is an attempt to meet these challenges by making contributions to both micromechanics modeling and applied mathematics. It sets the stage for further investigations of a wide range of problems in the deformation and fracture of advanced engineering materials.
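    A standard numerical workhorse for such Cauchy-kernel equations is the Gauss-Chebyshev quadrature associated with Erdogan and Gupta: sample the density at Chebyshev nodes of the first kind and collocate at interlacing points, so the principal-value integral never hits a node. The sketch below (generic, not from the thesis) checks the rule against the known identity (1/π) ∫ T_n(t)/(√(1-t²)(t-x)) dt = U_{n-1}(x):

```python
import numpy as np

def cauchy_quadrature(g, n):
    """Gauss-Chebyshev rule for the dominant Cauchy singular integral
    (1/pi) ∫_{-1}^{1} g(t) / (sqrt(1-t^2) (t-x)) dt,
    evaluated at the collocation points x_r = cos(pi r / n)."""
    k = np.arange(1, n + 1)
    t = np.cos(np.pi * (2 * k - 1) / (2 * n))   # integration nodes
    r = np.arange(1, n)
    x = np.cos(np.pi * r / n)                   # interlacing collocation points
    vals = np.array([np.sum(g(t) / (t - xr)) / n for xr in x])
    return x, vals

# Known pair: g = T_2 gives U_1(x) = 2x.
x, approx = cauchy_quadrature(lambda t: 2.0 * t ** 2 - 1.0, n=8)
```

    In a crack problem, g is the unknown dislocation density; imposing the integral equation at the collocation points turns it into a linear system, with stress intensity factors read off from the endpoint behavior of g.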

  3. The Hawking-Penrose Singularity Theorem for C^{1,1}-Lorentzian Metrics

    NASA Astrophysics Data System (ADS)

    Graf, Melanie; Grant, James D. E.; Kunzinger, Michael; Steinbauer, Roland

    2018-06-01

    We show that the Hawking-Penrose singularity theorem, and the generalisation of this theorem due to Galloway and Senovilla, continue to hold for Lorentzian metrics of C^{1,1}-regularity. We formulate appropriate weak versions of the strong energy condition and genericity condition for C^{1,1}-metrics, and of C^0-trapped submanifolds. By regularisation, we show that, under these weak conditions, causal geodesics necessarily become non-maximising. This requires a detailed analysis of the matrix Riccati equation for the approximating metrics, which may be of independent interest.

  4. Singular Vectors' Subtle Secrets

    ERIC Educational Resources Information Center

    James, David; Lachance, Michael; Remski, Joan

    2011-01-01

    Social scientists use adjacency tables to discover influence networks within and among groups. Building on work by Moler and Morrison, we use ordered pairs from the components of the first and second singular vectors of adjacency matrices as tools to distinguish these groups and to identify particularly strong or weak individuals.
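
The construction described above can be sketched in a few lines. The adjacency table below is a hypothetical toy example (two tight groups with weak cross-group ties), not the authors' data.

```python
import numpy as np

# Two four-member groups with strong in-group ties and weak cross-group ties.
A = np.kron(np.array([[1.0, 0.1], [0.1, 1.0]]), np.ones((4, 4)))

U, s, Vt = np.linalg.svd(A)
pairs = list(zip(U[:, 0], U[:, 1]))  # ordered pairs from first two singular vectors

# The first component is (near-)constant across individuals; the sign of the
# second component separates the two groups.
signs = np.sign(U[:, 1])
```

Plotting the ordered pairs makes the two clusters visible at a glance; the magnitude of each component also grades individuals as strong or weak within their group.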

  5. Baker-Akhiezer Spinor Kernel and Tau-functions on Moduli Spaces of Meromorphic Differentials

    NASA Astrophysics Data System (ADS)

    Kalla, C.; Korotkin, D.

    2014-11-01

    In this paper we study the Baker-Akhiezer spinor kernel on moduli spaces of meromorphic differentials on Riemann surfaces. We introduce the Baker-Akhiezer tau-function which is related to both the Bergman tau-function (which was studied before in the context of Hurwitz spaces and spaces of holomorphic Abelian and quadratic differentials) and the KP tau-function on such spaces. In particular, we derive variational formulas of Rauch-Ahlfors type on moduli spaces of meromorphic differentials with prescribed singularities: we use the system of homological coordinates, consisting of absolute and relative periods of the meromorphic differential, and show how to vary the fundamental objects associated to a Riemann surface (the matrix of b-periods, normalized Abelian differentials, the Bergman bidifferential, the Szegö kernel and the Baker-Akhiezer spinor kernel) with respect to these coordinates. The variational formulas encode dependence both on the moduli of the Riemann surface and on the choice of meromorphic differential (variation of the meromorphic differential while keeping the Riemann surface fixed corresponds to flows of KP type). Analyzing the global properties of the Bergman and Baker-Akhiezer tau-functions, we establish relationships between various divisor classes on the moduli spaces.

  6. Signature of phase singularities in diffusive regimes in disordered waveguide lattices: interplay and qualitative analysis.

    PubMed

    Ghosh, Somnath

    2018-05-10

    Coexistence and interplay between mesoscopic light dynamics and singular optics in spatially disordered waveguide lattices are reported. Two CW light beams of 1.55 μm operating wavelength are launched as inputs to 1D waveguide lattices with controllable weak disorder in a complex refractive index profile. Direct observation of phase singularities in the speckle pattern along the length is numerically demonstrated. The onset of such singular behavior and its interplay with diffusive wave propagation are analyzed quantitatively for the first time, to the best of our knowledge.

  7. Spontaneous generation of singularities in paraxial optical fields.

    PubMed

    Aiello, Andrea

    2016-04-01

    In nonrelativistic quantum mechanics, the spontaneous generation of singularities in smooth and finite wave functions is a well understood phenomenon also occurring for free particles. We use the familiar analogy between the two-dimensional Schrödinger equation and the optical paraxial wave equation to define a new class of square-integrable paraxial optical fields that develop a spatial singularity in the focal point of a weakly focusing thin lens. These fields are characterized by a single real parameter whose value determines the nature of the singularity. This novel field enhancement mechanism may stimulate fruitful research for diverse technological and scientific applications.

  8. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  9. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are given not in closed form but as infinite series which converge slowly for high-frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single- and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
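
The exponential convergence reported here rests on a standard fact that a small sketch can illustrate: the trapezoidal rule is spectrally accurate for smooth periodic integrands, which is precisely what a Fourier-spectral discretization of a boundary integral over a closed curve exploits. This is an illustrative demo, not the paper's scattering code.

```python
import numpy as np

def periodic_trapezoid(f, n):
    """Equal-weight trapezoidal rule for a 2*pi-periodic integrand."""
    theta = 2 * np.pi * np.arange(n) / n
    return 2 * np.pi * np.mean(f(theta))

f = lambda t: np.exp(np.cos(t))          # smooth and 2*pi-periodic
ref = periodic_trapezoid(f, 128)         # effectively exact reference value
err8 = abs(periodic_trapezoid(f, 8) - ref)
err32 = abs(periodic_trapezoid(f, 32) - ref)
# err32 is already at rounding level, many orders below err8
```

Doubling the node count squares the error, the signature of exponential (spectral) convergence, provided the kernel singularities have been removed so the integrand stays smooth.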

  10. Testing the causality of Hawkes processes with time reversal

    NASA Astrophysics Data System (ADS)

    Cordi, Marcus; Challet, Damien; Muni Toke, Ioane

    2018-03-01

    We show that univariate and symmetric multivariate Hawkes processes are only weakly causal: the true log-likelihoods of real and reversed event time vectors are almost equal, thus parameter estimation via maximum likelihood only weakly depends on the direction of the arrow of time. In ideal (synthetic) conditions, tests of goodness of parametric fit unambiguously reject backward event times, which implies that inferring kernels from time-symmetric quantities, such as the autocovariance of the event rate, only rarely produce statistically significant fits. Finally, we find that fitting financial data with many-parameter kernels may yield significant fits for both arrows of time for the same event time vector, sometimes favouring the backward time direction. This goes to show that a significant fit of Hawkes processes to real data with flexible kernels does not imply a definite arrow of time unless one tests it.
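
A minimal sketch of the comparison, assuming the common exponential-kernel parametrization λ(t) = μ + αβ Σ_{tᵢ<t} exp(-β(t - tᵢ)) with Ogata thinning for simulation; the parameter values are illustrative, not the paper's fits.

```python
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """Exact log-likelihood of a univariate Hawkes process with intensity
    lambda(t) = mu + alpha*beta*sum_{t_i < t} exp(-beta*(t - t_i))."""
    times = np.asarray(times, dtype=float)
    A, ll, prev = 0.0, -mu * T, None
    for i, t in enumerate(times):
        if i > 0:
            A = np.exp(-beta * (t - prev)) * (1.0 + A)   # recursive excitation sum
        ll += np.log(mu + alpha * beta * A)
        ll -= alpha * (1.0 - np.exp(-beta * (T - t)))    # kernel compensator
        prev = t
    return ll

def simulate_hawkes(T, mu, alpha, beta, rng):
    """Ogata thinning for the same exponential-kernel Hawkes process."""
    t, events, excite = 0.0, [], 0.0     # excite = excitation part of intensity
    while True:
        lam_bar = mu + excite            # dominates lambda until the next event
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return np.array(events)
        excite = sum(alpha * beta * np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() * lam_bar <= mu + excite:
            events.append(t)
            excite += alpha * beta       # the new event excites future intensity

rng = np.random.default_rng(42)
ev = simulate_hawkes(200.0, mu=0.5, alpha=0.4, beta=1.0, rng=rng)
ll_fwd = hawkes_loglik(ev, 200.0, 0.5, 0.4, 1.0)
ll_bwd = hawkes_loglik(200.0 - ev[::-1], 200.0, 0.5, 0.4, 1.0)
```

Evaluated at the true parameters, the forward and time-reversed log-likelihoods typically come out close, echoing the weak-causality finding above.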

  11. Analysis of the Fisher solution

    NASA Astrophysics Data System (ADS)

    Abdolrahimi, Shohreh; Shoom, Andrey A.

    2010-01-01

    We study the d-dimensional Fisher solution which represents a static, spherically symmetric, asymptotically flat spacetime with a massless scalar field. The solution has two parameters, the mass M and the “scalar charge” Σ. The Fisher solution has a naked curvature singularity which divides the spacetime manifold into two disconnected parts. The part which is asymptotically flat we call the Fisher spacetime, and the other part we call the Fisher universe. The d-dimensional Schwarzschild-Tangherlini solution and the Fisher solution belong to the same theory and are dual to each other. The duality transformation acting in the parameter space (M,Σ) maps the exterior region of the Schwarzschild-Tangherlini black hole into the Fisher spacetime, which has a naked timelike singularity, and the interior region of the black hole into the Fisher universe, which is an anisotropic expanding-contracting universe with two spacelike singularities representing its “big bang” and “big crunch.” The big bang singularity and the singularity of the Fisher spacetime are radially weak in the sense that a 1-dimensional object moving along a timelike radial geodesic can arrive at the singularities intact. In the vicinity of the singularity the Fisher spacetime of nonzero mass has a region where its Misner-Sharp energy is negative. The Fisher universe has a marginally trapped surface corresponding to the state of its maximal expansion in the angular directions. These results and derived relations between geometric quantities of the Fisher spacetime, the Fisher universe, and the Schwarzschild-Tangherlini black hole may suggest that the massless scalar field transforms the black hole event horizon into the naked radially weak disjoint singularities of the Fisher spacetime and the Fisher universe which are “dual to the horizon.”

  12. Analysis of the Fisher solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdolrahimi, Shohreh; Shoom, Andrey A.

    2010-01-15

    We study the d-dimensional Fisher solution which represents a static, spherically symmetric, asymptotically flat spacetime with a massless scalar field. The solution has two parameters, the mass M and the “scalar charge” Σ. The Fisher solution has a naked curvature singularity which divides the spacetime manifold into two disconnected parts. The part which is asymptotically flat we call the Fisher spacetime, and the other part we call the Fisher universe. The d-dimensional Schwarzschild-Tangherlini solution and the Fisher solution belong to the same theory and are dual to each other. The duality transformation acting in the parameter space (M,Σ) maps the exterior region of the Schwarzschild-Tangherlini black hole into the Fisher spacetime, which has a naked timelike singularity, and the interior region of the black hole into the Fisher universe, which is an anisotropic expanding-contracting universe with two spacelike singularities representing its “big bang” and “big crunch.” The big bang singularity and the singularity of the Fisher spacetime are radially weak in the sense that a 1-dimensional object moving along a timelike radial geodesic can arrive at the singularities intact. In the vicinity of the singularity the Fisher spacetime of nonzero mass has a region where its Misner-Sharp energy is negative. The Fisher universe has a marginally trapped surface corresponding to the state of its maximal expansion in the angular directions. These results and derived relations between geometric quantities of the Fisher spacetime, the Fisher universe, and the Schwarzschild-Tangherlini black hole may suggest that the massless scalar field transforms the black hole event horizon into the naked radially weak disjoint singularities of the Fisher spacetime and the Fisher universe which are “dual to the horizon.”

  13. Elasticity solutions for a class of composite laminate problems with stress singularities

    NASA Technical Reports Server (NTRS)

    Wang, S. S.

    1983-01-01

    A study on the fundamental mechanics of fiber-reinforced composite laminates with stress singularities is presented. Based on the theory of anisotropic elasticity and Lekhnitskii's complex-variable stress potentials, a system of coupled governing partial differential equations is established. An eigenfunction expansion method is introduced to determine the orders of stress singularities in composite laminates with various geometric configurations and material systems. Complete elasticity solutions are obtained for this class of singular composite laminate mechanics problems. Homogeneous solutions in eigenfunction series and particular solutions in polynomials are presented for several cases of interest. Three examples are given to illustrate the method of approach and the basic nature of the singular laminate elasticity solutions. The first problem is the well-known laminate free-edge stress problem, which has a rather weak stress singularity. The second problem is the important composite delamination problem, which has a strong crack-tip stress singularity. The third problem is the commonly encountered bonded composite joint, which has a complex solution structure with moderate orders of stress singularities.

  14. Multidisciplinary Research Program in Atmospheric Science. [remote sensing

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1982-01-01

    A theoretical analysis of the vertical resolving power of the High resolution Infrared Radiation Sounder (HIRS) and the Advanced Meteorological Temperature Sounder (AMTS) is carried out. The infrared transmittance weighting functions and associated radiative transfer kernels are analyzed through singular value decomposition. The AMTS was found to contain several more pieces of independent information than HIRS when the transmittances were considered, but the two instruments appeared much more similar when the temperature-sensitive radiative transfer kernels were analyzed. The HIRS and AMTS instruments were also subjected to a thorough analysis. It was found that the two instruments should have very similar vertical resolving power below 500 mb but that the AMTS should have superior resolving power above 200 mb. In the layer from 200 to 500 mb the AMTS showed a badly degraded spread function.
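
The information-content analysis described above amounts to a singular value decomposition of the matrix of discretized weighting functions. A toy sketch, with hypothetical broad Gaussian weighting functions standing in for the HIRS/AMTS kernels:

```python
import numpy as np

# Hypothetical weighting functions for 15 channels: broad Gaussians in a
# normalized vertical coordinate (illustrative stand-ins for real kernels).
x = np.linspace(0.0, 1.0, 200)            # vertical coordinate
centers = np.linspace(0.05, 0.95, 15)     # channel peak levels
K = np.exp(-((x[None, :] - centers[:, None]) / 0.3) ** 2)

s = np.linalg.svd(K, compute_uv=False)
pieces = int(np.sum(s > 1e-3 * s[0]))     # "independent pieces of information"
```

Because the weighting functions overlap heavily, the singular values decay rapidly and the effective number of independent pieces of information is far smaller than the number of channels.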

  15. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
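
The submatrix-SVD feature extraction step can be sketched as follows. VMD itself is omitted here; the mode matrix is assumed to be given, and the synthetic mode below is illustrative.

```python
import numpy as np

def submatrix_singular_features(modes, rows):
    """Partition each mode-component signal into a (rows x cols) submatrix and
    keep its singular values as the local feature vector for that mode."""
    feats = []
    for m in modes:
        sub = m.reshape(rows, -1)                     # one submatrix per mode
        feats.append(np.linalg.svd(sub, compute_uv=False))
    return np.array(feats)                            # singular value vector matrix

# A strictly periodic mode yields a rank-1 submatrix: one dominant singular value.
mode = np.tile(np.sin(2 * np.pi * np.arange(8) / 8), 4)
F = submatrix_singular_features(np.array([mode]), rows=4)
```

Departures from periodicity (fault-induced modulation) spread energy into the trailing singular values, which is what makes the singular value vectors discriminative inputs for the CNN.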

  16. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671

  17. Potential Singularity for a Family of Models of the Axisymmetric Incompressible Flow

    NASA Astrophysics Data System (ADS)

    Hou, Thomas Y.; Jin, Tianling; Liu, Pengfei

    2017-03-01

    We study a family of 3D models for the incompressible axisymmetric Euler and Navier-Stokes equations. The models are derived by changing the strength of the convection terms in the equations written using a set of transformed variables. The models share several regularity results with the Euler and Navier-Stokes equations, including an energy identity, the conservation of a modified circulation quantity, the BKM criterion and the Prodi-Serrin criterion. The inviscid models with weak convection are numerically observed to develop a stable self-similar singularity, with the singular region traveling along the axis of symmetry; this singularity scenario does not seem to persist for strong convection.

  18. Rare-Region-Induced Avoided Quantum Criticality in Disordered Three-Dimensional Dirac and Weyl Semimetals

    NASA Astrophysics Data System (ADS)

    Pixley, J. H.; Huse, David A.; Das Sarma, S.

    2016-04-01

    We numerically study the effect of short-ranged potential disorder on massless noninteracting three-dimensional Dirac and Weyl fermions, with a focus on the question of the proposed (and extensively theoretically studied) quantum critical point separating semimetal and diffusive-metal phases. We determine the properties of the eigenstates of the disordered Dirac Hamiltonian (H) and exactly calculate the density of states (DOS) near zero energy, using a combination of Lanczos on H² and the kernel polynomial method on H. We establish the existence of two distinct types of low-energy eigenstates contributing to the disordered density of states in the weak-disorder semimetal regime. These are (i) typical eigenstates that are well described by linearly dispersing perturbatively dressed Dirac states and (ii) nonperturbative rare eigenstates that are weakly dispersive and quasilocalized in the real-space regions with the largest (and rarest) local random potential. Using twisted boundary conditions, we are able to systematically find and study these two (essentially independent) types of eigenstates. We find that the Dirac states contribute low-energy peaks in the finite-size DOS that arise from the clean eigenstates which shift and broaden in the presence of disorder. On the other hand, we establish that the rare quasilocalized eigenstates contribute a nonzero background DOS which is only weakly energy dependent near zero energy and is exponentially small at weak disorder. We also find that the expected semimetal to diffusive-metal quantum critical point is converted to an avoided quantum criticality that is "rounded out" by nonperturbative effects, with no signs of any singular behavior in the DOS at the energy of the clean Dirac point. However, the crossover effects of the avoided (or hidden) criticality manifest themselves in a so-called quantum critical fan region away from the Dirac energy.
We discuss the implications of our results for disordered Dirac and Weyl semimetals, and reconcile the large body of existing numerical work showing quantum criticality with the existence of these nonperturbative effects.

  19. Evaluation of Optimal Formulas for Gravitational Tensors up to Gravitational Curvatures of a Tesseroid

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-Le; Shen, Wen-Bin

    2018-01-01

    The forward modeling of the topographic effects of the gravitational parameters in the gravity field is a fundamental topic in geodesy and geophysics. Since the gravitational effects, including for instance the gravitational potential (GP), the gravity vector (GV) and the gravity gradient tensor (GGT), of the topographic (or isostatic) mass reduction have been expanded by adding the gravitational curvatures (GC) in geoscience, it is crucial to find efficient numerical approaches to evaluate these effects. In this paper, the GC formulas of a tesseroid in Cartesian integral kernels are derived in 3D/2D forms. Three generally used numerical approaches for computing the topographic effects (e.g., GP, GV, GGT, GC) of a tesseroid are studied, including the Taylor Series Expansion (TSE), Gauss-Legendre Quadrature (GLQ) and Newton-Cotes Quadrature (NCQ) approaches. Numerical investigations show that the GC formulas in Cartesian integral kernels are more efficient if compared to the previously given GC formulas in spherical integral kernels: by exploiting the 3D TSE second-order formulas, the computational burden associated with the former is 46%, as an average, of that associated with the latter. The GLQ behaves better than the 3D/2D TSE and NCQ in terms of accuracy and computational time. In addition, the effects of a spherical shell's thickness and large-scale geocentric distance on the GP, GV, GGT and GC functionals have been studied with the 3D TSE second-order formulas as well. The relative approximation errors of the GC functionals are larger with the thicker spherical shell, which are the same as those of the GP, GV and GGT. Finally, the very-near-area problem and polar singularity problem have been considered by the numerical methods of the 3D TSE, GLQ and NCQ. The relative approximation errors of the GC components are larger than those of the GP, GV and GGT, especially at the very near area. 
Compared to the GC formulas in spherical integral kernels, these new GC formulas can avoid the polar singularity problem.
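
A minimal sketch of the GLQ evaluation of a tesseroid's gravitational potential in spherical coordinates (with G·ρ = 1 and an illustrative geometry; this is not the paper's optimized formulas):

```python
import numpy as np

def tesseroid_potential_glq(r1, r2, p1, p2, l1, l2, rP, pP, lP, n=5):
    """Gravitational potential of a tesseroid (G*rho = 1) by tensorized
    Gauss-Legendre quadrature in geocentric distance r', latitude phi',
    and longitude lam' (angles in radians)."""
    u, w = np.polynomial.legendre.leggauss(n)
    def nodes(a, b):  # map [-1, 1] nodes and weights to [a, b]
        return 0.5 * (b - a) * u + 0.5 * (a + b), 0.5 * (b - a) * w
    r, wr = nodes(r1, r2)
    p, wp = nodes(p1, p2)
    l, wl = nodes(l1, l2)
    R, P, L = np.meshgrid(r, p, l, indexing="ij")
    cospsi = np.sin(pP) * np.sin(P) + np.cos(pP) * np.cos(P) * np.cos(lP - L)
    dist = np.sqrt(rP**2 + R**2 - 2.0 * rP * R * cospsi)   # source-point distance
    integrand = R**2 * np.cos(P) / dist                    # 1/l kernel, GP case
    W = wr[:, None, None] * wp[None, :, None] * wl[None, None, :]
    return np.sum(W * integrand)

# Far-field check: a small tesseroid seen from distance ~10 behaves like a
# point mass m/d, where m equals its volume here because G*rho = 1.
V = tesseroid_potential_glq(0.9, 1.0, 0.0, 0.1, 0.0, 0.1, 10.0, 0.05, 0.05)
m = (1.0**3 - 0.9**3) / 3.0 * (np.sin(0.1) - np.sin(0.0)) * 0.1
```

The same tensorized node structure extends to the GV, GGT and GC integrands; as the text notes, accuracy degrades in the very near zone, where the 1/ℓ-type kernels become near-singular.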

  20. A fast and well-conditioned spectral method for singular integral equations

    NASA Astrophysics Data System (ADS)

    Slevinsky, Richard Mikael; Olver, Sheehan

    2017-03-01

    We develop a spectral method for solving univariate singular integral equations over unions of intervals by utilizing Chebyshev and ultraspherical polynomials to reformulate the equations as almost-banded infinite-dimensional systems. This is accomplished by utilizing low rank approximations for sparse representations of the bivariate kernels. The resulting system can be solved in O(m²n) operations using an adaptive QR factorization, where m is the bandwidth and n is the optimal number of unknowns needed to resolve the true solution. The complexity is reduced to O(mn) operations by pre-caching the QR factorization when the same operator is used for multiple right-hand sides. Stability is proved by showing that the resulting linear operator can be diagonally preconditioned to be a compact perturbation of the identity. Applications considered include the Faraday cage, and acoustic scattering for the Helmholtz and gravity Helmholtz equations, including spectrally accurate numerical evaluation of the far- and near-field solution. The Julia software package SingularIntegralEquations.jl implements our method with a convenient, user-friendly interface.
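
The low-rank property that the solver exploits is easy to see numerically: sampling a smooth bivariate kernel and truncating its SVD gives near machine-precision accuracy at small rank. The kernel below is illustrative, not one from the paper.

```python
import numpy as np

x = np.cos(np.pi * np.arange(64) / 63)        # Chebyshev-type sample points
K = np.cos(np.outer(x, x))                    # smooth kernel K(x, y) = cos(x*y)

U, s, Vt = np.linalg.svd(K)
rank = int(np.sum(s > 1e-10 * s[0]))          # numerical rank at tolerance 1e-10
Kr = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]  # truncated low-rank expansion
err = np.linalg.norm(K - Kr, 2)               # spectral-norm truncation error
```

A 64 × 64 sample of this kernel compresses to a handful of rank-1 terms, which is why the operator discretization stays almost banded.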

  1. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind are systematized. Three types of such algorithms are distinguished: projection, mesh, and projection-mesh methods. The possibility of using these algorithms to solve practically important problems is investigated in detail. A disadvantage of the mesh algorithms is identified: they require the values of the kernel of the integral equation at fixed points, but in practice these kernels have integrable singularities, so such values cannot be computed. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
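
A minimal example of a randomized collision estimator for a Fredholm equation of the second kind, using a constant (nonsingular) toy kernel chosen so that the exact solution is known:

```python
import numpy as np

def mc_fredholm(x0, f, lam, n_walks, rng):
    """Randomized collision estimator for phi(x) = f(x) + lam * int_0^1 phi(y) dy,
    i.e. a second-kind Fredholm equation with the constant kernel k(x, y) = lam.
    Each walk scores f at every visited point; the uniform transition density
    with continuation probability lam makes the importance weight equal to 1."""
    total = 0.0
    for _ in range(n_walks):
        x, score = x0, 0.0
        while True:
            score += f(x)
            if rng.uniform() >= lam:      # absorb with probability 1 - lam
                break
            x = rng.uniform()             # jump with the uniform transition density
        total += score
    return total / n_walks

rng = np.random.default_rng(0)
est = mc_fredholm(0.3, lambda x: x, lam=0.5, n_walks=20000, rng=rng)
# analytic solution: phi(x) = x + 0.5 * int_0^1 phi(y) dy = x + 0.5, so phi(0.3) = 0.8
```

For a singular kernel, exactly this pointwise evaluation of k becomes the obstacle the abstract describes, which is why the projection and projection-mesh variants are preferred there.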

  2. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  3. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  4. On the Formulation of Weakly Singular Displacement/Traction Integral Equations; and Their Solution by the MLPG Method

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.; Shen, Shengping

    2002-01-01

    In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A Moving Least Squares (MLS) interpolation is selected to approximate the trial functions in this paper. Five boundary integral solution methods are introduced: the direct solution method; the displacement boundary-value problem; the traction boundary-value problem; the mixed boundary-value problem; and the boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.
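
A 1D sketch of the MLS approximation used for the trial functions, with a linear basis and a Gaussian weight; the node layout and weight width are illustrative choices, not the paper's.

```python
import numpy as np

def mls_fit(x_nodes, f_nodes, x_eval, h=0.3):
    """Moving Least Squares approximation with linear basis p = (1, x) and a
    Gaussian weight centered at each evaluation point (1D toy version)."""
    out = np.empty_like(x_eval)
    P = np.column_stack([np.ones_like(x_nodes), x_nodes])   # basis at the nodes
    for j, xe in enumerate(x_eval):
        w = np.exp(-((x_nodes - xe) / h) ** 2)              # moving weights
        A = P.T @ (w[:, None] * P)                          # weighted moment matrix
        b = P.T @ (w * f_nodes)
        coeff = np.linalg.solve(A, b)
        out[j] = coeff[0] + coeff[1] * xe                   # p(x_eval) . coeff
    return out

# MLS with a linear basis reproduces linear fields exactly, for any weight.
xn = np.linspace(0.0, 1.0, 11)
xe = np.linspace(0.05, 0.95, 7)
approx = mls_fit(xn, 2.0 * xn + 1.0, xe)
```

This exact reproduction of the polynomial basis (consistency) is the property that makes MLS suitable for approximating trial functions in meshless weak forms.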

  5. Cosmology of the closed string tachyon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanson, Ian

    2008-09-15

    The spacetime physics of bulk closed string tachyon condensation is studied at the level of a two-derivative effective action. We derive the unique perturbative tachyon potential consistent with a full class of linearized tachyonic deformations of supercritical string theory. The solutions of interest deform a general linear dilaton background by the insertion of purely exponential tachyon vertex operators. In spacetime, the evolution of the tachyon drives an accelerated contraction of the universe and, absent higher-order corrections, the theory collapses to a cosmological singularity in finite time, at arbitrarily weak string coupling. When the tachyon exhibits a null symmetry, the worldsheet dynamics is known to be exact and well defined at tree level. We prove that if the two-derivative effective action is free of nongravitational singularities, higher-order corrections always resolve the spacetime curvature singularity of the null tachyon. The resulting theory provides an explicit mechanism by which tachyon condensation can generate or terminate the flow of cosmological time in string theory. Additional particular solutions can resolve an initial singularity with a tachyonic phase at weak coupling, or yield solitonic configurations that localize the universe along spatial directions.

  6. Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Khayat, M. A.; Wilton, D. R.

    2005-01-01

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of hIgher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of the purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the sin,oularity cancellation approach is the Duffy method, which has two major drawbacks: 1) In the resulting integrand, it produces an angular variation about the singular point that becomes nearly-singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly-singular integrals. 
    Recently, the authors have introduced the transformation u(x′) = sinh⁻¹( x′ / √(y′² + z²) ) for integrating functions of the form I = ∫ Λ(r′) e^(−jkR)/(4πR) dD, where Λ(r′) is a vector or scalar basis function and R = √(x′² + y′² + z²) is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with 1/R² type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
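
    The benefit of such a singularity-cancellation substitution can be seen in a one-dimensional toy problem (a sketch only, not the authors' actual 2-D scheme): integrating the static 1/R kernel over a unit segment with the observation point a small distance z away. Substituting x′ = z·sinh(u) makes the Jacobian cancel 1/R exactly, so a low-order Gauss rule becomes essentially exact where the untransformed rule fails badly.

```python
import numpy as np

def naive_quad(f, a, b, n):
    # Plain Gauss-Legendre on [a, b], no change of variables
    x, w = np.polynomial.legendre.leggauss(n)
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)
    return xr * np.sum(w * f(xm + xr * x))

def sinh_quad(f, a, b, z, n):
    # Singularity cancellation: substitute x = z*sinh(u), so that
    # dx / sqrt(x^2 + z^2) -> du and the near-singularity at x = 0 vanishes.
    ua, ub = np.arcsinh(a / z), np.arcsinh(b / z)
    u, w = np.polynomial.legendre.leggauss(n)
    um, ur = 0.5 * (ua + ub), 0.5 * (ub - ua)
    uu = um + ur * u
    x = z * np.sinh(uu)
    jac = z * np.cosh(uu)            # dx/du, cancels the 1/R factor exactly
    return ur * np.sum(w * f(x) * jac)

z = 1e-4                             # observation point very close to the panel
f = lambda x: 1.0 / np.sqrt(x**2 + z**2)   # near-singular 1/R kernel
exact = 2.0 * np.arcsinh(1.0 / z)

print(abs(naive_quad(f, -1, 1, 16) - exact))   # large error: sharp peak unresolved
print(abs(sinh_quad(f, -1, 1, z, 16) - exact)) # near machine precision
```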

  7. Blood flow problem in the presence of magnetic particles through a circular cylinder using Caputo-Fabrizio fractional derivative

    NASA Astrophysics Data System (ADS)

    Uddin, Salah; Mohamad, Mahathir; Khalid, Kamil; Abdulhammed, Mohammed; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini

    2018-04-01

    In this paper, the flow of blood mixed with magnetic particles subjected to a uniform transverse magnetic field and a pressure gradient in an axisymmetric circular cylinder is studied using a recently introduced fractional derivative without a singular kernel. The governing equations are fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivative (NFDt). The current results agree well with those obtained using the classical Caputo fractional derivative (UFDt).
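
    For reference, the Caputo-Fabrizio operator replaces the singular power-law kernel of the classical Caputo derivative with a smooth exponential. A minimal numerical sketch (assuming the common normalization M(α) = 1; this is not the paper's solution scheme) can be checked against the closed form for f(t) = t:

```python
import numpy as np

def caputo_fabrizio(f, t, alpha, n=4000):
    # Caputo-Fabrizio derivative with a non-singular exponential kernel:
    #   D^a f(t) = M(a)/(1 - a) * int_0^t f'(s) exp(-a (t - s)/(1 - a)) ds
    # evaluated with the trapezoidal rule; M(a) = 1 is assumed.
    s = np.linspace(0.0, t, n)
    h = s[1] - s[0]
    fp = np.gradient(f(s), h)                        # numerical f'(s)
    g = fp * np.exp(-alpha * (t - s) / (1.0 - alpha))
    integral = h * (np.sum(g) - 0.5 * (g[0] + g[-1]))
    return integral / (1.0 - alpha)

alpha, t = 0.7, 2.0
num = caputo_fabrizio(lambda s: s, t, alpha)
# For f(t) = t, the kernel integral is elementary:
exact = (1.0 - np.exp(-alpha * t / (1.0 - alpha))) / alpha
print(num, exact)
```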

  8. Timelike naked singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goswami, Rituparno; Joshi, Pankaj S.; Vaz, Cenalo

    We construct a class of spherically symmetric collapse models in which a naked singularity may develop as the end state of collapse. The matter distribution considered has negative radial and tangential pressures, but the weak energy condition is obeyed throughout. The singularity forms at the center of the collapsing cloud and continues to be visible for a finite time. The duration of visibility depends on the nature of energy distribution. Hence the causal structure of the resulting singularity depends on the nature of the mass function chosen for the cloud. We present a general model in which the naked singularity formed is timelike, neither pointlike nor null. Our work represents a step toward clarifying the necessary conditions for the validity of the Cosmic Censorship Conjecture.

  9. Weak cosmic censorship: as strong as ever.

    PubMed

    Hod, Shahar

    2008-03-28

    Spacetime singularities that arise in gravitational collapse are always hidden inside of black holes. This is the essence of the weak cosmic censorship conjecture. The hypothesis, put forward by Penrose 40 years ago, is still one of the most important open questions in general relativity. In this Letter, we reanalyze extreme situations which have been considered as counterexamples to the weak cosmic censorship conjecture. In particular, we consider the absorption of scalar particles with large angular momentum by a black hole. Ignoring back reaction effects may lead one to conclude that the incident wave may overspin the black hole, thereby exposing its inner singularity to distant observers. However, we show that when back reaction effects are properly taken into account, the stability of the black-hole event horizon is irrefutable. We therefore conclude that cosmic censorship is actually respected in this type of gedanken experiments.

  10. Elliptic polylogarithms and iterated integrals on elliptic curves. Part I: general formalism

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo

    2018-05-01

    We introduce a class of iterated integrals, defined through a set of linearly independent integration kernels on elliptic curves. As a direct generalisation of multiple polylogarithms, we construct our set of integration kernels ensuring that they have at most simple poles, implying that the iterated integrals have at most logarithmic singularities. We study the properties of our iterated integrals and their relationship to the multiple elliptic polylogarithms from the mathematics literature. On the one hand, we find that our iterated integrals span essentially the same space of functions as the multiple elliptic polylogarithms. On the other hand, our formulation allows for a more direct use to solve a large variety of problems in high-energy physics. We demonstrate the use of our functions in the evaluation of the Laurent expansion of some hypergeometric functions for values of the indices close to half integers.

  11. The crack problem in a reinforced cylindrical shell

    NASA Technical Reports Server (NTRS)

    Yahsi, O. S.; Erdogan, F.

    1986-01-01

    In this paper a partially reinforced cylinder containing an axial through crack is considered. The reinforcement is assumed to be fully bonded to the main cylinder. The composite cylinder is thus modelled by a nonhomogeneous shell having a step change in the elastic properties at the z=0 plane, z being the axial coordinate. Using a Reissner type transverse shear theory the problem is reduced to a pair of singular integral equations. In the special case of a crack tip touching the bimaterial interface it is shown that the dominant parts of the kernels of the integral equations associated with both membrane loading and bending of the shell reduce to the generalized Cauchy kernel obtained for the corresponding plane stress case. The integral equations are solved and the stress intensity factors are given for various crack and shell dimensions. A bonded fiberglass reinforcement which may serve as a crack arrestor is used as an example.

  12. The crack problem in a reinforced cylindrical shell

    NASA Technical Reports Server (NTRS)

    Yahsi, O. S.; Erdogan, F.

    1986-01-01

    A partially reinforced cylinder containing an axial through crack is considered. The reinforcement is assumed to be fully bonded to the main cylinder. The composite cylinder is thus modelled by a nonhomogeneous shell having a step change in the elastic properties at the z = 0 plane, z being the axial coordinate. Using a Reissner type transverse shear theory the problem is reduced to a pair of singular integral equations. In the special case of a crack tip touching the bimaterial interface it is shown that the dominant parts of the kernels of the integral equations associated with both membrane loading and bending of the shell reduce to the generalized Cauchy kernel obtained for the corresponding plane stress case. The integral equations are solved and the stress intensity factors are given for various crack and shell dimensions. A bonded fiberglass reinforcement which may serve as a crack arrestor is used as an example.

  13. Utilizing the Structure and Content Information for XML Document Clustering

    NASA Astrophysics Data System (ADS)

    Tran, Tien; Kutty, Sangeetha; Nayak, Richi

    This paper reports on the experiments and results of a clustering approach used in the INEX 2008 document mining challenge. The clustering approach utilizes both the structure and content information of the Wikipedia XML document collection. A latent semantic kernel (LSK) is used to measure the semantic similarity between XML documents based on their content features. The construction of a latent semantic kernel involves computing a singular value decomposition (SVD). On a large feature space matrix, the computation of SVD is very expensive in terms of time and memory requirements. Thus in this clustering approach, the dimension of the document space of a term-document matrix is reduced before performing SVD. The document space reduction is based on the common structural information of the Wikipedia XML document collection. The proposed clustering approach has been shown to be effective on the Wikipedia collection in the INEX 2008 document mining challenge.
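
    The kernel construction can be illustrated with a small numpy sketch (toy term-document matrix with hypothetical values; the paper additionally reduces the document space using structural information before the SVD):

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents)
A = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 2., 0., 1.],
              [0., 0., 3., 2.]])

k = 2                                   # latent dimension
U, S, Vt = np.linalg.svd(A, full_matrices=False)
P = U[:, :k]                            # projection onto latent term space

def lsk(d1, d2):
    # Latent semantic kernel: cosine similarity of document vectors
    # after projection into the k-dimensional latent space.
    x, y = P.T @ d1, P.T @ d2
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

print(lsk(A[:, 0], A[:, 1]))            # latent similarity of documents 1 and 2
```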

  14. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy to apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of the Volterra series analysis in a severely nonlinear system case.
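
    The paper's algorithm is a complex-valued orthogonal least squares scheme in the frequency domain; as a much simpler time-domain illustration of the underlying idea (all kernels and signals here are hypothetical), a truncated second-order Volterra model can be identified by ordinary least squares from noiseless input-output data:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 3                                    # memory length
h1 = np.array([1.0, 0.5, -0.2])          # true linear kernel
h2 = np.array([[0.3, 0.1, 0.0],          # true symmetric quadratic kernel
               [0.1, 0.0, 0.0],
               [0.0, 0.0, 0.05]])

N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for n in range(M - 1, N):
    past = u[n - M + 1:n + 1][::-1]      # u(n), u(n-1), ..., u(n-M+1)
    y[n] = h1 @ past + past @ h2 @ past

# Regression matrix of all first- and second-order input products
rows, targets = [], []
for n in range(M - 1, N):
    past = u[n - M + 1:n + 1][::-1]
    quad = np.outer(past, past)[np.triu_indices(M)]  # i <= j products
    rows.append(np.concatenate([past, quad]))
    targets.append(y[n])
theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)

h1_est = theta[:M]                       # off-diagonal quad terms absorb 2*h2[i,j]
print(np.max(np.abs(h1_est - h1)))       # linear kernel recovered exactly
```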

  15. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model.

    PubMed

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-28

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of slow bath, weak system-bath coupling, and low temperature. Effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.

  16. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-01

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of slow bath, weak system-bath coupling, and low temperature. Effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.

  17. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
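
    The idea of building a weak singularity into the quadrature weight can be sketched with an off-the-shelf Gauss-Jacobi rule (a generic illustration, not the paper's multi-domain construction): the rule integrates f(x) against the weight (1−x)^(−1/2) on [−1, 1] exactly for polynomial f.

```python
import numpy as np
from scipy.special import roots_jacobi

# Gauss-Jacobi nodes/weights for the weight (1-x)^alpha (1+x)^beta;
# here the weight carries the weak singularity (1-x)^(-1/2) at x = 1.
alpha, beta = -0.5, 0.0
x, w = roots_jacobi(8, alpha, beta)

f = lambda x: x**2
approx = np.sum(w * f(x))
exact = 14.0 * np.sqrt(2.0) / 15.0   # closed form of int_{-1}^{1} (1-x)^(-1/2) x^2 dx
print(abs(approx - exact))           # essentially machine precision
```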

  18. Non-homogeneous harmonic analysis: 16 years of development

    NASA Astrophysics Data System (ADS)

    Volberg, A. L.; Èiderman, V. Ya

    2013-12-01

    This survey contains results and methods in the theory of singular integrals, a theory which has been developing dramatically in the last 15-20 years. The central (although not the only) topic of the paper is the connection between the analytic properties of integrals and operators with Calderón-Zygmund kernels and the geometric properties of the measures. The history is traced of the classical Painlevé problem of describing removable singularities of bounded analytic functions, which has provided a strong incentive for the development of this branch of harmonic analysis. The progress of recent decades has largely been based on the creation of an apparatus for dealing with non-homogeneous measures, and much attention is devoted to this apparatus here. Several open questions are stated, first and foremost in the multidimensional case, where the method of curvature of a measure is not available. Bibliography: 128 titles.

  19. Bread Wheat Quality: Some Physical, Chemical and Rheological Characteristics of Syrian and English Bread Wheat Samples.

    PubMed

    Al-Saleh, Abboud; Brennan, Charles S

    2012-11-22

    The relationships between breadmaking quality, kernel properties (physical and chemical), and dough rheology were investigated using flours from six genotypes of Syrian wheat lines, comprising both commercially grown cultivars and advanced breeding lines. Genotypes were grown in the 2008/2009 season in irrigated plots in the Eastern part of Syria. Grain samples were evaluated for vitreousness, test weight, 1000-kernel weight and then milled and tested for protein content, ash, and water content. Dough rheology of the samples was studied by the determination of the mixing time, stability, weakness, resistance and the extensibility of the dough. Loaf baking quality was evaluated by the measurement of the specific weight, resilience and firmness in addition to the sensory analysis. A comparative study between the six Syrian wheat genotypes and two English flour samples was conducted. Significant differences were observed among Syrian genotypes in vitreousness (69.3%-95.0%), 1000-kernel weight (35.2-46.9 g) and the test weight (82.2-88.0 kg/hL). All samples exhibited high falling numbers (346 to 417 s for the Syrian samples and 285 and 305 s for the English flours). A significant positive correlation was exhibited between the protein content of the flour and its absorption of water (r = 0.84 **), as well as with the vitreousness of the kernel (r = 0.54 *). Protein content was also correlated with dough stability (r = 0.86 **), extensibility (r = 0.8 **), and negatively correlated with dough weakness (r = -0.69 **). Bread firmness and dough weakness were positively correlated (r = 0.66 **). Sensory analysis indicated Doumah-2 was the best appreciated, whilst Doumah 40765 and 46055 were the least appreciated, which may suggest their suitability for biscuit preparation rather than bread making.

  20. Bread Wheat Quality: Some Physical, Chemical and Rheological Characteristics of Syrian and English Bread Wheat Samples

    PubMed Central

    Al-Saleh, Abboud; Brennan, Charles S.

    2012-01-01

    The relationships between breadmaking quality, kernel properties (physical and chemical), and dough rheology were investigated using flours from six genotypes of Syrian wheat lines, comprising both commercially grown cultivars and advanced breeding lines. Genotypes were grown in the 2008/2009 season in irrigated plots in the Eastern part of Syria. Grain samples were evaluated for vitreousness, test weight, 1000-kernel weight and then milled and tested for protein content, ash, and water content. Dough rheology of the samples was studied by the determination of the mixing time, stability, weakness, resistance and the extensibility of the dough. Loaf baking quality was evaluated by the measurement of the specific weight, resilience and firmness in addition to the sensory analysis. A comparative study between the six Syrian wheat genotypes and two English flour samples was conducted. Significant differences were observed among Syrian genotypes in vitreousness (69.3%–95.0%), 1000-kernel weight (35.2–46.9 g) and the test weight (82.2–88.0 kg/hL). All samples exhibited high falling numbers (346 to 417 s for the Syrian samples and 285 and 305 s for the English flours). A significant positive correlation was exhibited between the protein content of the flour and its absorption of water (r = 0.84 **), as well as with the vitreousness of the kernel (r = 0.54 *). Protein content was also correlated with dough stability (r = 0.86 **), extensibility (r = 0.8 **), and negatively correlated with dough weakness (r = −0.69 **). Bread firmness and dough weakness were positively correlated (r = 0.66 **). Sensory analysis indicated Doumah-2 was the best appreciated, whilst Doumah 40765 and 46055 were the least appreciated, which may suggest their suitability for biscuit preparation rather than bread making. PMID:28239087

  1. Adaptive evolution of defense ability leads to diversification of prey species.

    PubMed

    Zu, Jian; Wang, Jinliang; Du, Jianqiang

    2014-06-01

    In this paper, by using the adaptive dynamics approach, we investigate how the adaptive evolution of defense ability promotes the diversity of prey species in an initial one-prey-two-predator community. We assume that the prey species can evolve to a safer strategy such that it can reduce the predation risk, but a prey with a high defense ability for one predator may have a low defense ability for the other and vice versa. First, by using the method of critical function analysis, we find that if the trade-off is convex in the vicinity of the evolutionarily singular strategy, then this singular strategy is a continuously stable strategy. However, if the trade-off is weakly concave near the singular strategy and the competition between the two predators is relatively weak, then the singular strategy may be an evolutionary branching point. Second, we find that after the branching has occurred in the prey strategy, if the trade-off curve is globally concave, then the prey species might eventually evolve into two specialists, each caught by only one predator species. However, if the trade-off curve is convex-concave-convex, the prey species might eventually branch into two partial specialists, each being caught by both of the two predators and they can stably coexist on the much longer evolutionary timescale.

  2. Continuations of the nonlinear Schrödinger equation beyond the singularity

    NASA Astrophysics Data System (ADS)

    Fibich, G.; Klein, M.

    2011-07-01

    We present four continuations of the critical nonlinear Schrödinger equation (NLS) beyond the singularity: (1) a sub-threshold power continuation, (2) a shrinking-hole continuation for ring-type solutions, (3) a vanishing nonlinear-damping continuation and (4) a complex Ginzburg-Landau (CGL) continuation. Using asymptotic analysis, we explicitly calculate the limiting solutions beyond the singularity. These calculations show that for generic initial data that lead to a loglog collapse, the sub-threshold power limit is a Bourgain-Wang solution, both before and after the singularity, and the vanishing nonlinear-damping and CGL limits are a loglog solution before the singularity, and have an infinite-velocity expanding core after the singularity. Our results suggest that all NLS continuations share the universal feature that after the singularity time Tc, the phase of the singular core is only determined up to multiplication by e^(iθ). As a result, interactions between post-collapse beams (filaments) become chaotic. We also show that when the continuation model leads to a point singularity and preserves the NLS invariance under the transformation t → -t and ψ → ψ*, the singular core of the weak solution is symmetric with respect to Tc. Therefore, the sub-threshold power and the shrinking-hole continuations are symmetric with respect to Tc, but continuations which are based on perturbations of the NLS equation are generically asymmetric.

  3. Shocks and finite-time singularities in Hele-Shaw flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teodorescu, Razvan; Wiegmann, P; Lee, S-y

    Hele-Shaw flow at vanishing surface tension is ill-defined. In finite time, the flow develops cusplike singularities. We show that the ill-defined problem admits a weak dispersive solution when singularities give rise to a graph of shock waves propagating in the viscous fluid. The graph of shocks grows and branches. Velocity and pressure jump across the shock. We formulate a few simple physical principles which single out the dispersive solution and interpret shocks as lines of decompressed fluid. We also formulate the dispersive solution in algebro-geometrical terms as an evolution of Krichever-Boutroux complex curve. We study in details the most generic (2,3) cusp singularity which gives rise to an elementary branching event. This solution is self-similar and expressed in terms of elliptic functions.

  4. Deforming regular black holes

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2017-06-01

    In this work, we have deformed regular black holes which possess a general mass term described by a function which generalizes the Bardeen and Hayward mass functions. By using linear constraints in the energy-momentum tensor to generate metrics, the solutions presented in this work are either regular or singular. That is, within this approach, it is possible to generate regular or singular black holes from regular or singular black holes. Moreover, contrary to the Bardeen and Hayward regular solutions, the deformed regular black holes may violate the weak energy condition despite the presence of the spherical symmetry. Some comments on accretion of deformed black holes in cosmological scenarios are made.

  5. Multiscale Analysis in the Compressible Rotating and Heat Conducting Fluids

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Sam; Maltese, David; Novotný, Antonín

    2017-06-01

    We consider the full Navier-Stokes-Fourier system under rotation in the singular regime of small Mach and Rossby, and large Reynolds and Péclet numbers, with ill prepared initial data on an infinite straight 3-D layer rotating with respect to the axis orthogonal to the layer. We perform the singular limit in the framework of weak solutions and identify the 2-D Euler-Boussinesq system as the target problem.

  6. Weak-noise limit of a piecewise-smooth stochastic differential equation.

    PubMed

    Chen, Yaming; Baule, Adrian; Touchette, Hugo; Just, Wolfram

    2013-11-01

    We investigate the validity and accuracy of weak-noise (saddle-point or instanton) approximations for piecewise-smooth stochastic differential equations (SDEs), taking as an illustrative example a piecewise-constant SDE, which serves as a simple model of Brownian motion with solid friction. For this model, we show that the weak-noise approximation of the path integral correctly reproduces the known propagator of the SDE at lowest order in the noise power, as well as the main features of the exact propagator with higher-order corrections, provided the singularity of the path integral associated with the nonsmooth SDE is treated with some heuristics. We also show that, as in the case of smooth SDEs, the deterministic paths of the noiseless system correctly describe the behavior of the nonsmooth SDE in the low-noise limit. Finally, we consider a smooth regularization of the piecewise-constant SDE and study to what extent this regularization can rectify some of the problems encountered when dealing with discontinuous drifts and singularities in SDEs.
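
    A minimal Euler-Maruyama experiment for the solid-friction model mentioned above (parameters chosen arbitrarily for illustration) shows the expected stationary behavior: the piecewise-constant drift −μ·sign(x) balances the noise to give a Laplace stationary density p(x) = (μ/2D) exp(−μ|x|/D), so the long-run average of |X| should approach D/μ.

```python
import numpy as np

# Euler-Maruyama for Brownian motion with dry (solid) friction:
#   dX = -mu * sign(X) dt + sqrt(2 D) dW
rng = np.random.default_rng(1)
mu, D = 1.0, 0.5
dt, nsteps = 1e-2, 400_000

xi = np.sqrt(2.0 * D * dt) * rng.standard_normal(nsteps)  # Wiener increments
x, acc = 0.0, 0.0
for w in xi:
    x += -mu * np.sign(x) * dt + w
    acc += abs(x)

mean_abs = acc / nsteps
print(mean_abs)    # should be close to D/mu = 0.5
```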

  7. The development of a mixing layer under the action of weak streamwise vortices

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.; Mathew, Joseph

    1993-01-01

    The action of weak, streamwise vortices on a plane, incompressible, steady mixing layer is examined in the large Reynolds-number limit. The outer, inviscid region is bounded by a vortex sheet to which the viscous region is confined. It is shown that the local linear analysis becomes invalid at streamwise distances O(ε^(-1)), where ε ≪ 1 is the cross-flow amplitude, and a new nonlinear analysis is constructed for this region. Numerical solutions of the nonlinear problem show that the vortex sheet undergoes an O(1) change in position and that the solution is ultimately terminated by the appearance of a singularity. The corresponding viscous layer shows downstream thickening, but appears to remain well behaved up to the singular location.

  8. Singular Perturbations and Time-Scale Methods in Control Theory: Survey 1976-1982.

    DTIC Science & Technology

    1982-12-01

    established in the 1960s, when they first became a means for simplified computation of optimal trajectories. It was soon recognized that singular... null-space of P(ao). The asymptotic values of the invariant zeros and associated invariant-zero directions as ε → 0 are the values computed from the... 7. WEAK COUPLING AND TIME SCALES: The need for model simplification with a reduction (or distribution) of computational effort is...

  9. A problem with inverse time for a singularly perturbed integro-differential equation with diagonal degeneration of the kernel of high order

    NASA Astrophysics Data System (ADS)

    Bobodzhanov, A. A.; Safonov, V. F.

    2016-04-01

    We consider an algorithm for constructing asymptotic solutions regularized in the sense of Lomov (see [1], [2]). We show that such problems can be reduced to integro-differential equations with inverse time. But in contrast to known papers devoted to this topic (see, for example, [3]), in this paper we study a fundamentally new case, which is characterized by the absence, in the differential part, of a linear operator that isolates, in the asymptotics of the solution, constituents described by boundary functions and by the fact that the integral operator has a kernel with diagonal degeneration of high order. Furthermore, the spectrum of the regularization operator A(t) (see below) may contain purely imaginary eigenvalues, which causes difficulties in the application of the methods of construction of asymptotic solutions proposed in the monograph [3]. Based on an analysis of the principal term of the asymptotics, we isolate a class of inhomogeneities and initial data for which the exact solution of the original problem tends to the limit solution (as \varepsilon\to+0) on the entire time interval under consideration, also including a boundary-layer zone (that is, we solve the so-called initialization problem). The paper is of a theoretical nature and is designed to lead to a greater understanding of the problems in the theory of singular perturbations. There may be applications in various applied areas where models described by integro-differential equations are used (for example, in elasticity theory, the theory of electrical circuits, and so on).

  10. Symmetry breaking and singularity structure in Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Commeford, K. A.; Garcia-March, M. A.; Ferrando, A.; Carr, Lincoln D.

    2012-08-01

    We determine the trajectories of vortex singularities that arise after a single vortex is broken by a discretely symmetric impulse in the context of Bose-Einstein condensates in a harmonic trap. The dynamics of these singularities are analyzed to determine the form of the imprinted motion. We find that the symmetry-breaking process introduces two effective forces: a repulsive harmonic force that causes the daughter trajectories to be ejected from the parent singularity and a Magnus force that introduces a torque about the axis of symmetry. For the analytical noninteracting case we find that the parent singularity is reconstructed from the daughter singularities after one period of the trapping frequency. The interactions between singularities in the weakly interacting system do not allow the parent vortex to be reconstructed. Analytic trajectories were compared to the actual minima of the wave function, showing less than 0.5% error for an impulse strength of v=0.00005. We show that these solutions are valid within the impulse regime for various impulse strengths using numerical integration of the Gross-Pitaevskii equation. We also show that the actual duration of the symmetry-breaking potential does not significantly change the dynamics of the system as long as the strength is below v=0.0005.

  11. Spherically Symmetric Gravitational Collapse of a Dust Cloud in Third-Order Lovelock Gravity

    NASA Astrophysics Data System (ADS)

    Zhou, Kang; Yang, Zhan-Ying; Zou, De-Cheng; Yue, Rui-Hong

    We investigate the spherically symmetric gravitational collapse of an incoherent dust cloud by considering an LTB-type spacetime in third-order Lovelock gravity without a cosmological constant, and give three families of LTB-like solutions corresponding respectively to the hyperbolic, parabolic and elliptic cases. Notice that the contribution of the high-order curvature corrections has a profound influence on the nature of the singularity, and the global structure of spacetime changes drastically from the analogous general relativistic case. Interestingly, the presence of the high-order Lovelock terms leads to the formation of massive, naked and timelike singularities in the 7D spacetime, which is disallowed in general relativity. Moreover, we point out that the naked singularities in the 7D case may be gravitationally weak and therefore may not be a serious threat to the cosmic censorship hypothesis, while the naked singularities in the D ≥ 8 inhomogeneous collapse violate the cosmic censorship hypothesis seriously.

  12. Band structure of an electron in a kind of periodic potentials with singularities

    NASA Astrophysics Data System (ADS)

    Hai, Kuo; Yu, Ning; Jia, Jiangping

    2018-06-01

    Noninteracting electrons in some crystals may experience periodic potentials with singularities, and the governing Schrödinger equation cannot be defined at the singular points. The band structure of a single electron in such a one-dimensional crystal has been calculated by using an equivalent integral form of the Schrödinger equation. Both perturbed and exact solutions are constructed, respectively, for the case of a general singular weak-periodic system and for its exactly solvable version, the Kronig-Penney model. Either one leads to a special band structure in terms of an energy-dependent parameter, which yields an effective correction to the previous energy-band structure and gives a new explanation for the formation of the bands. The method used and the results obtained could be a valuable aid in the study of energy bands in solid-state physics, and the new explanation may trigger investigation into different physical mechanisms of electron band structures.
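
    For the exactly solvable Kronig-Penney limit with delta-function barriers, the standard band condition is cos(ka) = cos(qa) + P·sin(qa)/(qa), with E ∝ q²; a short numerical scan (barrier strength P chosen arbitrarily, lattice constant a = 1) locates the allowed bands and shows the gaps opening at qa = nπ:

```python
import numpy as np

# Kronig-Penney band condition for delta barriers (lattice constant a = 1):
# allowed energies E ~ q^2 are those where |cos(q) + P*sin(q)/q| <= 1.
P = 3.0
q = np.linspace(1e-6, 15.0, 200_000)
rhs = np.cos(q) + P * np.sin(q) / q
allowed = np.abs(rhs) <= 1.0

# Band edges sit where the allowed/forbidden flag switches
edges = q[np.nonzero(np.diff(allowed.astype(int)))[0]]
if len(edges) % 2:                  # drop an unfinished band at the scan end
    edges = edges[:-1]
bands = edges.reshape(-1, 2)
for lo, hi in bands:
    print(f"allowed band: q in [{lo:.3f}, {hi:.3f}]")
```

    The scan reproduces the textbook structure: each band's upper edge sits at qa = nπ, with forbidden gaps in between.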

  13. A new analysis of the Fornberg-Whitham equation pertaining to a fractional derivative with Mittag-Leffler-type kernel

    NASA Astrophysics Data System (ADS)

    Kumar, Devendra; Singh, Jagdev; Baleanu, Dumitru

    2018-02-01

    The mathematical model of the breaking of non-linear dispersive water waves with memory effect is very important in mathematical physics. In the present article, we examine a novel fractional extension of the non-linear Fornberg-Whitham equation occurring in wave breaking. We consider the most recent theory of differentiation, involving the non-singular kernel based on the extended Mittag-Leffler-type function, to modify the Fornberg-Whitham equation. We examine the existence of the solution of the non-linear Fornberg-Whitham equation of fractional order. Further, we show the uniqueness of the solution. We obtain the numerical solution of the new arbitrary-order model of the non-linear Fornberg-Whitham equation with the aid of the Laplace decomposition technique. The numerical outcomes are displayed in the form of graphs and tables. The results indicate that the Laplace decomposition algorithm is a very user-friendly and reliable scheme for handling such non-linear problems of fractional order.

  14. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  15. Singular temperature dependence of the equation of state of superconductors with spin–orbit interaction in the low-temperature region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikov, Yu. N., E-mail: ovc@itp.ac.ru

    The equation of state is investigated for a thin superconducting film in a longitudinal magnetic field and with strong spin-orbit interaction at the critical point. As a first step, the state with the maximal value of the magnetic field for a given value of spin–orbit interaction at T = 0 is chosen. This state is investigated in the low-temperature region. The temperature contribution to the equation of state is weakly singular.

  16. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with either linear or nonlinear kernels. However, SVM has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator on the input feature space according to the training data. To classify data which are not linearly separable, SVM uses the kernel trick to transform the data into linearly separable data in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is proposed as the search algorithm for the optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selecting them. The best accuracy was improved from the baseline kernel results of 85.12% (linear), 81.76% (polynomial), 77.22% (RBF) and 78.70% (sigmoid). However, for larger data sizes this method is not practical because it takes a lot of time.
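
    The record gives no code, so the following is only a rough sketch of the idea of evolving kernel hyper-parameters with a genetic algorithm. To stay self-contained it tunes an RBF kernel ridge classifier on synthetic two-class data (a stand-in for SVM on the Australian Credit data; the genes are log10 of the kernel width and the regularization strength, both assumptions of this sketch).

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic two-class data (an illustrative stand-in for the Australian
# Credit Approval set used in the record)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)), rng.normal(1.5, 1.0, (100, 4))])
y = np.hstack([-np.ones(100), np.ones(100)])
idx = rng.permutation(200)
Xtr, ytr = X[idx[:140]], y[idx[:140]]
Xte, yte = X[idx[140:]], y[idx[140:]]

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def accuracy(ind):
    """Fitness: test accuracy of an RBF kernel ridge classifier whose
    hyper-parameters are decoded from the individual's log10 genes."""
    gamma, lam = 10.0 ** ind
    K = rbf(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    pred = np.sign(rbf(Xte, Xtr, gamma) @ alpha)
    return float((pred == yte).mean())

pop = rng.uniform(-3.0, 1.0, size=(12, 2))    # genes: log10(gamma), log10(lambda)
for _ in range(15):                           # generations
    scores = np.array([accuracy(p) for p in pop])
    parents = pop[np.argsort(scores)[-6:]]    # truncation selection
    pop = parents[rng.integers(0, 6, 12)] + rng.normal(0.0, 0.25, (12, 2))
    pop = np.clip(pop, -3.0, 1.0)             # mutate within bounds
    pop[0] = parents[-1]                      # elitism: keep the best
best_acc = max(accuracy(p) for p in pop)
```

    The population sizes, mutation scale and selection scheme are arbitrary choices for illustration; the point is only that fitness-driven selection replaces random kernel-parameter guessing.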

  17. Spectral and entropic characterizations of Wigner functions: applications to model vibrational systems.

    PubMed

    Luzanov, A V

    2008-09-07

    The Wigner function for pure quantum states is used as the integral kernel of a non-Hermitian operator K, to which the standard singular value decomposition (SVD) is applied. It provides a set of squared singular values treated as probabilities of the individual phase-space processes, the latter being described by eigenfunctions of KK(+) (for coordinate variables) and K(+)K (for momentum variables). Such an SVD representation is employed to obviate the well-known difficulties in defining phase-space entropy measures in terms of the Wigner function, which in general takes negative values. In particular, new measures of nonclassicality are constructed in a form that automatically satisfies additivity for systems composed of noninteracting parts. Furthermore, emphasis is given to the geometrical interpretation of the full entropy measure as the effective phase-space volume in the Wigner picture of quantum mechanics. The approach is exemplified by considering some generic vibrational systems. Specifically, for eigenstates of the harmonic oscillator and a superposition of coherent states, the singular value spectrum is evaluated analytically. Numerical computations are given for the nonlinear problems (the Morse and double well oscillators, and the Henon-Heiles system). We also discuss the difficulties in implementing a similar technique for electronic problems.
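
    As a toy check of this construction (assuming the harmonic-oscillator ground state with ħ = 1), the Wigner function exp(-x² - p²)/π separates into a product of x- and p-factors, so the sampled kernel has rank one: a single squared singular value carries essentially all the probability and the resulting entropy measure vanishes.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 200)
p = np.linspace(-5.0, 5.0, 200)
# Wigner function of the harmonic-oscillator ground state (hbar = 1),
# sampled on a grid and treated as the kernel of the operator K
K = np.exp(-x[:, None] ** 2 - p[None, :] ** 2) / np.pi
s = np.linalg.svd(K, compute_uv=False)
probs = s ** 2 / np.sum(s ** 2)       # squared singular values as probabilities
probs = probs[probs > 1e-15]          # drop numerical zeros
entropy = -np.sum(probs * np.log(probs))
```

    For superpositions of coherent states or anharmonic eigenstates the kernel is no longer rank one and several probabilities survive, giving a nonzero effective phase-space volume.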

  18. Soliton structure versus singularity analysis: Third-order completely integrable nonlinear differential equations in 1 + 1-dimensions

    NASA Astrophysics Data System (ADS)

    Fuchssteiner, Benno; Carillo, Sandra

    1989-01-01

    Bäcklund transformations between all known completely integrable third-order differential equations in (1 + 1)-dimensions are established and the corresponding transformation formulas for their hereditary operators and Hamiltonian formulations are exhibited. Some of these Bäcklund transformations are not injective; therefore additional non-commutative symmetry groups are found for some equations. These non-commutative symmetry groups are classified as having a semisimple part isomorphic to the affine algebra A(1)1. New completely integrable third-order integro-differential equations, some depending explicitly on x, are given. Connections between the singularity equations (from the Painlevé analysis) and the nonlinear equations for interacting solitons are established. A common approach to singularity analysis and soliton structure is introduced. The Painlevé analysis is modified in such a way that it carries over directly and without difficulty to the time evolution of singularity manifolds of equations like the sine-Gordon and nonlinear Schrödinger equations. A method to recover the Painlevé series from its constant level term is exhibited. The soliton-singularity transform is recognized to be connected to the Möbius group. This gives rise to a Darboux-like result for the spectral properties of the recursion operator. These connections are used to explain why poles of soliton equations move like trajectories of interacting solitons. Furthermore, it is explicitly computed how solitons of singularity equations behave under this soliton-singularity transform. This leads to the result that only for scaling degrees α = -1 and α = -2 can the usual Painlevé analysis be carried out. A new invariance principle, connected to kernels of differential operators, is discovered. This new invariance, for example, connects the explicit solutions of the Liouville equation with the Miura transform. Simple methods are exhibited which allow explicit solutions of equations like the Harry Dym equation to be computed from N-soliton solutions of the KdV equation (Bargmann potentials). Certain solutions are plotted.

  19. Robust control for fractional variable-order chaotic systems with non-singular kernel

    NASA Astrophysics Data System (ADS)

    Zuñiga-Aguilar, C. J.; Gómez-Aguilar, J. F.; Escobar-Jiménez, R. F.; Romero-Ugalde, H. M.

    2018-01-01

    This paper investigates chaos control for a class of variable-order fractional chaotic systems using a robust control strategy. The variable-order fractional models of the non-autonomous biological system, the King Cobra chaotic system, Halvorsen's attractor and the Burke-Shaw system have been derived using the fractional-order derivative with Mittag-Leffler kernel in the Liouville-Caputo sense. The fractional differential equations and the control law were solved using the Adams-Bashforth-Moulton algorithm. To test the efficiency and stability of the control, different statistical indicators were introduced. Finally, simulation results demonstrate the effectiveness of the proposed robust control.
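
    The Adams-Bashforth-Moulton algorithm has a standard fractional predictor-corrector form due to Diethelm, Ford and Freed. The sketch below applies that scheme to a constant-order Caputo equation D^alpha y = f(t, y), a simplification of the variable-order Mittag-Leffler-kernel operators used in the record, and checks it on a problem with known solution y(t) = t^alpha.

```python
import numpy as np
from math import gamma

def fabm(f, y0, alpha, T, N):
    """Fractional Adams-Bashforth-Moulton predictor-corrector
    (Diethelm-Ford-Freed) for the Caputo equation D^alpha y = f(t, y)."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.empty(N + 1)
    fv = np.empty(N + 1)
    y[0] = y0
    fv[0] = f(t[0], y0)
    for n in range(N):
        j = np.arange(n + 1)
        # predictor: fractional Adams-Bashforth weights
        b = (n + 1 - j) ** alpha - (n - j) ** alpha
        yp = y0 + h ** alpha / gamma(alpha + 1) * np.dot(b, fv[:n + 1])
        # corrector: fractional Adams-Moulton weights
        a = np.empty(n + 1)
        a[0] = n ** (alpha + 1) - (n - alpha) * (n + 1) ** alpha
        k = j[1:]
        a[1:] = ((n - k + 2) ** (alpha + 1) + (n - k) ** (alpha + 1)
                 - 2 * (n - k + 1) ** (alpha + 1))
        y[n + 1] = y0 + h ** alpha / gamma(alpha + 2) * (
            f(t[n + 1], yp) + np.dot(a, fv[:n + 1]))
        fv[n + 1] = f(t[n + 1], y[n + 1])
    return t, y

# check: the Caputo derivative of t^0.7 of order 0.7 is Gamma(1.7),
# so with constant f the exact solution is y(t) = t^0.7
alpha = 0.7
t, y = fabm(lambda t, y: gamma(1.7), 0.0, alpha, 1.0, 200)
err = abs(y[-1] - 1.0)
```

    For chaotic right-hand sides the same loop applies unchanged; only the memory sums grow with the step index, which is the computational price of the power-law kernel.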

  20. Analysis of a Generally Oriented Crack in a Functionally Graded Strip Sandwiched Between Two Homogeneous Half Planes

    NASA Technical Reports Server (NTRS)

    Shbeeb, N.; Binienda, W. K.; Kreider, K.

    1999-01-01

    The driving forces for a generally oriented crack embedded in a functionally graded strip sandwiched between two half planes are analyzed using singular integral equations with Cauchy kernels, which are integrated using Lobatto-Chebyshev collocation. Mixed-mode Stress Intensity Factors (SIF) and Strain Energy Release Rates (SERR) are calculated. The Stress Intensity Factors are compared for accuracy with previously published results. Parametric studies are conducted for various nonhomogeneity ratios, crack lengths, crack orientations and strip thicknesses. It is shown that the SERR is more complete and should be used for crack propagation analysis.

  1. Analysis of an Interface Crack for a Functionally Graded Strip Sandwiched between Two Homogeneous Layers of Finite Thickness

    NASA Technical Reports Server (NTRS)

    Shbeeh, N. I.; Binienda, W. K.

    1999-01-01

    The interface crack problem for a composite layer that consists of a homogeneous substrate, a coating and a non-homogeneous interface was formulated in terms of singular integral equations with Cauchy kernels and integrated using the Lobatto-Chebyshev collocation technique. Mixed-mode Stress Intensity Factors and Strain Energy Release Rates were calculated. The Stress Intensity Factors were compared for accuracy with relevant previously published results. Parametric studies were conducted for various thicknesses of each layer and for various non-homogeneity ratios. A particular application to a Zirconia thermal barrier on a steel substrate is demonstrated.

  2. Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes

    NASA Astrophysics Data System (ADS)

    Saini, Sahil; Singh, Parampreet

    2018-03-01

    We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations—the connection operator based ‘A’ quantization and the extrinsic curvature based ‘K’ quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the ‘A’ quantization. Whereas in the ‘K’ quantization these results can be obtained without including inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both the quantizations can result in potentially curvature divergent events if matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and geodesic evolution never breaks down for such events.

  3. Singular boundary value problem for the integrodifferential equation in an insurance model with stochastic premiums: Analysis and numerical solution

    NASA Astrophysics Data System (ADS)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2012-10-01

    A singular boundary value problem for a second-order linear integrodifferential equation with Volterra and non-Volterra integral operators is formulated and analyzed. The equation is defined on ℝ+, has a weak singularity at zero and a strong singularity at infinity, and depends on several positive parameters. Under natural constraints on the coefficients of the equation, existence and uniqueness theorems for this problem with given limit boundary conditions at singular points are proved, asymptotic representations of the solution are given, and an algorithm for its numerical determination is described. Numerical computations are performed and their interpretation is given. The problem arises in the study of the survival probability of an insurance company over infinite time (as a function of its initial surplus) in a dynamic insurance model that is a modification of the classical Cramer-Lundberg model with a stochastic process rate of premium under a certain investment strategy in the financial market. A comparative analysis of the results with those produced by the model with deterministic premiums is given.

  4. Flood susceptibility mapping using a novel ensemble weights-of-evidence and support vector machine models in GIS

    NASA Astrophysics Data System (ADS)

    Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah

    2014-05-01

    Flood is one of the most devastating natural disasters and occurs frequently in Terengganu, Malaysia. Recently, ensemble-based techniques have become extremely popular in flood modeling. In this paper, the weights-of-evidence (WoE) model was utilized first to assess the impact of the classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). Then, these factors were reclassified using the acquired weights and entered into the support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weak point of WoE can be overcome and the performance of the SVM enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four SVM kernel types (linear (LN), polynomial (PL), radial basis function (RBF), and sigmoid (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE and SVM method was tested using the area under the curve (AUC), which measured the prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods. The best results were obtained with the RBF kernel. The success rate and prediction rate for the ensemble WoE and RBF-SVM method were 96.48% and 95.67%, respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.

  5. Detecting weak position fluctuations from encoder signal using singular spectrum analysis.

    PubMed

    Xu, Xiaoqiang; Zhao, Ming; Lin, Jing

    2017-11-01

    Mechanical faults or defects will cause weak fluctuations in the position signal. Detection of such fluctuations via encoders can help determine the health condition and performance of the machine, and offers a promising alternative to the vibration-based monitoring scheme. However, besides the fluctuations of interest, the encoder signal also contains a large trend and some measurement noise. In applications, the trend is normally several orders of magnitude larger than the concerned fluctuations, which makes it difficult to detect the weak fluctuations without signal distortion. In addition, the fluctuations can be complicated and amplitude-modulated under non-stationary working conditions. To overcome this issue, singular spectrum analysis (SSA) is proposed in this paper for detecting weak position fluctuations from the encoder signal. It enables a complicated encoder signal to be reduced into several interpretable components, including a trend, a set of periodic fluctuations, and noise. A numerical simulation is given to demonstrate the performance of the method; it shows that SSA outperforms empirical mode decomposition (EMD) in terms of capability and accuracy. Moreover, linear encoder signals from a CNC machine tool are analyzed to determine the magnitudes and sources of fluctuations during feed motion. The proposed method is proven to be feasible and reliable for machinery condition monitoring. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Dynamics of DNA breathing: weak noise analysis, finite time singularity, and mapping onto the quantum Coulomb problem.

    PubMed

    Fogedby, Hans C; Metzler, Ralf

    2007-12-01

    We study the dynamics of denaturation bubbles in double-stranded DNA on the basis of the Poland-Scheraga model. We show that long time distributions for the survival of DNA bubbles and the size autocorrelation function can be derived from an asymptotic weak noise approach. In particular, below the melting temperature the bubble closure corresponds to a noisy finite time singularity. We demonstrate that the associated Fokker-Planck equation is equivalent to a quantum Coulomb problem. Below the melting temperature, the bubble lifetime is associated with the continuum of scattering states of the repulsive Coulomb potential; at the melting temperature, the Coulomb potential vanishes and the underlying first exit dynamics exhibits a long time power law tail; above the melting temperature, corresponding to an attractive Coulomb potential, the long time dynamics is controlled by the lowest bound state. Correlations and finite size effects are discussed.

  7. An Onsager Singularity Theorem for Turbulent Solutions of Compressible Euler Equations

    NASA Astrophysics Data System (ADS)

    Drivas, Theodore D.; Eyink, Gregory L.

    2017-12-01

    We prove that bounded weak solutions of the compressible Euler equations will conserve thermodynamic entropy unless the solution fields have sufficiently low space-time Besov regularity. A quantity measuring kinetic energy cascade will also vanish for such Euler solutions, unless the same singularity conditions are satisfied. It is shown furthermore that strong limits of solutions of compressible Navier-Stokes equations that are bounded and exhibit anomalous dissipation are weak Euler solutions. These inviscid limit solutions have non-negative anomalous entropy production and kinetic energy dissipation, with both vanishing when solutions are above the critical degree of Besov regularity. Stationary, planar shocks in Euclidean space with an ideal-gas equation of state provide simple examples that satisfy the conditions of our theorems and which demonstrate sharpness of our L^3-based conditions. These conditions involve space-time Besov regularity, but we show that they are satisfied by Euler solutions that possess similar space regularity uniformly in time.

  8. Stability conditions for exact-exchange Kohn-Sham methods and their relation to correlation energies from the adiabatic-connection fluctuation-dissipation theorem.

    PubMed

    Bleiziffer, Patrick; Schmidtel, Daniel; Görling, Andreas

    2014-11-28

    The occurrence of instabilities, in particular singlet-triplet and singlet-singlet instabilities, in the exact-exchange (EXX) Kohn-Sham method is investigated. Hessian matrices of the EXX electronic energy with respect to the expansion coefficients of the EXX effective Kohn-Sham potential in an auxiliary basis set are derived. The eigenvalues of these Hessian matrices determine whether or not instabilities are present. As in the corresponding Hartree-Fock case, instabilities in the EXX method are related to symmetry breaking of the Hamiltonian operator for the EXX orbitals. In the EXX method, symmetry breaking can easily be visualized by displaying the local multiplicative exchange potential. Examples (N2, O2, and the polyyne C10H2) of instabilities and symmetry breaking are discussed. The relation of the stability conditions for EXX methods to approaches calculating the Kohn-Sham correlation energy via the adiabatic-connection fluctuation-dissipation (ACFD) theorem is discussed. The existence or nonexistence of singlet-singlet instabilities in an EXX calculation is shown to indicate whether or not the frequency integration in the evaluation of the correlation energy is singular in the EXX-ACFD method. This method calculates the Kohn-Sham correlation energy through the ACFD theorem, employing, besides the Coulomb kernel, also the full frequency-dependent exchange kernel, and yields highly accurate electronic energies. For the case of singular frequency integrands in the EXX-ACFD method, a regularization is suggested. Finally, we present examples of molecular systems for which the self-consistent field procedure of the EXX as well as the Hartree-Fock method can converge to more than one local minimum depending on the initial conditions.

  9. Integrated Multiscale Modeling of Molecular Computing Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory Beylkin

    2012-03-23

    Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.

  10. Contact interaction of thin-walled elements with an elastic layer and an infinite circular cylinder under torsion

    NASA Astrophysics Data System (ADS)

    Kanetsyan, E. G.; Mkrtchyan, M. S.; Mkhitaryan, S. M.

    2018-04-01

    We consider a class of contact torsion problems on the interaction of thin-walled elements shaped as an elastic thin washer – a flat circular plate of small height – with an elastic layer, in particular with a half-space, and on the interaction of thin cylindrical shells with a solid elastic cylinder, infinite in both directions. The governing equations of the physical models of elastic thin washers and thin circular cylindrical shells under torsion are derived from the exact equations of the mathematical theory of elasticity using the Hankel and Fourier transforms. Within the framework of the accepted physical models, the solution of the contact problem between an elastic washer and an elastic layer is reduced to solving a Fredholm integral equation of the first kind with a kernel representable as a sum of the Weber–Sonin integral and some regular kernel, while solving the contact problem between a cylindrical shell and a solid cylinder is reduced to a singular integral equation (SIE). An effective method for solving the governing integral equations of these problems is specified.

  11. Indetermination of particle sizing by laser diffraction in the anomalous size ranges

    NASA Astrophysics Data System (ADS)

    Pan, Linchao; Ge, Baozhen; Zhang, Fugen

    2017-09-01

    The laser diffraction method is widely used to measure particle size distributions. It is generally accepted that the scattering angle becomes smaller with increasing particle size, so that the main peak of the scattered energy distribution in laser diffraction instruments shifts to smaller angles. This principle forms the foundation of the laser diffraction method. However, it is not entirely correct for non-absorbing particles in certain size ranges, which are called anomalous size ranges. Here, we derive analytical formulae for the bounds of the anomalous size ranges and discuss the influence of the width of the size segments on the signature of the Mie scattering kernel. This anomalous signature of the Mie scattering kernel results in an indetermination of the particle size distribution when measured by laser diffraction instruments in the anomalous size ranges. By using the singular-value decomposition method we interpret the mechanism of occurrence of this indetermination in detail and then validate its existence by inversion simulations.
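
    The indetermination mechanism can be illustrated generically with the singular-value decomposition. In this sketch a smooth Gaussian kernel stands in for the Mie scattering kernel (which requires special-function evaluations not reproduced here): the rapidly decaying singular values mark components of the size distribution that produce almost no change in the measured signal and are therefore left undetermined by inversion.

```python
import numpy as np

# first-kind Fredholm model g = A f on [0, 1]; a smooth Gaussian kernel
# stands in for the Mie scattering kernel of a laser diffraction instrument
x = np.linspace(0.0, 1.0, 120)
A = np.exp(-((x[:, None] - x[None, :]) / 0.1) ** 2)
U, s, Vt = np.linalg.svd(A)
cond = s[0] / s[-1]                 # enormous: the inversion is ill-posed
f_perturb = Vt[-1]                  # unit-norm change of the size distribution
g_change = np.linalg.norm(A @ f_perturb)   # nearly invisible in the data
```

    Any component of f lying along the trailing right singular vectors changes g by only the matching (tiny) singular value, so two very different size distributions can fit the same measured data within noise.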

  12. Fracture analysis of a transversely isotropic high temperature superconductor strip based on real fundamental solutions

    NASA Astrophysics Data System (ADS)

    Gao, Zhiwen; Zhou, Youhe

    2015-04-01

    A real fundamental solution for the fracture problem of a transversely isotropic high temperature superconductor (HTS) strip is obtained. The superconductor E-J constitutive law is characterized by the Bean model, in which the critical current density is independent of the flux density. Fracture analysis is performed by the method of singular integral equations, which are solved numerically by the Gauss-Lobatto-Chebyshev (GLC) collocation method. To guarantee satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results for the fracture parameters are obtained, and the effects of the geometric characteristics, the applied magnetic field and the critical current density on the stress intensity factors (SIF) are discussed.

  13. Fingering patterns in Hele-Shaw flows are density shock wave solutions of dispersionless KdV hierarchy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teodorescu, Razvan; Lee, S - Y; Wiegmann, P

    We investigate the hydrodynamics of a Hele-Shaw flow as the free boundary evolves from smooth initial conditions into a generic cusp singularity (of local geometry type x^3 ~ y^2), and then into a density shock wave. This novel solution preserves the integrability of the dynamics and, unlike all the weak solutions proposed previously, is not underdetermined. The evolution of the shock is such that the net vorticity remains zero, as before the critical time, and the shock can be interpreted as a singular line distribution of fluid deficit.

  14. Breathing pulses in singularly perturbed reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Veerman, Frits

    2015-07-01

    The weakly nonlinear stability of pulses in general singularly perturbed reaction-diffusion systems near a Hopf bifurcation is determined using a centre manifold expansion. A general framework to obtain leading order expressions for the (Hopf) centre manifold expansion for scale separated, localised structures is presented. Using the scale separated structure of the underlying pulse, directly calculable expressions for the Hopf normal form coefficients are obtained in terms of solutions to classical Sturm-Liouville problems. The developed theory is used to establish the existence of breathing pulses in a slowly nonlinear Gierer-Meinhardt system, and is confirmed by direct numerical simulation.

  15. Two Dimensional Dendritic Crystal Growth for Weak Undercooling

    NASA Technical Reports Server (NTRS)

    Tanveer, S.; Kunka, M. D.; Foster, M. R.

    1999-01-01

    We discuss the framework and issues brought forth in the recent work of Kunka, Foster & Tanveer, which incorporates small but nonzero surface energy effects in the nonlinear dynamics of a conformal mapping function z(zeta,t) that maps the upper-half zeta plane into the exterior of a dendrite. In this paper, surface energy effects on the singularities of z(zeta,t) in the lower-half plane were examined, as they move toward the real axis from below. In particular, the dynamics of complex singularities manifests itself in predictions on nature and growth rate of disturbances, as well as of coarsening.

  16. Conformally-flat, non-singular static metric in infinite derivative gravity

    NASA Astrophysics Data System (ADS)

    Buoninfante, Luca; Koshelev, Alexey S.; Lambiase, Gaetano; Marto, João; Mazumdar, Anupam

    2018-06-01

    In Einstein's theory of general relativity the vacuum solution yields a black hole with a curvature singularity, where there exists a point-like source with a Dirac delta distribution which is introduced as a boundary condition in the static case. It has been known for a while that the ghost-free infinite derivative theory of gravity can ameliorate such a singularity, at least at the level of linear perturbation around the Minkowski background. In this paper, we will show that the Schwarzschild metric does not satisfy the boundary condition at the origin within infinite derivative theory of gravity, since a Dirac delta source is smeared out by non-local gravitational interaction. We will also show that the spacetime metric becomes conformally flat and singularity-free within the non-local region, which can also be made devoid of an event horizon. Furthermore, the scale of non-locality ought to be as large as that of the Schwarzschild radius, in such a way that the gravitational potential in any metric has to be always bounded by one, implying that gravity remains weak from the infrared all the way up to the ultraviolet regime, in concurrence with the results obtained in [arXiv:1707.00273]. The singular Schwarzschild black hole can now be potentially replaced by a non-singular compact object, whose core is governed by the mass and the effective scale of non-locality.

  17. Editorial

    NASA Astrophysics Data System (ADS)

    Li, C. P.; Mainardi, F.

    2011-03-01

    Fractional calculus, in allowing integrals and derivatives of any positive real order (the term "fractional" is kept only for historical reasons), can be considered a branch of mathematical analysis which deals with integro-differential equations where the integrals are of convolution type and exhibit (weakly singular) kernels of power-law type. It has a history of at least three hundred years, because it can be traced back to a letter from G.W. Leibniz to G.A. de L'Hôpital and J. Wallis, dated 30 September 1695, in which the meaning of the one-half order derivative was first discussed and some remarks about its feasibility were made. Subsequent mention of fractional derivatives was made, in some context or the other, by L. Euler (1730), J.L. Lagrange (1772), P.S. Laplace (1812), S.F. Lacroix (1819), J.B.J. Fourier (1822), N.H. Abel (1823), J. Liouville (1832), B. Riemann (1847), H.L. Greer (1859), H. Holmgren (1865), A.K. Grünwald (1867), A.V. Letnikov (1868), N.Ya. Sonin (1869), H. Laurent (1884), P.A. Nekrassov (1888), A. Krug (1890), O. Heaviside (1892), S. Pincherle (1902), H. Weyl (1919), P. Lévy (1923), A. Marchaud (1927), H.T. Davis (1936), A. Zygmund (1945), M. Riesz (1949), W. Feller (1952), just to cite some relevant contributors up to the middle of the last century, see e.g. [1,2]. Recently, a poster illustrating the major contributors during the period 1695-1970 has been published [3].
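
    As a concrete illustration of such weakly singular power-law kernels, the Riemann-Liouville fractional integral can be approximated by direct quadrature of its convolution form. The sketch below (function name and midpoint rule are our illustrative choices, not from any particular reference) checks the half-order integral of f(t) = t against its closed form t^(3/2)/Gamma(5/2):

```python
import math
import numpy as np

def rl_fractional_integral(f, t, alpha, n=4000):
    """Riemann-Liouville fractional integral of order alpha at time t:
    (I^alpha f)(t) = 1/Gamma(alpha) * integral_0^t (t - s)^(alpha - 1) f(s) ds.
    The kernel (t - s)^(alpha - 1) is weakly singular at s = t for 0 < alpha < 1;
    a midpoint rule avoids evaluating the kernel exactly at the singularity."""
    h = t / n
    s = (np.arange(n) + 0.5) * h                # midpoints of subintervals
    kernel = (t - s) ** (alpha - 1.0)           # weakly singular power-law kernel
    return h * np.sum(kernel * f(s)) / math.gamma(alpha)

# Half-order integral of f(s) = s at t = 1; exact value is 1/Gamma(5/2)
approx = rl_fractional_integral(lambda s: s, t=1.0, alpha=0.5)
exact = 1.0 / math.gamma(2.5)
```

    The midpoint rule is the simplest choice that never evaluates the integrand at the singular point; dedicated product-integration rules converge faster for such kernels.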

  18. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). 
For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
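
    The convolution step that all of these kernel refinements feed into can be illustrated in one dimension: dose is the superposition of energy spread by the kernel around each primary interaction site, i.e., a convolution of TERMA with the energy deposition kernel. The sketch below is a deliberately simplified 1D toy with made-up attenuation and kernel parameters; it shows only the superposition idea, not the collapsed cone ray tracing, spectral variation, or material-specific kernels studied in the paper:

```python
import numpy as np

# Minimal 1D illustration of the convolution/superposition idea:
# dose(x) = sum_x' terma(x') * kernel(x - x').
mu = 0.05                                   # illustrative attenuation coefficient (1/mm)
depth = np.arange(0, 300, 1.0)              # depth grid in mm
terma = np.exp(-mu * depth)                 # primary energy released per unit mass

offsets = np.arange(-30, 31, 1.0)           # kernel support in mm
kernel = np.exp(-np.abs(offsets) / 5.0)     # toy energy-spread kernel
kernel /= kernel.sum()                      # normalize total deposited energy

dose = np.convolve(terma, kernel, mode="same")
```

    In this toy model, substituting a different kernel at each depth index would correspond to the spatially variant polyenergetic refinement, and a material-dependent kernel to the material-specific refinement.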

  19. Strong Cosmic Censorship

    NASA Astrophysics Data System (ADS)

    Isenberg, James

    2017-01-01

    The Hawking-Penrose theorems tell us that solutions of Einstein's equations are generally singular, in the sense of the incompleteness of causal geodesics (the paths of physical observers). These singularities might be marked by the blowup of curvature and therefore crushing tidal forces, or by the breakdown of physical determinism. Penrose has conjectured (in his "Strong Cosmic Censorship Conjecture") that it is generically unbounded curvature that causes singularities, rather than causal breakdown. The verification that "AVTD behavior" (marked by the domination of time derivatives over space derivatives) is generically present in a family of solutions has proven to be a useful tool for studying model versions of Strong Cosmic Censorship in that family. I review some of the history of Strong Cosmic Censorship, discuss what is known about AVTD behavior and Strong Cosmic Censorship in families of solutions defined by varying degrees of isometry, and describe recent results which we believe will extend this knowledge and provide new support for Strong Cosmic Censorship. I also comment on some of the recent work on "Weak Null Singularities", and how this relates to Strong Cosmic Censorship.

  20. Inverse Jacobi multiplier as a link between conservative systems and Poisson structures

    NASA Astrophysics Data System (ADS)

    García, Isaac A.; Hernández-Bermejo, Benito

    2017-08-01

    Some aspects of the relationship between conservativeness of a dynamical system (namely the preservation of a finite measure) and the existence of a Poisson structure for that system are analyzed. From the local point of view, due to the flow-box theorem we restrict ourselves to neighborhoods of singularities. In this sense, we characterize Poisson structures around the typical zero-Hopf singularity in dimension 3 under the assumption of having a local analytic first integral with non-vanishing first jet by connecting with the classical Poincaré center problem. From the global point of view, we connect the property of being strictly conservative (the invariant measure must be positive) with the existence of a Poisson structure depending on the phase space dimension. Finally, weak conservativeness in dimension two is introduced by the extension of inverse Jacobi multipliers as weak solutions of its defining partial differential equation and some of its applications are developed. Examples including Lotka-Volterra systems, quadratic isochronous centers, and non-smooth oscillators are provided.

  1. Research on offense and defense technology for iOS kernel security mechanism

    NASA Astrophysics Data System (ADS)

    Chu, Sijun; Wu, Hao

    2018-04-01

    iOS is a robust and widely used mobile operating system; its annual profits make up about 90% of the total profits of all mobile phone brands. Though it is famous for its security, there have been many attacks on the iOS operating system, such as the Trident APT attack in 2016. So it is important to research the iOS security mechanism, understand its weaknesses, and put forward a targeted protection and security check framework. By studying these attacks and previous jailbreak tools, we can see that an attacker can only run ROP code and gain kernel read and write permissions after exploiting kernel- and user-layer vulnerabilities. However, the iOS operating system is still protected by the code signing mechanism, the sandbox mechanism, and the non-writable protection of the system's disk area. This is far from the steady, long-lasting control that attackers expect. Before iOS 9, these security mechanisms were usually broken by modifying the kernel's important data structures and the code logic of the security mechanisms. However, with iOS 9 the kernel integrity protection (KPP) mechanism was added to the 64-bit operating system, and none of the previous methods work on the new versions of iOS [1]. But this does not mean that attackers cannot break through. Therefore, based on an analysis of the vulnerabilities of the KPP security mechanism, this paper implements two possible methods of breaking through the kernel security mechanism on iOS 9 and iOS 10. Meanwhile, we propose a defense method based on kernel integrity detection and sensitive API call detection to defend against the breakthrough methods mentioned above, and our experiments show that this method can prevent and detect attack attempts and intruders effectively and in a timely manner.

  2. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Given the weak early degradation characteristic information during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract the noisy part of the signal and subject it to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.

  3. Aerodynamics Via Acoustics: Application of Acoustic Formulas for Aerodynamic Calculations

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Myers, M. K.

    1986-01-01

    Prediction of aerodynamic loads on bodies in arbitrary motion is considered from an acoustic point of view, i.e., in a frame of reference fixed in the undisturbed medium. An inhomogeneous wave equation which governs the disturbance pressure is constructed and solved formally using generalized function theory. When the observer is located on the moving body surface there results a singular linear integral equation for surface pressure. Two different methods for obtaining such equations are discussed. Both steady and unsteady aerodynamic calculations are considered. Two examples are presented, the more important being an application to propeller aerodynamics. Of particular interest for numerical applications is the analytical behavior of the kernel functions in the various integral equations.

  4. Quantitative trait loci mapping for Gibberella ear rot resistance and associated agronomic traits using genotyping-by-sequencing in maize.

    PubMed

    Kebede, Aida Z; Woldemariam, Tsegaye; Reid, Lana M; Harris, Linda J

    2016-01-01

    Unique and co-localized chromosomal regions affecting Gibberella ear rot disease resistance and correlated agronomic traits were identified in maize. Dissecting the mechanisms underlying resistance to Gibberella ear rot (GER) disease in maize provides insight towards more informed breeding. To this goal, we evaluated 410 recombinant inbred lines (RIL) for GER resistance over three testing years using silk channel and kernel inoculation techniques. RILs were also evaluated for agronomic traits like days to silking, husk cover, and kernel drydown rate. The RILs showed significant genotypic differences for all traits with above average to high heritability estimates. Significant (P < 0.01) but weak genotypic correlations were observed between disease severity and agronomic traits, indicating the involvement of agronomic traits in disease resistance. Common QTLs were detected for GER resistance and kernel drydown rate, suggesting the existence of pleiotropic genes that could be exploited to improve both traits at the same time. The QTLs identified for silk and kernel resistance shared some common regions on chromosomes 1, 2, and 8 and also had some regions specific to each tissue on chromosomes 9 and 10. Thus, effective GER resistance breeding could be achieved by considering screening methods that allow exploitation of tissue-specific disease resistance mechanisms and include kernel drydown rate either in an index or as indirect selection criterion.

  5. A Non-Local, Energy-Optimized Kernel: Recovering Second-Order Exchange and Beyond in Extended Systems

    NASA Astrophysics Data System (ADS)

    Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn

    The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential. Current affiliation: King's College London.

  6. The Swift-Hohenberg equation with a nonlocal nonlinearity

    NASA Astrophysics Data System (ADS)

    Morgan, David; Dawes, Jonathan H. P.

    2014-03-01

    It is well known that aspects of the formation of localised states in a one-dimensional Swift-Hohenberg equation can be described by Ginzburg-Landau-type envelope equations. This paper extends these multiple scales analyses to cases where an additional nonlinear integral term, in the form of a convolution, is present. The presence of a kernel function introduces a new lengthscale into the problem, and this results in additional complexity in both the derivation of envelope equations and in the bifurcation structure. When the kernel is short-range, weakly nonlinear analysis results in envelope equations of standard type but whose coefficients are modified in complicated ways by the nonlinear nonlocal term. Nevertheless, these computations can be formulated quite generally in terms of properties of the Fourier transform of the kernel function. When the lengthscale associated with the kernel is longer, our method leads naturally to the derivation of two different, novel, envelope equations that describe aspects of the dynamics in these new regimes. The first of these contains additional bifurcations, and unexpected loops in the bifurcation diagram. The second of these captures the stretched-out nature of the homoclinic snaking curves that arises due to the nonlocal term.
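
    On a periodic domain, the role of the Fourier transform of the kernel can be made concrete: the nonlocal convolution term is simply multiplication by the kernel's Fourier transform in spectral space. A minimal sketch follows; the Gaussian kernel and its width are illustrative choices, not the kernels analysed in the paper:

```python
import numpy as np

# Evaluate the nonlocal term (K * u)(x) = integral K(x - y) u(y) dy spectrally.
L, N = 40 * np.pi, 1024
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # wavenumber grid

sigma = 2.0                                   # kernel lengthscale (illustrative)
K_hat = np.exp(-(sigma * k) ** 2 / 2)         # FT of a normalized Gaussian kernel

u = np.cos(x / 4)                             # sample field, on-grid wavenumber
conv = np.real(np.fft.ifft(K_hat * np.fft.fft(u)))
```

    For a single Fourier mode u = cos(qx), the convolution just rescales the mode by K_hat(q), which is why the envelope-equation coefficients can be expressed through the kernel's Fourier transform.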

  7. Weak fault detection and health degradation monitoring using customized standard multiwavelets

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Wang, Yu; Peng, Yizhen; Wei, Chenjun

    2017-09-01

    Because their weak symptoms are contaminated by a large amount of background noise, it is challenging to detect weak faults in advance and to predictively monitor them for machinery security assurance. Multiwavelets can act as adaptive non-stationary signal processing tools, potentially viable for weak fault diagnosis. However, signal-based multiwavelets suffer from several problems: imperfect properties that miss the crucial orthogonality, decomposition distortion that cannot reflect the relationships between faults and their signatures, single-objective optimization, and a lack of connection to fault prognosis. Thus, customized standard multiwavelets are proposed for weak fault detection and health degradation monitoring, especially quantitative identification of weak fault signatures. First, flexible standard multiwavelets are designed using a construction method derived from scalar wavelets, securing the desired properties for accurate detection of weak faults and avoiding the distortion issue in quantitative feature identification. Second, a multi-objective optimization combining three dimensionless indicators, the normalized energy entropy, the normalized singular entropy, and the kurtosis index, is introduced as the evaluation criterion; it aids selection of the best basis functions for weak faults, independent of variable working conditions. Third, an ensemble health indicator, fusing the kurtosis index, impulse index, and clearance index of the original signal with the normalized energy entropy and normalized singular entropy of the customized standard multiwavelet decomposition, is constructed using the Mahalanobis distance to continuously monitor the health condition and track performance degradation. Finally, three experimental case studies demonstrate the feasibility and effectiveness of the proposed method.
The results show that the proposed method can quantitatively identify the fault signature of a slight rub on the inner race of a locomotive bearing, effectively detect and locate the potential failure from a complicated epicyclic gear train and successfully reveal the fault development and performance degradation of a test bearing in the lifetime.
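
    The fusion of several condition indicators into a single health indicator via the Mahalanobis distance, as in the third step above, can be sketched as follows. The feature set and sizes here are arbitrary illustrations, not the paper's five specific indicators:

```python
import numpy as np

def health_indicator(baseline, current):
    """Mahalanobis distance of the current feature vector (e.g., kurtosis,
    impulse, and clearance indices plus entropies) from the healthy baseline
    cloud. Larger values indicate stronger deviation from healthy condition."""
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    d = current - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Toy check: 5 fused indicators, healthy baseline vs a degraded sample
rng = np.random.default_rng(42)
baseline = rng.standard_normal((200, 5))      # healthy-condition feature history
healthy_sample = rng.standard_normal(5)
degraded_sample = healthy_sample + 5.0        # shifted ~5 sigma in every feature
hi_healthy = health_indicator(baseline, healthy_sample)
hi_degraded = health_indicator(baseline, degraded_sample)
```

    Tracking this scalar over time yields the degradation trend; a threshold on it triggers the early fault warning.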

  8. End-use quality of CIMMYT-derived soft kernel durum wheat germplasm. II. Dough strength and pan bread quality

    USDA-ARS?s Scientific Manuscript database

    Durum wheat (Triticum turgidum ssp. durum) is considered unsuitable for the majority of commercial bread production because its weak gluten strength combined with flour particle size and flour starch damage after milling are not commensurate with hexaploid wheat flours. Recently a new durum cultivar...

  9. Multistage adsorption of diffusing macromolecules and viruses

    NASA Astrophysics Data System (ADS)

    Chou, Tom; D'Orsogna, Maria R.

    2007-09-01

    We derive the equations that describe adsorption of diffusing particles onto a surface followed by additional surface kinetic steps before being transported across the interface. Multistage surface kinetics occurs during membrane protein insertion, cell signaling, and the infection of cells by virus particles. For example, viral entry into healthy cells is possible only after a series of receptor and coreceptor binding events occurs at the cellular surface. We couple the diffusion of particles in the bulk phase with the multistage surface kinetics and derive an effective, integrodifferential boundary condition that contains a memory kernel embodying the delay induced by the surface reactions. This boundary condition takes the form of a singular perturbation problem in the limit where particle-surface interactions are short ranged. Moreover, depending on the surface kinetics, the delay kernel induces a nonmonotonic, transient replenishment of the bulk particle concentration near the interface. The approach generalizes that of Ward and Tordai [J. Chem. Phys. 14, 453 (1946)] and Diamant and Andelman [Colloids Surf. A 183-185, 259 (2001)] to include surface kinetics, giving rise to qualitatively new behaviors. Our analysis also suggests a simple scheme by which stochastic surface reactions may be coupled to deterministic bulk diffusion.

  10. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
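
    Of the three regularizers compared, TSVD is the simplest to sketch: singular values below a tolerance are discarded before forming the pseudoinverse, rather than being (unstably) inverted. A minimal illustration on a generic ill-conditioned system follows; the Hilbert-like matrix and tolerance are illustrative stand-ins, not the LS_NUFFT interpolator itself:

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-8):
    """Solve A x ~= b with a truncated-SVD pseudoinverse: singular values
    below tol * s_max are discarded instead of being inverted."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Ill-conditioned toy system: a Hilbert matrix
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-10 * np.random.default_rng(1).standard_normal(n)

x_naive = np.linalg.solve(A, b)      # amplifies the noise enormously
x_tsvd = tsvd_solve(A, b)            # damps the small singular directions
```

    Tikhonov regularization replaces the hard truncation with smooth damping factors s_i / (s_i^2 + lambda^2), trading sharpness of the cutoff for continuity in the solution.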

  11. The correlation of chemical and physical corn kernel traits with production performance in broiler chickens and laying hens.

    PubMed

    Moore, S M; Stalder, K J; Beitz, D C; Stahl, C H; Fithian, W A; Bregendahl, K

    2008-04-01

    A study was conducted to determine the influence on broiler chicken growth and laying hen performance of chemical and physical traits of corn kernels from different hybrids. A total of 720 male 1-d-old Ross-308 broiler chicks were allotted to floor pens in 2 replicated experiments with a randomized complete block design. A total of 240 fifty-two-week-old Hy-Line W-36 laying hens were allotted to cages in a randomized complete block design. Corn-soybean meal diets were formulated for 3 broiler growth phases and one 14-wk-long laying hen phase to be marginally deficient in Lys and TSAA to allow for the detection of differences or correlations attributable to corn kernel chemical or physical traits. The broiler chicken diets were also marginally deficient in Ca and nonphytate P. Within a phase, corn- and soybean-based diets containing equal amounts of 1 of 6 different corn hybrids were formulated. The corn hybrids were selected to vary widely in chemical and physical traits. Feed consumption and BW were recorded for broiler chickens every 2 wk from 0 to 6 wk of age. Egg production was recorded daily, and feed consumption and egg weights were recorded weekly for laying hens between 53 and 67 wk of age. Physical and chemical composition of kernels was correlated with performance measures by multivariate ANOVA. Chemical and physical kernel traits were weakly correlated with performance in broiler chickens from 0 to 2 wk of age (P<0.05, | r |<0.42). However, from 4 to 6 wk of age and 0 to 6 wk of age, only kernel chemical traits were correlated with broiler chicken performance (P<0.05, | r |<0.29). From 53 to 67 wk of age, correlations were observed between both kernel physical and chemical traits and laying hen performance (P<0.05, | r |<0.34). In both experiments, the correlations of performance measures with individual kernel chemical and physical traits for any single kernel trait were not large enough to base corn hybrid selection on for feeding poultry.

  12. On the nonlinear three dimensional instability of Stokes layers and other shear layers to pairs of oblique waves

    NASA Technical Reports Server (NTRS)

    Wu, Xuesong; Lee, Sang Soo; Cowley, Stephen J.

    1992-01-01

    The nonlinear evolution of a pair of initially oblique waves in a high-Reynolds-number Stokes layer is studied. Attention is focused on times when disturbances of amplitude epsilon have O(epsilon^(1/3) R) growth rates, where R is the Reynolds number. The development of a pair of oblique waves is then controlled by nonlinear critical-layer effects. Viscous effects are included by studying the distinguished scaling epsilon = O(R^(-1)). This leads to a complicated modification of the kernel function in the integro-differential amplitude equation. When viscosity is not too large, solutions to the amplitude equation develop a finite-time singularity, indicating that explosive growth can be introduced by nonlinear effects; we suggest that such explosive growth can lead to the bursts observed in experiments. Increasing the importance of viscosity generally delays the occurrence of the finite-time singularity, and sufficiently large viscosity may lead to the disturbance decaying exponentially. For the special case when the streamwise and spanwise wavenumbers are equal, the solution can evolve into a periodic oscillation. A link between the unsteady critical-layer approach to high-Reynolds-number flow instability and the wave-vortex approach is identified.

  13. Compactness and robustness: Applications in the solution of integral equations for chemical kinetics and electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Yajun

    This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of the light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment to electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics. 
Meanwhile, exploiting the properties of compact operators, we outline the geometric and physical conditions that guarantee a robust solution to the light scattering problem, and devise an asymptotic solution to the Born equation of electromagnetic scattering for arbitrarily shaped dielectrics in a non-perturbative manner.

  14. Inversion of residual stress profiles from ultrasonic Rayleigh wave dispersion data

    NASA Astrophysics Data System (ADS)

    Mora, P.; Spies, M.

    2018-05-01

    We investigate theoretically and with synthetic data the performance of several inversion methods for inferring a residual stress state from ultrasonic surface wave dispersion data. We show that, in relevant materials, this particular problem may reveal undesired behaviors in methods that can otherwise be reliably applied to infer other properties. We focus on two methods, one based on a Taylor expansion, and another based on a piecewise linear expansion regularized by a singular value decomposition. We explain the instabilities of the Taylor-based method by highlighting singularities in the series of coefficients. At the same time, we show that the other method can successfully provide performance that depends only weakly on the material.

  15. Finite-frequency sensitivity kernels for head waves

    NASA Astrophysics Data System (ADS)

    Zhang, Zhigang; Shen, Yang; Zhao, Li

    2007-11-01

    Head waves are extremely important in determining the structure of the predominantly layered Earth. While several recent studies have shown the diffractive nature and the 3-D Fréchet kernels of finite-frequency turning waves, analogues of head waves in a continuous velocity structure, the finite-frequency effects and sensitivity kernels of head waves are yet to be carefully examined. We present the results of a numerical study focusing on the finite-frequency effects of head waves. Our model has a low-velocity layer over a high-velocity half-space and a cylindrical-shaped velocity perturbation placed beneath the interface at different locations. A 3-D finite-difference method is used to calculate synthetic waveforms. Traveltime and amplitude anomalies are measured by the cross-correlation of synthetic seismograms from models with and without the velocity perturbation and are compared to the 3-D sensitivity kernels constructed from full waveform simulations. The results show that the head wave arrival-time and amplitude are influenced by the velocity structure surrounding the ray path in a pattern that is consistent with the Fresnel zones. Unlike the `banana-doughnut' traveltime sensitivity kernels of turning waves, the traveltime sensitivity of the head wave along the ray path below the interface is weak, but non-zero. Below the ray path, the traveltime sensitivity reaches the maximum (absolute value) at a depth that depends on the wavelength and propagation distance. The sensitivity kernels vary with the vertical velocity gradient in the lower layer, but the variation is relatively small at short propagation distances when the vertical velocity gradient is within the range of the commonly accepted values. Finally, the depression or shoaling of the interface results in increased or decreased sensitivities, respectively, beneath the interface topography.
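
    The cross-correlation measurement of traveltime anomalies used here can be sketched simply: the anomaly is the lag that maximizes the cross-correlation between seismograms computed with and without the velocity perturbation. The pulse shape, frequencies, and delay below are illustrative only:

```python
import numpy as np

def traveltime_shift(ref, pert, dt):
    """Traveltime anomaly of `pert` relative to `ref`, from the lag that
    maximizes their cross-correlation (positive = pert arrives later)."""
    xcorr = np.correlate(pert, ref, mode="full")
    lag = int(np.argmax(xcorr)) - (len(ref) - 1)
    return lag * dt

# Toy check: a Ricker-like pulse delayed by 0.12 s
dt = 0.004
t = np.arange(-1.0, 1.0, dt)
ricker = lambda tau: (1 - 2 * (np.pi * 8 * tau) ** 2) * np.exp(-(np.pi * 8 * tau) ** 2)
ref = ricker(t)            # synthetic from the reference model
pert = ricker(t - 0.12)    # synthetic from the perturbed model, arriving later
shift = traveltime_shift(ref, pert, dt)
```

    In finite-frequency tomography the same measurement is made in narrow frequency bands, which is what gives the traveltime anomaly its wavelength-dependent sensitivity kernel.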

  16. Selectively enhanced photocurrent generation in twisted bilayer graphene with van Hove singularity

    PubMed Central

    Yin, Jianbo; Wang, Huan; Peng, Han; Tan, Zhenjun; Liao, Lei; Lin, Li; Sun, Xiao; Koh, Ai Leen; Chen, Yulin; Peng, Hailin; Liu, Zhongfan

    2016-01-01

    Graphene with ultra-high carrier mobility and ultra-short photoresponse time has shown remarkable potential in ultrafast photodetection. However, the broad and weak optical absorption (∼2.3%) of monolayer graphene hinders its practical application in photodetectors with high responsivity and selectivity. Here we demonstrate that twisted bilayer graphene, a stack of two graphene monolayers with an interlayer twist angle, exhibits a strong light–matter interaction and selectively enhanced photocurrent generation. Such enhancement is attributed to the emergence of unique twist-angle-dependent van Hove singularities, which are directly revealed by spatially resolved angle-resolved photoemission spectroscopy. When the energy interval between the van Hove singularities of the conduction and valence bands matches the energy of incident photons, the photocurrent generated can be significantly enhanced (up to ∼80 times with the integration of plasmonic structures in our devices). These results provide valuable insight for designing graphene photodetectors with enhanced sensitivity for variable wavelengths. PMID:26948537

  17. Rare regions and Griffiths singularities at a clean critical point: the five-dimensional disordered contact process.

    PubMed

    Vojta, Thomas; Igo, John; Hoyos, José A

    2014-07-01

    We investigate the nonequilibrium phase transition of the disordered contact process in five space dimensions by means of optimal fluctuation theory and Monte Carlo simulations. We find that the critical behavior is of mean-field type, i.e., identical to that of the clean five-dimensional contact process. It is accompanied by off-critical power-law Griffiths singularities whose dynamical exponent z' saturates at a finite value as the transition is approached. These findings resolve the apparent contradiction between the Harris criterion, which implies that weak disorder is renormalization-group irrelevant, and the rare-region classification, which predicts unconventional behavior. We confirm and illustrate our theory by large-scale Monte Carlo simulations of systems with up to 70^5 sites. We also relate our results to a recently established general relation between the Harris criterion and Griffiths singularities [Phys. Rev. Lett. 112, 075702 (2014)], and we discuss implications for other phase transitions.
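
    The contact process underlying these simulations is straightforward to sketch in code. Below is a minimal 1-D toy version (the paper works in five dimensions at far larger scales; the lattice size, rates, and sequential-update scheme here are illustrative assumptions only):

```python
import random

def contact_process(L=100, lam=6.0, steps=20_000, seed=1):
    """Sequential-update contact process on a 1-D ring of L sites.

    Each update picks a random active site; with probability
    lam/(1 + lam) it activates a random neighbour (infection at
    rate lam), otherwise it becomes inactive (death at rate 1).
    Returns the final density of active sites.
    """
    random.seed(seed)
    active = set(range(L))            # start fully active
    for _ in range(steps):
        if not active:                # absorbing state reached
            break
        site = random.choice(tuple(active))
        if random.random() < lam / (1.0 + lam):
            active.add((site + random.choice((-1, 1))) % L)
        else:
            active.discard(site)
    return len(active) / L
```

    Well below the critical rate the process falls into the absorbing (all-inactive) state; well above it a finite density of active sites survives.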

  18. Topological features of vector vortex beams perturbed with uniformly polarized light

    PubMed Central

    D’Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-01

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams. PMID:28079134
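
    The formation of such a singularity pair can be reproduced numerically in a few lines. The sketch below uses a simplified scalar-envelope model (the radial-beam amplitude A(r) = r·exp(-r²) and a uniform x-polarized field in phase quadrature are illustrative assumptions, not the experimental beams): C-points are located where the scalar E·E = Ex² + Ey² vanishes.

```python
import numpy as np

eps = 0.1                                  # perturbation amplitude (arbitrary units)
x = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

# radial vector beam (Ex, Ey) ~ (cos phi, sin phi) * A(r) plus a uniform
# x-polarized field; the factor 1j supplies the relative phase that splits
# the unstable on-axis singularity into circular-polarization points
Ex = X * np.exp(-R**2) + 1j * eps
Ey = Y * np.exp(-R**2)

W = Ex**2 + Ey**2                          # zeros of E.E mark C-points
i, j = np.unravel_index(np.abs(W).argmin(), W.shape)
```

    In this model the zeros sit off-axis (on the y-axis, at radii where A(r) = eps), mirroring the pair of singular points whose separation is controlled by the relative field amplitudes.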

  19. Topological features of vector vortex beams perturbed with uniformly polarized light

    NASA Astrophysics Data System (ADS)

    D'Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-01

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell’s equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.

  20. Topological features of vector vortex beams perturbed with uniformly polarized light.

    PubMed

    D'Errico, Alessio; Maffei, Maria; Piccirillo, Bruno; de Lisio, Corrado; Cardano, Filippo; Marrucci, Lorenzo

    2017-01-12

    Optical singularities manifesting at the center of vector vortex beams are unstable, since their topological charge is higher than the lowest value permitted by Maxwell's equations. Inspired by conceptually similar phenomena occurring in the polarization pattern characterizing the skylight, we show how perturbations that break the symmetry of radially symmetric vector beams lead to the formation of a pair of fundamental and stable singularities, i.e. points of circular polarization. We prepare a superposition of a radial (or azimuthal) vector beam and a uniformly linearly polarized Gaussian beam; by varying the amplitudes of the two fields, we control the formation of pairs of these singular points and their spatial separation. We complete this study by applying the same analysis to vector vortex beams with higher topological charges, and by investigating the features that arise when increasing the intensity of the Gaussian term. Our results can find application in the context of singularimetry, where weak fields are measured by considering them as perturbations of unstable optical beams.

  1. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of the amount of water that will enter the reservoirs in the following month is of vital importance, especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems of the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey was used to train the method, which was then applied to predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT decomposes the original time series into a series of wavelets, SSA separates the time series into a trend, an oscillatory component and a noise component by singular value decomposition, and CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. All three methods of producing the input matrix for the SVR proved successful, with the SVR-WT combination yielding the highest coefficient of determination and the lowest mean absolute error.
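
    The embedding-plus-kernel-regression pipeline can be sketched compactly. The toy below substitutes kernel ridge regression for SVR (both perform a linear regression in the kernel-induced feature space), builds the input matrix by a delay embedding in the spirit of the chaotic approach, and uses a synthetic seasonal series; all data and parameters are illustrative, not the Kızılırmak record:

```python
import numpy as np

def delay_embed(series, m, tau=1):
    """Phase-space style input matrix: row t holds the lag vector
    [x_t, x_{t-tau}, ..., x_{t-(m-1)tau}]; the target is x_{t+1}."""
    X, y = [], []
    for t in range((m - 1) * tau, len(series) - 1):
        X.append([series[t - k * tau] for k in range(m)])
        y.append(series[t + 1])
    return np.array(X), np.array(y)

def kernel_ridge(X, y, Xq, gamma=0.01, alpha=1e-3):
    """RBF-kernel ridge regression, standing in for SVR: a linear
    regression performed in the kernel feature space."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    coef = np.linalg.solve(rbf(X, X) + alpha * np.eye(len(X)), y)
    return rbf(Xq, X) @ coef

# synthetic 'monthly flow': seasonal cycle plus a weak trend (illustrative)
t = np.arange(240)
flow = 10.0 + 3.0 * np.sin(2 * np.pi * t / 12) + 0.01 * t
X, y = delay_embed(flow, m=12)
pred = kernel_ridge(X[:-12], y[:-12], X[-12:])   # predict the last year
```

    Even this crude setup beats a climatological-mean predictor on the held-out year, which is the basic sanity check for any of the hybrid input matrices.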

  2. Weakly Nonlinear Description of Parametric Instabilities in Vibrating Flows

    NASA Technical Reports Server (NTRS)

    Knobloch, E.; Vega, J. M.

    1999-01-01

    This project focuses on the effects of weak dissipation on vibrational flows in microgravity and in particular on (a) the generation of mean flows through viscous effects and their reaction on the flows themselves, and (b) the effects of finite group velocity and dispersion on the resulting dynamics in large domains. The basic mechanism responsible for the generation of such flows is nonlinear and was identified by Schlichting [21] and Longuet-Higgins. However, only recently has it become possible to describe such flows self-consistently in terms of amplitude equations for the parametrically excited waves coupled to a mean flow equation. The derivation of these equations is nontrivial because the limit of zero viscosity is singular. This project focuses on various aspects of this singular problem (i.e., the limit C ≡ ν(gh³)^(-1/2) ≪ 1, where ν is the kinematic viscosity and h is the liquid depth) in the weakly nonlinear regime. A number of distinct cases are identified, depending on the values of the Bond number, the size of the nonlinear terms, the distance above threshold and the length scales of interest. The theory provides a quantitative explanation of a number of experiments on the vibration modes of liquid bridges and related experiments on parametric excitation of capillary waves in containers of both small and large aspect ratio. The following is a summary of results obtained thus far.

  3. Digging Deeper: Understanding Non-Proficient Students through an Understanding of Reading and Motivational Profiles

    ERIC Educational Resources Information Center

    Smith, Hiawatha D.

    2017-01-01

    With the continued emphasis on accountability for students, schools are working to increase the reading academic performance of their non-proficient students. Many remedial approaches fail to identify the individual strengths and weaknesses and tend to treat these students with a singular remedial focus on word identification (Allington, 2001). In…

  4. Guided solitary waves.

    PubMed

    Miles, J

    1980-04-01

    Transversely periodic solitary-wave solutions of the Boussinesq equations (which govern wave propagation in a weakly dispersive, weakly nonlinear physical system) are determined. The solutions for negative dispersion (e.g., gravity waves) are singular and therefore physically unacceptable. The solutions for positive dispersion (e.g., capillary waves or magnetosonic waves in a plasma) are physically acceptable except in a limited parametric interval, in which they are complex. The two end points of this interval are associated with (two different) resonant interactions among three basic solitary waves, two of which are two-dimensional complex conjugates and the third of which is one-dimensional and real.

  5. An accurate boundary element method for the exterior elastic scattering problem in two dimensions

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Xu, Liwei; Yin, Tao

    2017-11-01

    This paper is concerned with a Galerkin boundary element method for solving the two-dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In the numerical implementation, a newly derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of the hyper-singular boundary integral operator. A new computational approach based on series expansions of Hankel functions is employed for the computation of the weakly-singular boundary integral operators during the reduction of the corresponding Galerkin equations to a discrete linear system. The effectiveness of the proposed numerical methods is demonstrated using several numerical examples.
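
    On the weakly-singular ingredient: a logarithmic endpoint singularity can be tamed by a graded substitution before applying standard Gauss-Legendre quadrature. This generic sketch (not the series-expansion approach of the paper) integrates f(x)·log(x) over (0, 1]:

```python
import numpy as np

def weakly_singular_quad(f, n=40, p=5):
    """Integrate f(x)*log(x) on (0, 1] via the graded substitution
    x = t**p, which smooths the logarithmic endpoint singularity
    (the integrand becomes p^2 * t^(p-1) * log(t) * f(t^p)),
    followed by n-point Gauss-Legendre quadrature on [0, 1]."""
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)              # map nodes [-1, 1] -> [0, 1]
    w = 0.5 * w
    x = t ** p
    # log(x) = p*log(t); the Jacobian dx/dt = p*t^(p-1)
    vals = p * np.log(t) * f(x) * p * t ** (p - 1)
    return float(np.sum(w * vals))
```

    The grading exponent p trades endpoint clustering of nodes against polynomial degree; p = 5 already reproduces the exact values of simple test integrals to near machine precision.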

  6. The Casimir effect in rugby-ball type flux compactifications

    NASA Astrophysics Data System (ADS)

    Minamitsuji, M.

    2008-04-01

    We discuss volume stabilization in a 6D braneworld model based on 6D supergravity theory. The internal space is compactified by magnetic flux and contains codimension-two 3-branes (conical singularities) as its boundaries. In general the external 4D spacetime is warped, and in the unwarped limit the shape of the internal space looks like a 'rugby ball'. The size of the internal space is not fixed due to the scale invariance of the supergravity theory. We discuss the possibility of volume stabilization by the Casimir effect for a massless, minimally coupled bulk scalar field. The main obstacle in studying this case is that the brane (conical) part of the relevant heat kernel coefficient (a_6) has not been formulated. Thus as a first step, we consider the 4D analog model with boundary codimension-two 1-branes. The spacetime structure of the 4D model is very similar to that of the original 6D model, and the relevant heat kernel coefficient is now well known. We derive the one-loop effective potential induced by a scalar field in the bulk by employing zeta function regularization with heat kernel analysis. As a result, the volume is stabilized for most possible choices of the parameters. In particular, for a larger degree of warping, our results imply that a large hierarchy between the mass scales and a tiny amount of effective cosmological constant can be realized on the brane. In the non-warped limit the ratio tends to converge to the same value, independently of the bulk gauge coupling constant. Finally, we analyze volume stabilization in the original 6D model by employing the same mode-sum technique.

  7. Entanglement Entropy of Black Holes.

    PubMed

    Solodukhin, Sergey N

    2011-01-01

    The entanglement entropy is a fundamental quantity, which characterizes the correlations between sub-systems in a larger quantum-mechanical system. For two sub-systems separated by a surface the entanglement entropy is proportional to the area of the surface and depends on the UV cutoff, which regulates the short-distance correlations. The geometrical nature of entanglement-entropy calculation is particularly intriguing when applied to black holes when the entangling surface is the black-hole horizon. I review a variety of aspects of this calculation: the useful mathematical tools such as the geometry of spaces with conical singularities and the heat kernel method, the UV divergences in the entropy and their renormalization, the logarithmic terms in the entanglement entropy in four and six dimensions and their relation to the conformal anomalies. The focus in the review is on the systematic use of the conical singularity method. The relations to other known approaches such as 't Hooft's brick-wall model and the Euclidean path integral in the optical metric are discussed in detail. The puzzling behavior of the entanglement entropy due to fields, which non-minimally couple to gravity, is emphasized. The holographic description of the entanglement entropy of the blackhole horizon is illustrated on the two- and four-dimensional examples. Finally, I examine the possibility to interpret the Bekenstein-Hawking entropy entirely as the entanglement entropy.

  8. Entanglement Entropy of Black Holes

    NASA Astrophysics Data System (ADS)

    Solodukhin, Sergey N.

    2011-10-01

    The entanglement entropy is a fundamental quantity, which characterizes the correlations between sub-systems in a larger quantum-mechanical system. For two sub-systems separated by a surface the entanglement entropy is proportional to the area of the surface and depends on the UV cutoff, which regulates the short-distance correlations. The geometrical nature of entanglement-entropy calculation is particularly intriguing when applied to black holes when the entangling surface is the black-hole horizon. I review a variety of aspects of this calculation: the useful mathematical tools such as the geometry of spaces with conical singularities and the heat kernel method, the UV divergences in the entropy and their renormalization, the logarithmic terms in the entanglement entropy in four and six dimensions and their relation to the conformal anomalies. The focus in the review is on the systematic use of the conical singularity method. The relations to other known approaches such as 't Hooft's brick-wall model and the Euclidean path integral in the optical metric are discussed in detail. The puzzling behavior of the entanglement entropy due to fields, which non-minimally couple to gravity, is emphasized. The holographic description of the entanglement entropy of the blackhole horizon is illustrated on the two- and four-dimensional examples. Finally, I examine the possibility to interpret the Bekenstein-Hawking entropy entirely as the entanglement entropy.

  9. On the Boltzmann Equation with Stochastic Kinetic Transport: Global Existence of Renormalized Martingale Solutions

    NASA Astrophysics Data System (ADS)

    Punshon-Smith, Samuel; Smith, Scott

    2018-02-01

    This article studies the Cauchy problem for the Boltzmann equation with stochastic kinetic transport. Under a cut-off assumption on the collision kernel and a coloring hypothesis for the noise coefficients, we prove the global existence of renormalized (in the sense of DiPerna/Lions) martingale solutions to the Boltzmann equation for large initial data with finite mass, energy, and entropy. Our analysis includes a detailed study of weak martingale solutions to a class of linear stochastic kinetic equations. This study includes a criterion for renormalization, the weak closedness of the solution set, and tightness of velocity averages in L¹.

  10. Detection of weak signals in memory thermal baths.

    PubMed

    Jiménez-Aquino, J I; Velasco, R M; Romero-Bastida, M

    2014-11-01

    The nonlinear relaxation time and the statistics of the first passage time distribution in connection with the quasideterministic approach are used to detect weak signals in the decay process of the unstable state of a Brownian particle embedded in memory thermal baths. The study is performed in the overdamped approximation of a generalized Langevin equation characterized by an exponential decay in the friction memory kernel. A detection criterion for each time scale is studied: The first one is referred to as the receiver output, which is given as a function of the nonlinear relaxation time, and the second one is related to the statistics of the first passage time distribution.
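
    A generalized Langevin equation with an exponential friction memory kernel admits a standard Markovian embedding via one auxiliary variable, which makes the model easy to integrate. The sketch below is a minimal version with illustrative parameters; it verifies only equipartition, not the signal-detection statistics studied in the paper:

```python
import numpy as np

def gle_velocity(kT=1.0, m=1.0, gamma=1.0, tau=0.5,
                 dt=0.01, steps=200_000, seed=0):
    """Markovian embedding of the GLE with exponential memory kernel
    K(t) = (gamma/tau) * exp(-t/tau):

        m dv/dt = s
        ds/dt   = -(s + gamma*v)/tau + sqrt(2*kT*gamma)/tau * xi(t)

    Eliminating s reproduces the non-Markovian friction term and a
    random force obeying the fluctuation-dissipation relation
    <F(t)F(0)> = kT*K(t).  Euler-Maruyama integration; returns the
    velocity trajectory.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(steps) * np.sqrt(2.0 * kT * gamma * dt) / tau
    v, s = 0.0, 0.0
    out = np.empty(steps)
    for i in range(steps):
        v += dt * s / m
        s += -dt * (s + gamma * v) / tau + noise[i]
        out[i] = v
    return out
```

    A quick consistency check is that, after equilibration, the velocity variance approaches kT/m regardless of the memory time tau.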

  11. Spectral imaging using consumer-level devices and kernel-based regression.

    PubMed

    Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko

    2016-06-01

    Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using a Nikon D80 RGB camera and a Dell Vostro 2520 laptop screen as a light source. Estimations were performed without spectral characteristics of the devices and by emphasizing simplicity for training sets and estimation model optimization. Spectral and color error images are shown for the estimations using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where in the latter case the median heuristic approach for model optimization and a link function for bounded estimation were evaluated. Results suggest modest requirements for the training set and show that, when small representative training data are used, all estimation models markedly improve accuracy with respect to the ΔE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) relative to the weak training set (Digital ColorChecker SG) case.
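
    The first-degree inhomogeneous polynomial kernel mentioned above is simply k(a, b) = a·b + 1. A minimal kernel-ridge sketch of the camera-response-to-spectrum mapping follows; the synthetic linear data, dimensions, and regularization are illustrative assumptions, not the measured camera and display characteristics:

```python
import numpy as np

def poly_kernel(A, B):
    """First-degree inhomogeneous polynomial kernel k(a, b) = a.b + 1."""
    return A @ B.T + 1.0

def fit_spectral_estimator(rgb, spectra, alpha=1e-6):
    """Kernel ridge map from camera responses to reflectance spectra:
    one regression per wavelength band, sharing a single Gram matrix."""
    K = poly_kernel(rgb, rgb)
    coef = np.linalg.solve(K + alpha * np.eye(len(rgb)), spectra)
    return lambda q: poly_kernel(q, rgb) @ coef

# illustrative synthetic setup: spectra exactly linear in the camera
# response, which the linear (affine) kernel can recover exactly
rng = np.random.default_rng(0)
M = rng.random((3, 31))              # hypothetical rgb -> 31-band mixing
rgb_train = rng.random((50, 3))
spec_train = rgb_train @ M
estimate = fit_spectral_estimator(rgb_train, spec_train)
```

    Because the kernel's feature space contains exactly the affine functions of the response, a spectrum that is truly linear in RGB is recovered to within the ridge regularization.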

  12. Three-body spectrum in a finite volume: The role of cubic symmetry

    DOE PAGES

    Doring, M.; Hammer, H. -W.; Mai, M.; ...

    2018-06-15

    The three-particle quantization condition is partially diagonalized in the center-of-mass frame by using cubic symmetry on the lattice. To this end, instead of spherical harmonics, the kernel of the Bethe-Salpeter equation for particle-dimer scattering is expanded in the basis functions of different irreducible representations of the octahedral group. Such a projection is of particular importance for the three-body problem in the finite volume due to the occurrence of three-body singularities above breakup. Additionally, we study the numerical solution and properties of such a projected quantization condition in a simple model. It is shown that, for large volumes, these solutions allow for an instructive interpretation of the energy eigenvalues in terms of bound and scattering states.

  13. Three-body spectrum in a finite volume: The role of cubic symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doring, M.; Hammer, H. -W.; Mai, M.

    The three-particle quantization condition is partially diagonalized in the center-of-mass frame by using cubic symmetry on the lattice. To this end, instead of spherical harmonics, the kernel of the Bethe-Salpeter equation for particle-dimer scattering is expanded in the basis functions of different irreducible representations of the octahedral group. Such a projection is of particular importance for the three-body problem in the finite volume due to the occurrence of three-body singularities above breakup. Additionally, we study the numerical solution and properties of such a projected quantization condition in a simple model. It is shown that, for large volumes, these solutions allow for an instructive interpretation of the energy eigenvalues in terms of bound and scattering states.

  14. Final-state QED multipole radiation in antenna parton showers

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2017-11-01

    We present a formalism for a fully coherent QED parton shower. The complete multipole structure of photonic radiation is incorporated in a single branching kernel. The regular on-shell 2 → 3 kinematic picture is kept intact by dividing the radiative phase space into sectors, allowing for a definition of the ordering variable that is similar to QCD antenna showers. A modified version of the Sudakov veto algorithm is discussed that increases performance at the cost of the introduction of weighted events. Due to the absence of a soft singularity, the formalism for photon splitting is very similar to the QCD analogue of gluon splitting. However, since no color structure is available to guide the selection of a spectator, a weighted selection procedure from all available spectators is introduced.
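
    For context, the Sudakov veto algorithm referred to above can be sketched generically: evolve downward in the ordering variable using a constant overestimate of the branching rate, and accept each proposed scale with the ratio of the true to the overestimated rate. This is the textbook unweighted version, not the weighted modification of the paper:

```python
import math
import random

def sudakov_veto(t_start, t_min, rate, rate_max):
    """Draw the next branching scale t < t_start distributed as
    rate(t) * exp(-integral_t^t_start rate(s) ds), using only an
    overestimate rate_max >= rate(t) on [t_min, t_start]."""
    t = t_start
    while True:
        # propose the next scale from the constant-rate overestimate
        t += math.log(random.random()) / rate_max
        if t <= t_min:
            return None                  # no emission above the cutoff
        if random.random() < rate(t) / rate_max:
            return t                     # accept: the true rate is restored
```

    The veto step is exactly the thinning of a Poisson process: proposals at rate_max, kept with probability rate(t)/rate_max, reproduce an inhomogeneous Poisson process at rate(t).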

  15. A numerical solution for a variable-order reaction-diffusion model by using fractional derivatives with non-local and non-singular kernel

    NASA Astrophysics Data System (ADS)

    Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Torres, L.; Escobar-Jiménez, R. F.

    2018-02-01

    A reaction-diffusion system can be represented by the Gray-Scott model. The reaction-diffusion dynamics are described by a pair of time- and space-dependent Partial Differential Equations (PDEs). In this paper, a generalization of the Gray-Scott model using variable-order fractional differential equations is proposed. The variable orders were set as smooth functions bounded in (0, 1] and, specifically, the Liouville-Caputo and the Atangana-Baleanu-Caputo fractional derivatives were used to express the time differentiation. In order to find a numerical solution of the proposed model, the finite difference method together with the Adams method was applied. The simulation results show the chaotic behavior of the proposed model when different variable orders are applied.
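
    For reference, the classical (integer-order) Gray-Scott system that the variable-order model generalizes can be integrated with plain explicit finite differences. A 1-D sketch with commonly used illustrative parameters follows; the paper's fractional-derivative discretization is not reproduced here:

```python
import numpy as np

def gray_scott_1d(n=200, steps=2000, dt=1.0, Du=0.16, Dv=0.08,
                  F=0.035, k=0.065):
    """Explicit finite-difference integration of the classical
    Gray-Scott system on a 1-D periodic domain (dx = 1):

        u_t = Du*u_xx - u*v^2 + F*(1 - u)
        v_t = Dv*v_xx + u*v^2 - (F + k)*v
    """
    u = np.ones(n)
    v = np.zeros(n)
    u[n // 2 - 5:n // 2 + 5] = 0.5    # local perturbation seeds patterns
    v[n // 2 - 5:n // 2 + 5] = 0.5
    lap = lambda a: np.roll(a, 1) - 2 * a + np.roll(a, -1)
    for _ in range(steps):
        uvv = u * v * v
        u = u + dt * (Du * lap(u) - uvv + F * (1 - u))
        v = v + dt * (Dv * lap(v) + uvv - (F + k) * v)
    return u, v
```

    With dx = 1 the diffusion stability condition Du*dt/dx² ≤ 1/2 is met, and u stays in (0, 1] while v remains nonnegative.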

  16. Kowalevski's analysis of the swinging Atwood's machine

    NASA Astrophysics Data System (ADS)

    Babelon, O.; Talon, M.; Capdequi Peyranère, M.

    2010-02-01

    We study the Kowalevski expansions near singularities of the swinging Atwood's machine. We show that there is an infinite number of mass ratios M/m where such expansions exist with the maximal number of arbitrary constants. These expansions are of the so-called weak Painlevé type. However, in view of these expansions, it is not possible to distinguish between integrable and nonintegrable cases.

  17. Scanning the parameter space of collapsing rotating thin shells

    NASA Astrophysics Data System (ADS)

    Rocha, Jorge V.; Santarelli, Raphael

    2018-06-01

    We present results of a comprehensive study of collapsing and bouncing thin shells with rotation, framing it in the context of the weak cosmic censorship conjecture. The analysis is based on a formalism developed specifically for higher odd dimensions that is able to describe the dynamics of collapsing rotating shells exactly. We analyse and classify a plethora of shell trajectories in asymptotically flat spacetimes. The parameters varied include the shell’s mass and angular momentum, its radial velocity at infinity, the (linear) equation-of-state parameter and the spacetime dimensionality. We find that plunges of rotating shells into black holes never produce naked singularities, as long as the matter shell obeys the weak energy condition, and so respects cosmic censorship. This applies to collapses of dust shells starting from rest or with a finite velocity at infinity. Not even shells with a negative isotropic pressure component (i.e. tension) lead to the formation of naked singularities, as long as the weak energy condition is satisfied. Endowing the shells with a positive isotropic pressure component allows for the existence of bouncing trajectories satisfying the dominant energy condition and fully contained outside rotating black holes. Otherwise any turning point occurs always inside the horizon. These results are based on strong numerical evidence from scans of numerous sections in the large parameter space available to these collapsing shells. The generalisation of the radial equation of motion to a polytropic equation-of-state for the matter shell is also included in an appendix.

  18. X-ray edge singularity in resonant inelastic x-ray scattering (RIXS)

    NASA Astrophysics Data System (ADS)

    Markiewicz, Robert; Rehr, John; Bansil, Arun

    2013-03-01

    We develop a lattice model based on the theory of Mahan, Nozières, and de Dominicis for x-ray absorption to explore the effect of the core hole on the RIXS cross section. The dominant part of the spectrum can be described in terms of the dynamic structure function S (q , ω) dressed by matrix element effects, but there is also a weak background associated with multi-electron-hole pair excitations. The model reproduces the decomposition of the RIXS spectrum into well- and poorly-screened components. An edge singularity arises at the threshold of both components. Fairly large lattice sizes are required to describe the continuum limit. Supported by DOE Grant DE-FG02-07ER46352 and facilitated by the DOE CMCSN, under grant number DE-SC0007091.

  19. MHD memes

    NASA Astrophysics Data System (ADS)

    Dewar, R. L.; Mills, R.; Hole, M. J.

    2009-05-01

    The celebration of Allan Kaufman's 80th birthday was an occasion to reflect on a career that has stimulated the mutual exchange of ideas (or memes in the terminology of Richard Dawkins) between many researchers. This paper will revisit a meme Allan encountered in his early career in magnetohydrodynamics, the continuation of a magnetohydrodynamic mode through a singularity, and will also mention other problems where Allan's work has had a powerful cross-fertilizing effect in plasma physics and other areas of physics and mathematics. To resolve the continuation problem we regularize the Newcomb equation, solve it in terms of Legendre functions of imaginary argument, and define the small weak solutions of the Newcomb equation as generalized functions in the manner of Lighthill, i.e. via a limiting sequence of analytic functions that connect smoothly across the singularity.

  20. MUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho

    2017-11-01

    Motivated by the biomedical engineering used in early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for location searching of small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of the Born approximation or physical factorization. We analyzed cases in which the anomaly is small or large relative to the wavelength, and linked the structure of the left-singular vectors to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
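
    The MUSIC step itself is a small amount of linear algebra: take the SVD of the MSR matrix, keep its noise subspace, and image with the reciprocal of the projected test vector. The sketch below uses a simplified 2-D steering-vector model and synthetic point anomalies (the antenna layout, wavenumber, and source positions are illustrative assumptions, not the S-parameter measurements of the paper):

```python
import numpy as np

def green(points, r, k=2 * np.pi):
    """Simplified 2-D Helmholtz-style steering vector (hypothetical
    model): g_n = exp(i*k*|x_n - r|) / sqrt(|x_n - r|)."""
    d = np.linalg.norm(points - r, axis=1)
    return np.exp(1j * k * d) / np.sqrt(d)

def music_image(points, K, grid, n_sources):
    """MUSIC imaging function from the MSR matrix K: project test
    steering vectors onto the noise subspace of K."""
    U, s, _ = np.linalg.svd(K)
    Un = U[:, n_sources:]                      # noise subspace
    img = []
    for r in grid:
        g = green(points, r)
        g = g / np.linalg.norm(g)
        img.append(1.0 / max(np.linalg.norm(Un.conj().T @ g), 1e-12))
    return np.array(img)

# synthetic experiment: 16 antennas on a circle, two point anomalies;
# Born approximation gives the symmetric MSR matrix K = sum_j g_j g_j^T
ant = np.array([[5 * np.cos(a), 5 * np.sin(a)]
                for a in np.linspace(0, 2 * np.pi, 16, endpoint=False)])
srcs = [np.array([0.4, -0.2]), np.array([-0.7, 0.6])]
K = sum(np.outer(green(ant, r), green(ant, r)) for r in srcs)
```

    Scanning a line through one anomaly, the imaging function peaks sharply at the true location, since steering vectors of the true sources lie in the signal subspace and are annihilated by the noise-subspace projector.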

  1. Numerical Tests of the Cosmic Censorship Conjecture with Collisionless Matter Collapse

    NASA Astrophysics Data System (ADS)

    Okounkova, Maria; Hemberger, Daniel; Scheel, Mark

    2016-03-01

    We present our results of numerical tests of the weak cosmic censorship conjecture (CCC), which states that generically, singularities of gravitational collapse are hidden within black holes, and the hoop conjecture, which states that black holes form when and only when a mass M gets compacted into a region whose circumference in every direction is C ≤ 4πM. We built a smooth particle methods module in SpEC, the Spectral Einstein Code, to simultaneously evolve spacetime and collisionless matter configurations. We monitor the curvature invariant R_{abcd}R^{abcd} for singularity formation, and probe for the existence of apparent horizons. We include in our simulations the prolate spheroid configurations considered in Shapiro and Teukolsky's 1991 numerical study of the CCC. This research was partially supported by the Dominic Orr Fellowship at Caltech.

  2. Optimized formulas for the gravitational field of a tesseroid

    NASA Astrophysics Data System (ADS)

    Grombein, Thomas; Seitz, Kurt; Heck, Bernhard

    2013-07-01

    Various tasks in geodesy, geophysics, and related geosciences require precise information on the impact of mass distributions on gravity field-related quantities, such as the gravitational potential and its partial derivatives. Using forward modeling based on Newton's integral, mass distributions are generally decomposed into regular elementary bodies. In classical approaches, prisms or point mass approximations are mostly utilized. Considering the effect of the sphericity of the Earth, alternative mass modeling methods based on tesseroid bodies (spherical prisms) should be taken into account, particularly in regional and global applications. Expressions for the gravitational field of a point mass are relatively simple when formulated in Cartesian coordinates. In the case of integrating over a tesseroid volume bounded by geocentric spherical coordinates, it will be shown that it is also beneficial to represent the integral kernel in terms of Cartesian coordinates. This considerably simplifies the determination of the tesseroid's potential derivatives in comparison with previously published methodologies that make use of integral kernels expressed in spherical coordinates. Based on this idea, optimized formulas for the gravitational potential of a homogeneous tesseroid and its derivatives up to second-order are elaborated in this paper. These new formulas do not suffer from the polar singularity of the spherical coordinate system and can, therefore, be evaluated for any position on the globe. Since integrals over tesseroid volumes cannot be solved analytically, the numerical evaluation is achieved by means of expanding the integral kernel in a Taylor series with fourth-order error in the spatial coordinates of the integration point. As the structure of the Cartesian integral kernel is substantially simplified, Taylor coefficients can be represented in a compact and computationally attractive form. 
Thus, the use of the optimized tesseroid formulas particularly benefits from a significant decrease in computation time by about 45 % compared to previously used algorithms. In order to show the computational efficiency and to validate the mathematical derivations, the new tesseroid formulas are applied to two realistic numerical experiments and are compared to previously published tesseroid methods and the conventional prism approach.
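
The benefit of a Cartesian integral kernel can be illustrated with a zeroth-order sketch: approximating the tesseroid by a point mass at its geometric center, with all positions expressed in Cartesian coordinates. This is a hedged illustration only; the function names and the point-mass shortcut are ours, not the paper's optimized Taylor-series formulas.

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def sph_to_cart(r, lat, lon):
    """Geocentric spherical coordinates (radians) -> Cartesian."""
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def tesseroid_potential_pointmass(rho, r1, r2, lat1, lat2, lon1, lon2, p):
    """Zeroth-order approximation of a tesseroid's gravitational potential
    at Cartesian point p: the total mass is concentrated at the geometric
    center.  The paper's method refines this with Taylor-series terms."""
    # exact volume of the spherical prism
    vol = (r2**3 - r1**3) / 3.0 * (math.sin(lat2) - math.sin(lat1)) * (lon2 - lon1)
    center = sph_to_cart(0.5 * (r1 + r2), 0.5 * (lat1 + lat2), 0.5 * (lon1 + lon2))
    return G * rho * vol / math.dist(p, center)

# example: a crustal-density tesseroid evaluated at two altitudes above it
tess = (2670.0, 6371e3, 6381e3, 0.0, 0.01, 0.0, 0.01)
p_near = tesseroid_potential_pointmass(*tess, sph_to_cart(6500e3, 0.005, 0.005))
p_far = tesseroid_potential_pointmass(*tess, sph_to_cart(7000e3, 0.005, 0.005))
```

Because both the expansion point and the computation point are Cartesian, the evaluation involves no polar singularity of the spherical coordinate system.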

  3. Comptonization in Ultra-Strong Magnetic Fields: Numerical Solution to the Radiative Transfer Problem

    NASA Technical Reports Server (NTRS)

    Ceccobello, C.; Farinelli, R.; Titarchuk, L.

    2014-01-01

    We consider the radiative transfer problem in a plane-parallel slab of thermal electrons in the presence of an ultra-strong magnetic field (B ≳ B_c ≈ 4.4 × 10^13 G). Under these conditions, the magnetic field behaves like a birefringent medium for the propagating photons, and the electromagnetic radiation is split into two polarization modes, ordinary and extraordinary, that have different cross-sections. When the optical depth of the slab is large, the ordinary-mode photons are strongly Comptonized and the photon field is dominated by an isotropic component. Aims. The radiative transfer problem in strong magnetic fields presents many mathematical issues, and analytical or numerical solutions can be obtained only under certain approximations. We investigate this problem both from the analytical and numerical points of view, provide a test of the previous analytical estimates, and extend these results with numerical techniques. Methods. We consider here the case of low-temperature black-body photons propagating in a sub-relativistic temperature plasma, which allows us to deal with a semi-Fokker-Planck approximation of the radiative transfer equation. The problem can then be treated with the variable separation method, and we use a numerical technique to find solutions to the eigenvalue problem in the case of a singular kernel of the space operator. The singularity of the space kernel is the result of the strong angular dependence of the electron cross-section in the presence of a strong magnetic field. Results. We provide the numerical solution obtained for eigenvalues and eigenfunctions of the space operator, and the emerging Comptonization spectrum of the ordinary-mode photons for any eigenvalue of the space equation and for energies well below the cyclotron energy, which is of the order of MeV for the magnetic field strengths considered here. Conclusions. 
We derived the specific intensity of the ordinary photons, under the approximation of large angle and large optical depth. These assumptions allow the equation to be treated using a diffusion-like approximation.

  4. Validation of Born Traveltime Kernels

    NASA Astrophysics Data System (ADS)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job of predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective, even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of the traveltime compare to various theoretical predictions in a given regime.
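
As a minimal illustration of the cross-correlated traveltime measurement that such kernels are validated against, the delay between two waveforms can be estimated from the lag maximizing their discrete cross-correlation. This is a hedged sketch in plain Python; all names are ours, not the paper's 3-D pseudo-spectral code.

```python
import math

def gaussian_pulse(t, t0, width):
    return math.exp(-((t - t0) / width) ** 2)

def xcorr_delay(sig_a, sig_b, dt):
    """Return the lag (in seconds) of sig_b relative to sig_a that
    maximizes the discrete cross-correlation of the two signals."""
    n = len(sig_a)
    best_lag, best_val = 0, -float("inf")
    for lag in range(-n + 1, n):
        v = sum(sig_a[i] * sig_b[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if v > best_val:
            best_val, best_lag = v, lag
    return best_lag * dt

dt = 0.01
times = [i * dt for i in range(800)]
ref = [gaussian_pulse(t, 3.0, 0.2) for t in times]    # reference arrival
late = [gaussian_pulse(t, 3.5, 0.2) for t in times]   # arrival delayed by 0.5 s
measured_delay = xcorr_delay(ref, late, dt)
```

For the synthetic pulses above, the recovered delay matches the imposed 0.5 s shift to within one sample.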

  5. New numerical method for radiation heat transfer in nonhomogeneous participating media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, J.R.; Tan, Zhiqiang

    A new numerical method, which solves the exact integral equations of distance-angular integration form for radiation transfer, is introduced in this paper. By constructing and prestoring the numerical integral formulas for the distance integral for appropriate kernel functions, this method eliminates the time-consuming evaluations of the kernels of the space integrals in the formal computations. In addition, when the number of elements in the system is large, the resulting coefficient matrix is quite sparse. Thus, either considerable time or much storage can be saved. A weakness of the method is discussed, and some remedies are suggested. As illustrations, some one-dimensional and two-dimensional problems in both homogeneous and inhomogeneous emitting, absorbing, and linear anisotropic scattering media are studied. Some results are compared with available data. 13 refs.
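
The prestoring idea can be sketched as follows: tabulate the distance integral of the kernel once, then replace every in-loop kernel evaluation by a table lookup. This is a hedged sketch using a simple exp(-τ) attenuation stand-in, not the paper's actual radiative transfer kernels.

```python
import math

def kernel(tau):
    """Stand-in attenuation kernel; the method in the paper prestores the
    distance integrals of the appropriate transfer kernel functions."""
    return math.exp(-tau)

# Prestore K(tau) = integral of kernel(s) ds from 0 to tau on a uniform grid
# (trapezoid rule), so the formal computation never re-evaluates the kernel.
N, TAU_MAX = 1000, 10.0
h = TAU_MAX / N
table = [0.0]
for i in range(N):
    table.append(table[-1] + 0.5 * h * (kernel(i * h) + kernel((i + 1) * h)))

def kernel_integral(tau):
    """Lookup with linear interpolation into the prestored table."""
    x = min(max(tau, 0.0), TAU_MAX) / h
    i = min(int(x), N - 1)
    frac = x - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac
```

For exp(-τ) the exact integral is 1 - exp(-τ), which the lookup reproduces to several digits while costing only an index computation per use.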

  6. Discontinuous Galerkin Finite Element Method for Parabolic Problems

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.

    2004-01-01

    In this paper, we develop a temporal discretization scheme and a corresponding spatial discretization, based upon the assumption of a certain weak singularity of ||u_t(t)||_{L_2(Ω)} = ||u_t(t)||_2, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both the time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.
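
Weak temporal singularities of this kind are commonly resolved with graded time meshes that cluster steps near t = 0; the sketch below shows this standard device, which is an illustration of the idea and not necessarily the authors' exact discretization.

```python
def graded_mesh(T, N, gamma):
    """Graded time grid t_n = T * (n / N)**gamma.  For gamma > 1 the steps
    cluster near t = 0, where the norm of u_t may blow up, so each step
    carries comparable error despite the singularity."""
    return [T * (n / N) ** gamma for n in range(N + 1)]

mesh = graded_mesh(1.0, 10, 2.0)
steps = [b - a for a, b in zip(mesh, mesh[1:])]
```

With grading exponent gamma = 2 the first step is T/100 while the last is 19T/100, concentrating resolution where the solution is least regular.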

  7. The crack problem for a nonhomogeneous plane

    NASA Technical Reports Server (NTRS)

    Delale, F.; Erdogan, F.

    1982-01-01

    The plane elasticity problem for a nonhomogeneous medium containing a crack is considered. It is assumed that the Poisson's ratio of the medium is constant and the Young's modulus E varies exponentially with the coordinate parallel to the crack. First the half-plane problem is formulated and the solution is given for arbitrary tractions along the boundary. Then the integral equation for the crack problem is derived. It is shown that the integral equation having the derivative of the crack surface displacement as the density function has a simple Cauchy-type kernel. Hence, its solution and the stresses around the crack tips have the conventional square-root singularity. The solution is given for various loading conditions. The results show that the effect of the Poisson's ratio, and consequently that of the thickness constraint, on the stress intensity factors is rather negligible.
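
Integral equations with a simple Cauchy-type kernel are classically discretized by Gauss–Chebyshev collocation (the Erdogan–Gupta scheme). The sketch below treats only the dominant equation (1/π)∫ g(t)/(√(1-t²)(t-x)) dt = f(x) with a zero-net-dislocation closure condition; it is a hedged illustration of the standard method, not the paper's full crack equation.

```python
import math
import numpy as np

def solve_cauchy_sie(f, n=16):
    """Gauss-Chebyshev collocation for the dominant Cauchy singular
    integral equation (1/pi) * PV-int g(t) / (sqrt(1-t^2) (t-x)) dt = f(x),
    with density g at the Chebyshev nodes and closure sum(g)/n = 0."""
    t = np.array([math.cos(math.pi * (2 * k - 1) / (2 * n)) for k in range(1, n + 1)])
    x = np.array([math.cos(math.pi * r / n) for r in range(1, n)])
    A = 1.0 / (n * (t[None, :] - x[:, None]))   # (n-1) collocation rows
    A = np.vstack([A, np.ones((1, n)) / n])     # closure row: (1/n) sum g = 0
    rhs = np.append(np.array([f(xi) for xi in x]), 0.0)
    return t, np.linalg.solve(A, rhs)

t, g = solve_cauchy_sie(lambda x: 1.0)
```

With f ≡ 1 the exact density is g(t) = t (the finite Hilbert transform of T₁/√(1-t²) is πU₀ = π), which the collocation reproduces to machine precision.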

  8. The crack problem for a nonhomogeneous plane

    NASA Technical Reports Server (NTRS)

    Delale, F.; Erdogan, F.

    1983-01-01

    The plane elasticity problem for a nonhomogeneous medium containing a crack is considered. It is assumed that the Poisson's ratio of the medium is constant and the Young's modulus E varies exponentially with the coordinate parallel to the crack. First the half-plane problem is formulated and the solution is given for arbitrary tractions along the boundary. Then the integral equation for the crack problem is derived. It is shown that the integral equation having the derivative of the crack surface displacement as the density function has a simple Cauchy-type kernel. Hence, its solution and the stresses around the crack tips have the conventional square-root singularity. The solution is given for various loading conditions. The results show that the effect of the Poisson's ratio, and consequently that of the thickness constraint, on the stress intensity factors is rather negligible.

  9. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Qiang, E-mail: qd2125@columbia.edu; Yang, Jiang, E-mail: jyanghkbu@gmail.com

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space, so that the main computational challenge is the accurate and fast evaluation of their eigenvalues or Fourier symbols, consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and as solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high-order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables with the fourth-order exponential time differencing Runge–Kutta temporal discretization to offer high-order approximations of some nonlocal gradient dynamics, including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
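
The diagonalization that drives such methods can be sketched for a smooth periodic stand-in kernel (the singular fractional kernels of the paper are what require the hybrid series/ODE machinery): the operator Lu = γ * u - (∫γ) u is diagonal in Fourier space, so each mode evolves independently under u_t = Lu.

```python
import numpy as np

N = 128
x = 2 * np.pi * np.arange(N) / N
dx = 2 * np.pi / N

# smooth, even, positive periodic kernel -- a hypothetical stand-in for the
# singular fractional kernels treated in the paper
gamma = np.exp(np.cos(x) - 1.0)

# Fourier symbols lambda_k = gamma_hat(k) - gamma_hat(0): the nonlocal
# operator L u = gamma * u - (integral of gamma) * u is diagonalized.
gamma_hat = np.fft.fft(gamma) * dx
symbols = gamma_hat - gamma_hat[0]

def nonlocal_diffusion_step(u, dt):
    """Advance u_t = L u exactly mode-by-mode over a time dt."""
    u_hat = np.fft.fft(u)
    return np.real(np.fft.ifft(np.exp(symbols * dt) * u_hat))

u0 = np.cos(x)
u1 = nonlocal_diffusion_step(u0, 1.0)
```

The k = 0 symbol vanishes (mass conservation) and all other symbols have negative real part, so every nonconstant mode decays, mimicking diffusion.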

  10. FastSKAT: Sequence kernel association tests for very large sets of markers.

    PubMed

    Lumley, Thomas; Brody, Jennifer; Peloso, Gina; Morrison, Alanna; Rice, Kenneth

    2018-06-22

    The sequence kernel association test (SKAT) is widely used to test for associations between a phenotype and a set of genetic variants that are usually rare. Evaluating tail probabilities or quantiles of the null distribution for SKAT requires computing the eigenvalues of a matrix related to the genotype covariance between markers. Extracting the full set of eigenvalues of this matrix (an n×n matrix, for n subjects) has computational complexity proportional to n^3. As SKAT is often used when n > 10^4, this step becomes a major bottleneck in its use in practice. We therefore propose fastSKAT, a new computationally inexpensive but accurate approximation to the tail probabilities, in which the k largest eigenvalues of a weighted genotype covariance matrix or the largest singular values of a weighted genotype matrix are extracted, and a single term based on the Satterthwaite approximation is used for the remaining eigenvalues. While the method is not particularly sensitive to the choice of k, we also describe how to choose its value, and show how fastSKAT can automatically alert users to the rare cases where the choice may affect results. As well as providing a faster implementation of SKAT, the new method also enables entirely new applications of SKAT that were not possible before; we give examples grouping variants by topologically associating domains, and comparing chromosome-wide association by class of histone marker.
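
The core idea (k leading eigenvalues treated exactly, the remainder collapsed into one Satterthwaite-matched chi-square) can be sketched as below. This is a hedged Monte Carlo illustration of the decomposition, not the paper's analytic tail evaluation, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def fastskat_tail(lambdas, q, k=10, nsim=200_000):
    """Sketch of the fastSKAT decomposition: the null statistic
    Q = sum_i lambda_i * chi2_1 is represented by its k largest eigenvalues
    plus a single Satterthwaite-matched chi-square for the remainder, and
    P(Q > q) is then estimated by Monte Carlo."""
    lam = np.sort(np.asarray(lambdas, dtype=float))[::-1]
    top, rest = lam[:k], lam[k:]
    # Satterthwaite: scale * chi2_df matching the remainder's mean and variance
    mean_r, var_r = rest.sum(), 2.0 * (rest ** 2).sum()
    if var_r > 0:
        scale = var_r / (2.0 * mean_r)
        df = mean_r / scale
        remainder = scale * rng.chisquare(df, size=nsim)
    else:
        remainder = 0.0
    q_sim = rng.chisquare(1, size=(nsim, len(top))) @ top + remainder
    return float(np.mean(q_sim > q))

# sanity check: with 50 unit eigenvalues Q is exactly chi-square(50)
p_mod = fastskat_tail(np.ones(50), 40.0)
p_ext = fastskat_tail(np.ones(50), 70.0)
```

When all eigenvalues are equal the Satterthwaite term is exact, so the approximation reproduces the chi-square(50) tail; in general only the small trailing eigenvalues are absorbed, which is what keeps the error negligible.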

  11. Congested Aggregation via Newtonian Interaction

    NASA Astrophysics Data System (ADS)

    Craig, Katy; Kim, Inwon; Yao, Yao

    2018-01-01

    We consider a congested aggregation model that describes the evolution of a density through the competing effects of nonlocal Newtonian attraction and a hard height constraint. This provides a counterpoint to existing literature on repulsive-attractive nonlocal interaction models, where the repulsive effects instead arise from an interaction kernel or the addition of diffusion. We formulate our model as the Wasserstein gradient flow of an interaction energy, with a penalization to enforce the constraint on the height of the density. From this perspective, the problem can be seen as a singular limit of the Keller-Segel equation with degenerate diffusion. Two key properties distinguish our problem from previous work on height constrained equations: nonconvexity of the interaction kernel (which places the model outside the scope of classical gradient flow theory) and nonlocal dependence of the velocity field on the density (which causes the problem to lack a comparison principle). To overcome these obstacles, we combine recent results on gradient flows of nonconvex energies with viscosity solution theory. We characterize the dynamics of patch solutions in terms of a Hele-Shaw type free boundary problem and, using this characterization, show that in two dimensions patch solutions converge to a characteristic function of a disk in the long-time limit, with an explicit rate on the decay of the energy. We believe that a key contribution of the present work is our blended approach, combining energy methods with viscosity solution theory.

  12. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better-conditioned basis in the same RBF space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite-type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
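
The ill-conditioning that motivates RBF-RA is easy to exhibit: as the shape parameter ε shrinks, the Gaussian interpolation matrix becomes numerically rank-deficient. A minimal demonstration (our own sketch, not part of the paper's algorithm):

```python
import numpy as np

def rbf_interp_matrix(pts, eps):
    """Gaussian-RBF interpolation matrix A_ij = exp(-(eps * r_ij)^2).
    RBF-Direct solves A c = f; as eps -> 0 the columns become nearly
    linearly dependent and the solve is hopelessly ill-conditioned."""
    r2 = (pts[:, None] - pts[None, :]) ** 2
    return np.exp(-(eps ** 2) * r2)

pts = np.linspace(0.0, 1.0, 8)
cond_moderate = np.linalg.cond(rbf_interp_matrix(pts, 8.0))
cond_flat = np.linalg.cond(rbf_interp_matrix(pts, 0.1))
```

Even with only eight nodes, the condition number jumps by many orders of magnitude between a moderate and a flat shape parameter, which is why the flat regime must be reached by analytic continuation rather than by direct solves.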

  13. Higher criticism approach to detect rare variants using whole genome sequencing data

    PubMed Central

    2014-01-01

    Because of the low statistical power of single-variant tests for whole genome sequencing (WGS) data, association tests for variant groups are a key approach for genetic mapping. To address the sparse and weak genetic effects to be detected, the higher criticism (HC) approach has been proposed and theoretically proven optimal for detecting sparse and weak genetic effects. Here we develop a strategy to apply the HC approach to WGS data, in which rare variants form the majority. By using Genetic Analysis Workshop 18 "dose" genetic data with simulated phenotypes, we assess the performance of HC under a variety of strategies for grouping variants and collapsing rare variants. The HC approach is compared with the minimal p-value method and the sequence kernel association test. The results show that the HC approach is preferred for detecting weak genetic effects. PMID:25519367
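
The higher criticism statistic itself is short to state: it is the maximal standardized exceedance of the ordered p-values over their uniform expectation (Donoho–Jin form). A minimal sketch, with our own naming:

```python
import math

def higher_criticism(pvalues, alpha0=0.5):
    """Donoho-Jin higher criticism statistic over the ordered p-values:
    HC = max_i sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i))),
    restricted to p_(i) < alpha0 as in the HC+ variant."""
    p = sorted(pvalues)
    n = len(p)
    hc = -float("inf")
    for i, pi in enumerate(p, start=1):
        if pi <= 1e-12 or pi >= alpha0:
            continue
        hc = max(hc, math.sqrt(n) * (i / n - pi) / math.sqrt(pi * (1 - pi)))
    return hc

# a near-uniform p-value set versus one with a few sparse, strong signals
null_p = [(i + 1) / 101 for i in range(100)]
spike_p = [1e-4] * 5 + [(i + 1) / 101 for i in range(95)]
hc_null = higher_criticism(null_p)
hc_spike = higher_criticism(spike_p)
```

A handful of very small p-values inflates HC dramatically, which is exactly the sparse-and-weak regime the test is optimized for.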

  14. Decomposition of algebraic sets and applications to weak centers of cubic systems

    NASA Astrophysics Data System (ADS)

    Chen, Xingwu; Zhang, Weinian

    2009-10-01

    There are many methods, such as Gröbner bases, characteristic sets and resultants, for computing the algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and some unnecessary factors in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulting from period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.
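
The building block of the decomposition, the Sylvester resultant, is simple to compute: it is the determinant of the Sylvester matrix of two polynomials and vanishes exactly when they share a common root. A hedged numeric sketch (the paper works symbolically; names are ours):

```python
import numpy as np

def sylvester_matrix(p, q):
    """Sylvester matrix of polynomials p and q given as coefficient lists
    (highest degree first); its determinant is the resultant Res(p, q),
    which is zero iff p and q have a common root."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                   # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                   # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

# (x-1)(x-2) and (x-1)(x-3) share the root x = 1 -> resultant is zero
res_common = np.linalg.det(sylvester_matrix([1, -3, 2], [1, -4, 3]))
# (x-1)(x-2) and (x-3)(x-4) share no root -> resultant is nonzero
res_none = np.linalg.det(sylvester_matrix([1, -3, 2], [1, -7, 12]))
```

Eliminating one variable at a time with such resultants is what produces the stratum-by-stratum decomposition into constructible sets.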

  15. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
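
The kernel estimate at the heart of the estimator is a Nadaraya–Watson smoother of a binary indicator. A hedged sketch of that ingredient alone (the inverse-probability weighting and survival estimation built on top of it are not reproduced; all names are ours):

```python
import math
import random

def nw_kernel_estimate(x0, xs, ys, bandwidth):
    """Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel.
    In the article's setting Y is the non-missingness indicator of the
    censoring status, and this estimate feeds the inverse-probability
    weights of the survival estimator."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

random.seed(1)
xs = [random.uniform(0.0, 1.0) for _ in range(2000)]
# hypothetical true non-missingness probability p(x) = 0.5 + 0.4 * x
ys = [1 if random.random() < 0.5 + 0.4 * x else 0 for x in xs]
est = nw_kernel_estimate(0.5, xs, ys, bandwidth=0.1)
```

At x = 0.5 the true probability is 0.7, and the kernel estimate recovers it up to smoothing bias and sampling noise; the bandwidth trades these off, as in the article's mean-squared-error analysis.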

  16. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  17. Strong gravitational lensing by a Konoplya-Zhidenko rotating non-Kerr compact object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shangyun; Chen, Songbai; Jing, Jiliang, E-mail: shangyun_wang@163.com, E-mail: csb3752@hunnu.edu.cn, E-mail: jljing@hunnu.edu.cn

    Konoplya and Zhidenko have recently proposed a rotating non-Kerr black hole metric beyond General Relativity and made an estimate of the possible deviations from the Kerr solution with the data of GW150914. We here study strong gravitational lensing in such a rotating non-Kerr spacetime with an extra deformation parameter. We find that the condition for the existence of horizons is not inconsistent with that of the marginally circular photon orbit. Moreover, the deflection angle of the light ray near the weakly naked singularity covered by the marginally circular orbit diverges logarithmically in the strong-field limit. In the case of the completely naked singularity, the deflection angle near the singularity tends to a certain finite value, whose sign depends on the rotation parameter and the deformation parameter. These properties of strong gravitational lensing are different from those in the Johannsen-Psaltis rotating non-Kerr spacetime and in the Janis-Newman-Winicour spacetime. Modeling the supermassive central object of the Milky Way Galaxy as a Konoplya-Zhidenko rotating non-Kerr compact object, we estimate the numerical values of observables for strong gravitational lensing, including the time delay between two relativistic images.

  18. Scattering of elastic waves from thin shapes in three dimensions using the composite boundary integral equation formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Rizzo, F.J.

    1997-08-01

    In this paper, the composite boundary integral equation (BIE) formulation is applied to scattering of elastic waves from thin shapes with small but finite thickness (open cracks or thin voids, thin inclusions, thin-layer interfaces, etc.), which are modeled with two surfaces. This composite BIE formulation, which is an extension of Burton and Miller's formulation for acoustic waves, uses a linear combination of the conventional BIE and the hypersingular BIE. For thin shapes, the conventional BIE, as well as the hypersingular BIE, will degenerate (or nearly degenerate) if they are applied individually on the two surfaces. The composite BIE formulation, however, will not degenerate for such problems, as demonstrated in this paper. Nearly singular and hypersingular integrals, which arise in problems involving thin shapes modeled with two surfaces, are transformed into sums of weakly singular integrals and nonsingular line integrals. Thus, no finer mesh is needed to compute these nearly singular integrals. Numerical examples of elastic waves scattered from penny-shaped cracks with varying openings are presented to demonstrate the effectiveness of the composite BIE formulation. © 1997 Acoustical Society of America.

  19. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  20. Mechanics of finite cracks in dissimilar anisotropic elastic media considering interfacial elasticity

    DOE PAGES

    Juan, Pierre -Alexandre; Dingreville, Remi

    2016-10-31

    Interfacial crack fields and singularities in bimaterial interfaces (i.e., grain boundaries or dissimilar materials interfaces) are considered through a general formulation for two-dimensional (2-D) anisotropic elasticity while accounting for the interfacial structure by means of an interfacial elasticity paradigm. The interfacial elasticity formulation introduces boundary conditions that are effectively equivalent to those for a weakly bounded interface. This formalism considers the 2-D crack-tip elastic fields using complex variable techniques. While the consideration of the interfacial elasticity does not affect the order of the singularity, it modifies the oscillatory effects associated with problems involving interface cracks. Constructive or destructive “interferences” are directly affected by the interface structure and its elastic response. Furthermore, this general formulation provides an insight on the physical significance and the obvious coupling between the interface structure and the associated mechanical fields in the vicinity of the crack tip.

  1. Mechanics of finite cracks in dissimilar anisotropic elastic media considering interfacial elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juan, Pierre -Alexandre; Dingreville, Remi

    Interfacial crack fields and singularities in bimaterial interfaces (i.e., grain boundaries or dissimilar materials interfaces) are considered through a general formulation for two-dimensional (2-D) anisotropic elasticity while accounting for the interfacial structure by means of an interfacial elasticity paradigm. The interfacial elasticity formulation introduces boundary conditions that are effectively equivalent to those for a weakly bounded interface. This formalism considers the 2-D crack-tip elastic fields using complex variable techniques. While the consideration of the interfacial elasticity does not affect the order of the singularity, it modifies the oscillatory effects associated with problems involving interface cracks. Constructive or destructive “interferences” are directly affected by the interface structure and its elastic response. Furthermore, this general formulation provides an insight on the physical significance and the obvious coupling between the interface structure and the associated mechanical fields in the vicinity of the crack tip.

  2. Testosterone and androstanediol glucuronide among men in NHANES III.

    PubMed

    Duan, Chuan Wei; Xu, Lin

    2018-03-09

    Most androgen replacement therapies are based on serum testosterone alone, without measurement of total androgen activity. Whether those with low testosterone also have low levels of androgen activity is largely unknown. We hence examined the association between testosterone and androstanediol glucuronide (AG), a reliable measure of androgen activity, in a nationally representative sample of US men. Cross-sectional analysis was based on 1493 men from the Third National Health and Nutrition Examination Survey (NHANES III) conducted from 1988 to 1991. Serum testosterone and AG were measured by immunoassay. Kernel density estimation was used to estimate the density of serum AG concentrations by quartiles of testosterone. Testosterone was weakly and positively correlated with AG (correlation coefficient = 0.18). The kernel density estimates show that the distributions are quite similar across the quartiles of testosterone. After adjustment for age, the distributions of AG in quartiles of testosterone did not change. The correlation between testosterone and AG was stronger in men with younger age, lower body mass index, non-smoking status and good self-rated health. Serum testosterone is weakly correlated with total androgen activity, and the correlation is even weaker for those with poor self-rated health. Our results suggest that measurement of total androgen activity in addition to testosterone is necessary in clinical practice, especially before administration of androgen replacement therapy.

  3. Higher-order phase transitions on financial markets

    NASA Astrophysics Data System (ADS)

    Kasprzak, A.; Kutner, R.; Perelló, J.; Masoliver, J.

    2010-08-01

    Statistical and thermodynamic properties of the anomalous multifractal structure of random interevent (or intertransaction) times were thoroughly studied by using the extended continuous-time random walk (CTRW) formalism of Montroll, Weiss, Scher, and Lax. Although this formalism is quite general (and can be applied to any interhuman communication with nontrivial priority), we consider it in the context of a financial market, where heterogeneous agent activities can occur within a wide spectrum of time scales. As the main general consequence, we found (by additionally using the saddle-point approximation) the scaling or power-dependent form of the partition function, Z(q'). It diverges for any negative scaling power q' (which justifies the name anomalous), while for positive ones it shows scaling with the general exponent τ(q'). This exponent is a nonanalytic (singular) or noninteger power of q', which is one of the pillars of higher-order phase transitions. In the definition of the partition function we used the pausing-time distribution (PTD) as the central quantity; it takes the form of a convolution (or superstatistics, used, e.g., to describe turbulence as well as financial markets). Its integral kernel is given by the stretched exponential distribution (often used in disordered systems). This kernel extends both the exponential distribution assumed in the original version of the CTRW formalism (for description of the transient photocurrent measured in amorphous glassy material) and the Gaussian one sometimes used in this context (e.g., for diffusion of hydrogen in amorphous metals or for aging effects in glasses). Our most important finding is the third- and higher-order phase transitions, which can be roughly interpreted as transitions between the phase where high-frequency trading is most visible and the phase defined by low-frequency trading. 
The specific order of the phase transition directly depends upon the shape exponent α defining the stretched exponential integral kernel. On this basis a simple practical hint for investors was formulated.
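
For a waiting time with stretched-exponential survival function exp(-t^α), the moments have the closed form E[T^q] = Γ(1 + q/α), which is what makes the shape exponent α control the scaling behavior. The sketch below is our own numerical check of that identity, not the paper's computation.

```python
import math

def stretched_exp_moment(q, alpha, n=200_000, t_max=400.0):
    """q-th moment of a waiting time T with survival P(T > t) = exp(-t**alpha),
    i.e. density f(t) = alpha * t**(alpha - 1) * exp(-t**alpha), computed by
    the midpoint rule (which avoids the weakly singular t = 0 endpoint)."""
    h = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += (t ** q) * alpha * t ** (alpha - 1.0) * math.exp(-(t ** alpha)) * h
    return total

# analytically E[T^q] = Gamma(1 + q/alpha); for q = 2, alpha = 0.5: Gamma(5) = 24
m2 = stretched_exp_moment(2.0, 0.5)
```

The heavy stretched-exponential tail is visible in the numbers: for α = 0.5 the second moment is 24, versus 2 for the plain exponential (α = 1).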

  4. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multiparameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
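
The SVD filtering step can be sketched as truncated-SVD denoising: keep only the leading singular triplets of the data matrix before inversion. This is a hedged, generic illustration with synthetic data, not the I2DUPEN implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def svd_filter(data, rank):
    """Truncated-SVD denoising: keep the leading `rank` singular triplets,
    discarding the noise-dominated remainder of the spectrum."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# synthetic separable "relaxation" data (rank one) plus measurement noise
t1 = np.exp(-np.linspace(0.0, 3.0, 60))
t2 = np.exp(-np.linspace(0.0, 2.0, 80))
clean = np.outer(t1, t2)
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
filtered = svd_filter(noisy, rank=1)

err_noisy = np.linalg.norm(noisy - clean)
err_filtered = np.linalg.norm(filtered - clean)
```

Because the clean signal is low-rank while the noise spreads over all singular directions, truncation removes most of the noise energy before the ill-posed inversion sees it.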

  5. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials anywhere in space, with linear-time complexity and a uniform, user-chosen level of accuracy, as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs, as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  6. Eulerian Dynamics with a Commutator Forcing

    DTIC Science & Technology

    2017-01-09

    SIAM Review 56(4) (2014) 577–621. [Pes2015] J. Peszek. Discrete Cucker-Smale flocking model with a weakly singular weight. SIAM J. Math. Anal., to...viscosities in bounded domains. J. Math. Pures Appl. (9), 87(2):227–235, 2007. [CV2010] L. Caffarelli, A. Vasseur, Drift diffusion equations with...Further time regularity for fully non-linear parabolic equations. Math. Res. Lett., 22(6):1749–1766, 2015. [CCTT2016] José A. Carrillo, Young-Pil

  7. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection utilizes noise, for example by enhancing, adding or estimating it, so as to improve the signal-to-noise ratio (SNR) and extract the fault signatures. In this context, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise-utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potentially superior performance in detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak capability in the high-SNR case. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs as well as a noise estimation strategy. Independent of any artificial setup, the noise estimation by minimax thresholding is improved for the low-SNR case, and is especially effective for signature enhancement. To approximate weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Here, the sliding window for projecting the phase space is optimally designed by correlation minimization, while the singular order for the local reconfiguration used to estimate the noise is determined by the inflection point of the increment trend of the normalized singular entropy. Furthermore, the noise estimation strategy, i.e. the approach for selecting between the two estimation techniques along with the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations that demonstrate its overall performance and especially confirm its capability of noise estimation. 
Finally, the method is applied to detect local wear faults in a dual-axis stabilized platform and a gear crack in an operating electric locomotive to verify its effectiveness and feasibility.
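    The phase-space SVD step described above can be illustrated generically: embed the 1D signal in a trajectory (sliding-window) matrix, zero the leading singular values that carry the deterministic component, and reconstruct the residual as the noise estimate. A minimal sketch, with a fixed window length and singular order rather than the paper's optimized choices (correlation minimization and the singular-entropy inflection point):

```python
import numpy as np

def svd_noise_estimate(x, window, order):
    """Estimate the noise in a 1D signal by zeroing the leading `order`
    singular values of its trajectory (phase-space) matrix and
    reconstructing by anti-diagonal (Hankel) averaging."""
    n = len(x)
    rows = n - window + 1
    H = np.lib.stride_tricks.sliding_window_view(x, window)  # rows x window
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_noise = s.copy()
    s_noise[:order] = 0.0                 # keep only trailing components
    Hn = (U * s_noise) @ Vt
    # H[i, j] corresponds to x[i + j]; average each anti-diagonal back
    # into a 1D series.
    est = np.zeros(n); cnt = np.zeros(n)
    for i in range(rows):
        est[i:i + window] += Hn[i]
        cnt[i:i + window] += 1
    return est / cnt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 10 * t)        # low-rank deterministic part
noise = 0.2 * rng.standard_normal(t.size)
noise_hat = svd_noise_estimate(clean + noise, window=64, order=2)
print(np.std(noise_hat), np.std(noise))
```

A sinusoid has an approximately rank-2 trajectory matrix, so zeroing the two leading singular values removes most of it, leaving a noise estimate.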

  8. Genuine quark state versus dynamically generated structure for the Roper resonance

    NASA Astrophysics Data System (ADS)

    Golli, B.; Osmanović, H.; Širca, S.; Švarc, A.

    2018-03-01

    In view of the recent results of lattice QCD simulations in the P11 partial wave, which have found no clear signal for the three-quark Roper state, we investigate a different mechanism for the formation of the Roper resonance in a coupled channel approach including the πN, πΔ, and σN channels. We fix the pion-baryon vertices in the underlying quark model, while the s-wave sigma-baryon interaction is introduced phenomenologically with the coupling strength, the mass, and the width of the σ meson as free parameters. The Laurent-Pietarinen expansion is used to extract the information about the S-matrix pole. The Lippmann-Schwinger equation for the K matrix with a separable kernel is solved to all orders. For sufficiently strong σNN coupling the kernel becomes singular and a quasibound state emerges at around 1.4 GeV, dominated by the σN component and reflecting itself in a pole of the S matrix. The alternative mechanism involving a (1s)^2 2s quark resonant state is added to the model, and the interplay of the dynamically generated state and the three-quark resonant state is studied. It turns out that for a mass of the three-quark resonant state above 1.6 GeV, the mass of the resonance is determined solely by the dynamically generated state; nonetheless, the inclusion of the three-quark resonant state is imperative to reproduce the experimental width and the modulus of the resonance pole.

  9. Static Analysis Using Abstract Interpretation

    NASA Technical Reports Server (NTRS)

    Arthaud, Maxime

    2017-01-01

    A short presentation about static analysis, and in particular abstract interpretation. It starts with a brief explanation of why static analysis is used at NASA. Then, it describes the IKOS (Inference Kernel for Open Static Analyzers) tool chain. Results on NASA projects are shown. Several well-known algorithms from the static analysis literature are then explained (such as pointer analyses, memory analyses, weak relational abstract domains, function summarization, etc.). It ends with interesting problems we encountered (such as C++ analysis with exception handling, or the detection of integer overflows).

  10. Fourier's law of heat conduction: quantum mechanical master equation analysis.

    PubMed

    Wu, Lian-Ao; Segal, Dvira

    2008-06-01

    We derive the macroscopic Fourier's law of heat conduction from the exact gain-loss time-convolutionless quantum master equation under three assumptions for the interaction kernel. To second order in the interaction, we show that the first two assumptions are natural results of the long-time limit. The third assumption can be satisfied by a family of interactions consisting of an exchange effect. The pure exchange model directly leads to energy diffusion in a weakly coupled spin-1/2 chain.

  11. Topological strings on singular elliptic Calabi-Yau 3-folds and minimal 6d SCFTs

    NASA Astrophysics Data System (ADS)

    Del Zotto, Michele; Gu, Jie; Huang, Min-xin; Kashani-Poor, Amir-Kian; Klemm, Albrecht; Lockhart, Guglielmo

    2018-03-01

    We apply the modular approach to computing the topological string partition function on non-compact elliptically fibered Calabi-Yau 3-folds with higher Kodaira singularities in the fiber. The approach consists in making an ansatz for the partition function at given base degree, exact in all fiber classes to arbitrary order and to all genus, in terms of a rational function of weak Jacobi forms. Our results yield, at given base degree, the elliptic genus of the corresponding non-critical 6d string, and thus the associated BPS invariants of the 6d theory. The required elliptic indices are determined from the chiral anomaly 4-form of the 2d worldsheet theories, or the 8-form of the corresponding 6d theories, and completely fix the holomorphic anomaly equation constraining the partition function. We introduce subrings of the known rings of Weyl invariant Jacobi forms which are adapted to the additional symmetries of the partition function, making its computation feasible to low base wrapping number. In contradistinction to the case of simpler singularities, generic vanishing conditions on BPS numbers are no longer sufficient to fix the modular ansatz at arbitrary base wrapping degree. We show that to low degree, imposing exact vanishing conditions does suffice, and conjecture this to be the case generally.

  12. Milne, a routine for the numerical solution of Milne's problem

    NASA Astrophysics Data System (ADS)

    Rawat, Ajay; Mohankumar, N.

    2010-11-01

    The routine Milne provides accurate numerical values for the classical Milne's problem of neutron transport for the planar one-speed and isotropic scattering case. The solution is based on the Case eigenfunction formalism. The relevant X functions are evaluated accurately by the Double Exponential (DE) quadrature. The calculated quantities are the extrapolation distance and the scalar and angular fluxes. Also, the H function needed in astrophysical calculations is evaluated as a byproduct. Program summary: Program title: Milne. Catalogue identifier: AEGS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 701. No. of bytes in distributed program, including test data, etc.: 6845. Distribution format: tar.gz. Programming language: Fortran 77. Computer: PC under Linux or Windows. Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows-XP. Classification: 4.11, 21.1, 21.2. Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy Principal Value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to the standard Gauss quadrature. Running time: The test included in the distribution takes a few seconds to run.
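    The Double Exponential quadrature mentioned above is the tanh-sinh rule: the substitution x = tanh((π/2) sinh t) pushes endpoint singularities out to infinity, where the quadrature weights decay doubly exponentially, so a plain trapezoidal sum in t converges rapidly. A small illustrative sketch (not the routine's Fortran implementation); the step h and truncation n are ad hoc choices here:

```python
import numpy as np

def tanh_sinh(f, h=0.05, n=60):
    """Tanh-sinh (double exponential) quadrature of f on (-1, 1):
    trapezoidal rule in t after x = tanh((pi/2) sinh t)."""
    t = np.arange(-n, n + 1) * h
    x = np.tanh(0.5 * np.pi * np.sinh(t))
    # dx/dt, which also damps endpoint singularities of f
    w = h * 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
    return np.sum(w * f(x))

# Integrand singular at both endpoints: integral of 1/sqrt(1-x^2) = pi
val = tanh_sinh(lambda x: 1.0 / np.sqrt(1.0 - x * x))
print(val)   # close to 3.14159...
```

Despite the integrand blowing up at x = ±1, the doubly exponential weight decay keeps every term finite and the result accurate, which is the robustness the abstract contrasts with standard Gauss quadrature.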

  13. Power weighted L p -inequalities for Laguerre-Riesz transforms

    NASA Astrophysics Data System (ADS)

    Harboure, Eleonor; Segovia, Carlos; Torrea, José L.; Viviani, Beatriz

    2008-10-01

    In this paper we give a complete description of the power weighted inequalities, of strong, weak and restricted weak type, for the pair of Riesz transforms associated with the Laguerre function system {L_k^α}, for any given α > -1. We achieve these results by a careful estimate of the kernels: near the diagonal we show that they are local Calderón-Zygmund operators, while in the complement they are majorized by Hardy type operators and the maximal heat-diffusion operator. We also show that in all the cases our results are sharp.

  14. Exact combinatorial approach to finite coagulating systems

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr

    2018-02-01

    This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
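    In the discrete-time setting described above, each binary aggregation event removes exactly one cluster, so a system started from N monomers has exactly N − t clusters after t steps; only which clusters merge is random. A toy Monte Carlo for the constant kernel (a uniformly random pair merges at each step), useful for checking averaged cluster-size counts:

```python
import random
from collections import Counter

def simulate(N, steps, rng):
    """Discrete-time binary aggregation with the constant kernel:
    at each step a uniformly random pair of clusters merges.
    Monodisperse initial conditions (N monomers)."""
    clusters = [1] * N
    for _ in range(steps):
        i, j = rng.sample(range(len(clusters)), 2)
        merged = clusters[i] + clusters[j]
        for k in sorted((i, j), reverse=True):  # remove chosen pair
            clusters.pop(k)
        clusters.append(merged)                  # append the merger
    return clusters

rng = random.Random(42)
N, steps, runs = 50, 20, 2000
avg = Counter()
for _ in range(runs):
    for s in simulate(N, steps, rng):
        avg[s] += 1

print(len(simulate(N, steps, rng)))                 # -> 30 (= N - steps)
print({s: avg[s] / runs for s in sorted(avg)[:3]})  # mean count of s-clusters
```

Total mass is conserved and the cluster count is deterministic; the fluctuating quantity is the size distribution, whose mean and standard deviation are what the exact combinatorial expressions of the paper describe.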

  15. Protein Kinase Classification with 2866 Hidden Markov Models and One Support Vector Machine

    NASA Technical Reports Server (NTRS)

    Weber, Ryan; New, Michael H.; Fonda, Mark (Technical Monitor)

    2002-01-01

    The main application considered in this paper is predicting true kinases from randomly permuted kinases that share the same length and amino acid distributions as the true kinases. Numerous methods already exist for this classification task, such as HMMs, motif-matchers, and sequence comparison algorithms. We build on some of these efforts by creating a vector from the output of thousands of structurally based HMMs, created offline with Pfam-A seed alignments using SAM-T99, which then must be combined into an overall classification for the protein. We then use a Support Vector Machine with a polynomial and a chi-squared kernel for classifying this large ensemble Pfam-Vector. In particular, the chi-squared kernel SVM performs better in some respects than the HMMs and the BLAST pairwise comparisons when predicting true from false kinases, but no single algorithm is best for all purposes or in all instances, so we consider the particular strengths and weaknesses of each.
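    One common form of the chi-squared kernel for nonnegative feature vectors (the paper's exact variant may differ) is k(x, y) = exp(-γ Σ_i (x_i − y_i)² / (x_i + y_i)). A sketch of building the Gram matrix that such an SVM would consume, on hypothetical score vectors:

```python
import numpy as np

def chi2_kernel(X, Y, gamma=1.0, eps=1e-12):
    """Exponential chi-squared kernel for nonnegative features:
    k(x, y) = exp(-gamma * sum_i (x_i - y_i)^2 / (x_i + y_i)).
    One common convention; the paper may use a different variant."""
    X = X[:, None, :]
    Y = Y[None, :, :]
    d = (X - Y) ** 2 / (X + Y + eps)   # eps guards against 0/0
    return np.exp(-gamma * d.sum(axis=-1))

rng = np.random.default_rng(0)
X = rng.random((5, 8))                 # 5 hypothetical HMM-score vectors
K = chi2_kernel(X, X, gamma=0.5)
print(K.shape, np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))
# -> (5, 5) True True
```

The resulting Gram matrix is symmetric with unit diagonal and can be passed to any SVM solver that accepts precomputed kernels.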

  16. The complex variable reproducing kernel particle method for bending problems of thin plates on elastic foundations

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.

    2018-07-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for solving bending problems of isotropic thin plates on elastic foundations is presented. In CVRKPM, a one-dimensional basis function is used to obtain the shape function of a two-dimensional problem. CVRKPM is used to form the approximation function of the deflection of a thin plate resting on an elastic foundation; the Galerkin weak form of thin plates on elastic foundations is employed to obtain the discretized system equations; the penalty method is used to apply the essential boundary conditions; and the Winkler and Pasternak foundation models are used to model the interface pressure between the plate and the foundation. The corresponding formulae of CVRKPM for thin plates on elastic foundations are then presented in detail. Several numerical examples are given to demonstrate the efficiency and accuracy of CVRKPM, and the corresponding advantages of the present method are shown.

  17. The Big Bang, Superstring Theory and the origin of life on the Earth.

    PubMed

    Trevors, J T

    2006-03-01

    This article examines the origin of life on Earth and its connection to Superstring Theory, which attempts to explain all phenomena in the universe (a Theory of Everything) and to unify the four known forces with relativity and quantum theory. The four forces, gravity, electromagnetism, and the strong and weak nuclear forces, were all present and necessary for the origin of life on the Earth. It was the separation of the unified force into four singular forces that allowed the origin of life.

  18. On the Singular Incompressible Limit of Inviscid Compressible Fluids

    NASA Astrophysics Data System (ADS)

    Secchi, P.

    We consider the Euler equations of barotropic inviscid compressible fluids in a bounded domain. It is well known that, as the Mach number goes to zero, the compressible flows approximate the solution of the equations of motion of inviscid, incompressible fluids. In this paper we discuss, for the boundary case, the different kinds of convergence under various assumptions on the data, in particular the weak convergence in the case of uniformly bounded initial data and the strong convergence in the norm of the data space.

  19. End Point of the Ultraspinning Instability and Violation of Cosmic Censorship.

    PubMed

    Figueras, Pau; Kunesch, Markus; Lehner, Luis; Tunyasuvunakool, Saran

    2017-04-14

    We determine the end point of the axisymmetric ultraspinning instability of asymptotically flat Myers-Perry black holes in D=6 spacetime dimensions. In the nonlinear regime, this instability gives rise to a sequence of concentric rings connected by segments of black membrane on the rotation plane. The latter become thinner over time, resulting in the formation of a naked singularity in finite asymptotic time and hence a violation of the weak cosmic censorship conjecture in asymptotically flat higher-dimensional spaces.

  20. End Point of the Ultraspinning Instability and Violation of Cosmic Censorship

    NASA Astrophysics Data System (ADS)

    Figueras, Pau; Kunesch, Markus; Lehner, Luis; Tunyasuvunakool, Saran

    2017-04-01

    We determine the end point of the axisymmetric ultraspinning instability of asymptotically flat Myers-Perry black holes in D =6 spacetime dimensions. In the nonlinear regime, this instability gives rise to a sequence of concentric rings connected by segments of black membrane on the rotation plane. The latter become thinner over time, resulting in the formation of a naked singularity in finite asymptotic time and hence a violation of the weak cosmic censorship conjecture in asymptotically flat higher-dimensional spaces.

  1. Weak-Lensing Detection of Cl 1604+4304 at z=0.90

    NASA Astrophysics Data System (ADS)

    Margoniner, V. E.; Lubin, L. M.; Wittman, D. M.; Squires, G. K.

    2005-01-01

    We present a weak-lensing analysis of the high-redshift cluster Cl 1604+4304. At z=0.90, this is the highest redshift cluster yet detected with weak lensing. It is also one of a sample of high-redshift, optically selected clusters whose X-ray temperatures are lower than expected based on their velocity dispersions. Both the gas temperature and the galaxy velocity dispersion are proxies for its mass, which can be determined more directly by a lensing analysis. Modeling the cluster as a singular isothermal sphere, we find that the mass contained within projected radius R is (3.69 +/- 1.47)[R/(500 kpc)] × 10^14 M_solar. This corresponds to an inferred velocity dispersion of 1004 +/- 199 km s^-1, which agrees well with the velocity dispersion of 989 (+98/-76) km s^-1 recently measured by Gal & Lubin. These numbers are higher than the 575 (+110/-85) km s^-1 inferred from Cl 1604+4304's X-ray temperature; however, all three velocity dispersion estimates are consistent within ~1.9 sigma.

  2. Phase structure of NJL model with weak renormalization group

    NASA Astrophysics Data System (ADS)

    Aoki, Ken-Ichi; Kumamoto, Shin-Ichiro; Yamada, Masatoshi

    2018-06-01

    We analyze the chiral phase structure of the Nambu-Jona-Lasinio model at finite temperature and density by using the functional renormalization group (FRG). The renormalization group (RG) equation for the fermionic effective potential V(σ; t) is given as a partial differential equation, where σ := ψ̄ψ and t is a dimensionless RG scale. When the dynamical chiral symmetry breaking (DχSB) occurs at a certain scale tc, V(σ; t) has singularities originating from the phase transitions, and one cannot follow RG flows after tc. In this study, we introduce the weak solution method to the RG equation in order to follow the RG flows after the DχSB and to evaluate the dynamical mass and the chiral condensate at low energy scales. It is shown that the weak solution of the RG equation correctly captures vacuum structures and critical phenomena within the pure fermionic system. We show the chiral phase diagram as a function of temperature, chemical potential and the four-Fermi coupling constant.

  3. Distributed delays in a hybrid model of tumor-immune system interplay.

    PubMed

    Caravagna, Giulio; Graudenzi, Alex; d'Onofrio, Alberto

    2013-02-01

    A tumor is kinetically characterized by the presence of multiple spatio-temporal scales in which its cells interplay with, for instance, endothelial cells or immune system effectors, exchanging various chemical signals. By its nature, tumor growth is an ideal object of hybrid modeling, where discrete stochastic processes model low-number entities and mean-field equations model abundant chemical signals. Thus, we follow this approach to model tumor cells, effector cells and Interleukin-2, in order to capture the immune surveillance effect. We present a hybrid model with a generic delay kernel accounting for the fact that, due to many complex phenomena such as chemical transportation and cellular differentiation, the tumor-induced recruitment of effectors exhibits a lag period. This model is a Stochastic Hybrid Automaton and its semantics is a Piecewise Deterministic Markov process, where a two-dimensional stochastic process is interlinked with a multi-dimensional mean-field system. We instantiate the model with two well-known weak and strong delay kernels and perform simulations by using an algorithm to generate trajectories of this process. Via simulations and parametric sensitivity analysis techniques we (i) relate tumor mass growth with the two kernels, (ii) measure the strength of the immune surveillance in terms of the probability distribution of the eradication times, and (iii) prove, in the oscillatory regime, the existence of a stochastic bifurcation resulting in delay-induced tumor eradication.
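    The "weak" and "strong" delay kernels referred to above are conventionally the Gamma (Erlang) kernels G¹_a(t) = a e^(−at) and G²_a(t) = a² t e^(−at): the weak kernel peaks at zero delay, the strong kernel at t = 1/a. A quick numerical check of their normalization and mean delays (the rate a is an arbitrary choice here):

```python
import numpy as np

a = 2.0
t = np.linspace(0.0, 50.0, 200001)
dt = t[1] - t[0]

def integral(y):
    # composite trapezoidal rule on the fixed grid t
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

weak = a * np.exp(-a * t)            # weak kernel: mode at t = 0
strong = a**2 * t * np.exp(-a * t)   # strong kernel: mode at t = 1/a

print(integral(weak), integral(strong))  # both near 1.0 (densities)
print(integral(t * weak))    # mean delay 1/a = 0.5
print(integral(t * strong))  # mean delay 2/a = 1.0
```

The distributed-delay term in such models is the convolution of the state history with one of these kernels, so the mean delay (1/a versus 2/a) is the single knob these two choices expose.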

  4. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.

    PubMed

    Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi

    2015-04-22

    Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a Naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.

  5. Pairing in a dry Fermi sea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Thomas A.; Staar, Peter; Mishra, V.

    In the traditional Bardeen–Cooper–Schrieffer theory of superconductivity, the amplitude for the propagation of a pair of electrons with momentum k and -k has a log singularity as the temperature decreases. This so-called Cooper instability arises from the presence of an electron Fermi sea. It means that an attractive interaction, no matter how weak, will eventually lead to a pairing instability. However, in the pseudogap regime of the cuprate superconductors, where parts of the Fermi surface are destroyed, this log singularity is suppressed, raising the question of how pairing occurs in the absence of a Fermi sea. In this paper, we report Hubbard model numerical results and the analysis of angle-resolved photoemission experiments on a cuprate superconductor. In contrast to the traditional theory, we find that in the pseudogap regime the pairing instability arises from an increase in the strength of the spin–fluctuation pairing interaction as the temperature decreases rather than from the Cooper log instability.

  6. Pairing in a dry Fermi sea

    DOE PAGES

    Maier, Thomas A.; Staar, Peter; Mishra, V.; ...

    2016-06-17

    In the traditional Bardeen–Cooper–Schrieffer theory of superconductivity, the amplitude for the propagation of a pair of electrons with momentum k and -k has a log singularity as the temperature decreases. This so-called Cooper instability arises from the presence of an electron Fermi sea. It means that an attractive interaction, no matter how weak, will eventually lead to a pairing instability. However, in the pseudogap regime of the cuprate superconductors, where parts of the Fermi surface are destroyed, this log singularity is suppressed, raising the question of how pairing occurs in the absence of a Fermi sea. In this paper, we report Hubbard model numerical results and the analysis of angle-resolved photoemission experiments on a cuprate superconductor. In contrast to the traditional theory, we find that in the pseudogap regime the pairing instability arises from an increase in the strength of the spin–fluctuation pairing interaction as the temperature decreases rather than from the Cooper log instability.

  7. Static black hole solutions with a self-interacting conformally coupled scalar field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dotti, Gustavo; Gleiser, Reinaldo J.; Martinez, Cristian

    2008-05-15

    We study static, spherically symmetric black hole solutions of the Einstein equations with a positive cosmological constant and a conformally coupled self-interacting scalar field. Exact solutions for this model found by Martinez, Troncoso, and Zanelli were subsequently shown to be unstable under linear gravitational perturbations, with modes that diverge arbitrarily fast. We find that the moduli space of static, spherically symmetric solutions that have a regular horizon, and satisfy the weak and dominant energy conditions outside the horizon, is a singular subset of a two-dimensional space parametrized by the horizon radius and the value of the scalar field at the horizon. The singularity of this space of solutions provides an explanation for the instability of the Martinez, Troncoso, and Zanelli spacetimes and leads to the conclusion that, if we include stability as a criterion, there are no physically acceptable black hole solutions for this system that contain a cosmological horizon in the exterior of their event horizon.

  8. A Galerkin method for linear PDE systems in circular geometries with structural acoustic applications

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.

    1994-01-01

    A Galerkin method for systems of PDEs in circular geometries is presented, with motivating problems drawn from structural, acoustic, and structural acoustic applications. Depending upon the application under consideration, piecewise splines or Legendre polynomials are used when approximating the system dynamics, with modifications included to incorporate the analytic solution decay near the coordinate singularity. This provides an efficient method which retains its accuracy throughout the circular domain without degradation at the singularity. Because the problems under consideration are linear or weakly nonlinear with constant or piecewise constant coefficients, transform methods for the problems are not investigated. While the specific method is developed for the two-dimensional wave equation on a circular domain and the equation of transverse motion for a thin circular plate, examples demonstrating the extension of the techniques to a fully coupled structural acoustic system are used to illustrate the flexibility of the method when approximating the dynamics of more complex systems.

  9. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

    Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral method allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
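    The practical appeal of Prony-series methods is that a relaxation modulus E(t) = E_∞ + Σ_i E_i e^(−t/ρ_i) turns the hereditary (Volterra) integral into internal variables with a recursive update, so each time step costs O(1) in history. A hedged 1D sketch with hypothetical material constants (not from the paper); the update used is exact when the strain is linear within each step:

```python
import numpy as np

# Hypothetical Prony-series parameters (illustrative only)
E_inf, E_i, rho_i = 1.0, np.array([0.5, 0.3]), np.array([0.1, 1.0])

def stress_history(strain, dt):
    """March sigma = E_inf*eps + sum_i q_i, where each internal
    variable q_i carries one exponential term of the hereditary
    integral via a recursion exact for piecewise-linear strain."""
    e = np.exp(-dt / rho_i)
    g = E_i * rho_i / dt * (1.0 - e)
    q = np.zeros_like(E_i)
    out = [E_inf * strain[0]]
    for n in range(1, len(strain)):
        de = strain[n] - strain[n - 1]
        q = e * q + g * de           # O(1) per step, no stored history
        out.append(E_inf * strain[n] + q.sum())
    return np.array(out)

# Check against the analytic response to a constant strain rate R:
# sigma(t) = R*(E_inf*t + sum_i E_i*rho_i*(1 - exp(-t/rho_i)))
R, dt = 0.01, 1e-3
t = np.arange(0.0, 2.0, dt)
sigma = stress_history(R * t, dt)
exact = R * (E_inf * t
             + np.sum(E_i * rho_i * (1 - np.exp(-t[:, None] / rho_i)), axis=1))
print(np.max(np.abs(sigma - exact)))   # tiny: update is exact for this input
```

A direct Volterra quadrature would need the full strain history at every step; the recursion is why the Prony-series route stays cheap, at the cost of being tied to exponential (non-singular) kernels.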

  10. Green operators for low regularity spacetimes

    NASA Astrophysics Data System (ADS)

    Sanchez Sanchez, Yafet; Vickers, James

    2018-02-01

    In this paper we define and construct advanced and retarded Green operators for the wave operator on spacetimes with low regularity. In order to do so we require that the spacetime satisfies the condition of generalised hyperbolicity, which is equivalent to well-posedness of the classical inhomogeneous problem with zero initial data, where weak solutions are properly supported. Moreover, we provide an explicit formula for the kernel of the Green operators in terms of an arbitrary eigenbasis of H^1 and a suitable Green matrix that solves a system of second-order ODEs.

  11. Mathematical model describing the thyroids-pituitary axis with distributed time delays in hormone transportation

    NASA Astrophysics Data System (ADS)

    Neamţu, Mihaela; Stoian, Dana; Navolan, Dan Bogdan

    2014-12-01

    In the present paper we provide a mathematical model that describes the hypothalamus-pituitary-thyroid axis in autoimmune (Hashimoto's) thyroiditis. Since there is a spatial separation between the thyroid and the pituitary gland in the body, time is needed for the transportation of thyrotropin and thyroxine between the glands. Thus, the distributed time delays are modeled with both weak and Dirac kernels. The delayed model is analyzed with regard to stability and bifurcation behavior. The last part contains some numerical simulations that illustrate the effectiveness of our results and conclusions.

  12. Characterization of cancer and normal tissue fluorescence through wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.

    2008-02-01

    The statistical and characteristic features of the polarized fluorescence spectra from cancerous, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled us to isolate high- and low-frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues, not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for normal and benign tissues' spectral variations. The study of the difference of intensities in the parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. The continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuations is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with the random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.

  13. Geometric description of modular and weak values in discrete quantum systems using the Majorana representation

    NASA Astrophysics Data System (ADS)

    Cormann, Mirko; Caudano, Yves

    2017-07-01

    We express modular and weak values of observables of three- and higher-level quantum systems in their polar form. The Majorana representation of N-level systems in terms of symmetric states of N − 1 qubits provides us with a description on the Bloch sphere. With this geometric approach, we find that modular and weak values of observables of N-level quantum systems can be factored into N − 1 contributions. Their modulus is determined by the product of N − 1 ratios involving projection probabilities between qubits, while their argument is deduced from a sum of N − 1 solid angles on the Bloch sphere. These theoretical results allow us to study the geometric origin of the quantum phase discontinuity around singularities of weak values in three-level systems. We also analyze the three-box paradox (Aharonov and Vaidman 1991 J. Phys. A: Math. Gen. 24 2315-28) from the point of view of a bipartite quantum system. In the Majorana representation of this paradox, an observer comes to opposite conclusions about the entanglement state of the particles that were successfully pre- and postselected.
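
    The polar (modulus/argument) decomposition applies to any weak value A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩. A small numpy sketch for a single qubit with σz as the observable; the pre- and postselected states are arbitrary choices made for this illustration:

```python
import numpy as np

# Weak value A_w = <phi|A|psi> / <phi|psi> for a qubit observable (sigma_z).
sigma_z = np.diag([1.0, -1.0])
theta, chi = 0.3, 1.87                            # nearly orthogonal selection
psi = np.array([np.cos(theta), np.sin(theta)])    # preselected |psi>
phi = np.array([np.cos(chi), np.sin(chi)])        # postselected |phi>

A_w = np.vdot(phi, sigma_z @ psi) / np.vdot(phi, psi)

# Polar form: the modulus and argument are the quantities the paper factorizes.
modulus, argument = abs(A_w), np.angle(A_w)
print(modulus, argument)   # modulus far outside the eigenvalue range [-1, 1]
```

    With nearly orthogonal pre/postselection the denominator ⟨φ|ψ⟩ is close to a zero (a singularity of the weak value), so the modulus becomes "anomalous", i.e. much larger than any eigenvalue of the observable.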

  14. A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline

    NASA Technical Reports Server (NTRS)

    Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; hide

    2010-01-01

    The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three and a half year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCDs), each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, requiring that downstream data products access the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on-the-fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data-dependent kernels. The combination of the POU framework and SVD compression provides downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels, traceable to pixel-level measurement uncertainties, without having to store, retrieve, and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and their implementation in the Kepler SOC pipeline.
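
    The propagation rule itself is standard linear algebra: a linear calibration step y = T x maps the covariance as C → T C Tᵀ, and a kernel with a rapidly decaying singular spectrum compresses well. A hedged numpy sketch with a toy row-normalized Gaussian smoothing kernel (an assumption of this sketch, not the actual Kepler calibration chain):

```python
import numpy as np

rng = np.random.default_rng(1)

# Raw pixels assumed independent, so the raw covariance is diagonal.
n = 50
C_raw = np.diag(rng.uniform(0.5, 2.0, n))

# Toy calibration kernel: Gaussian smoothing. POU propagates uncertainty
# through the step y = T x as C_cal = T C_raw T^T.
i = np.arange(n)
T = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2.0 * 5.0 ** 2))
T /= T.sum(axis=1, keepdims=True)
C_cal = T @ C_raw @ T.T

# SVD compression: keep only the k dominant singular components of the kernel.
U, s, Vt = np.linalg.svd(T)
k = 10
T_k = (U[:, :k] * s[:k]) @ Vt[:k]
C_approx = T_k @ C_raw @ T_k.T

err = np.linalg.norm(C_cal - C_approx) / np.linalg.norm(C_cal)
print(err)   # tiny: the low-rank kernel reproduces the full covariance
```

    Storing only the k singular components of each kernel (instead of the full n × n covariance) is what lets the covariance of any pixel subset be rebuilt on the fly.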

  15. Multilinear Graph Embedding: Representation and Regularization for Images.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation remains a major challenge, especially when multiple latent factors are involved in the data generation process. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To address this, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE, to leverage manifold learning techniques in multilinear models. Our method theoretically links linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. In our experiments on face and gait recognition, the superior performance demonstrates that MGE represents multifactor images better than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, Nadine; Prestel, S.; Ritzmann, M.

    We present the first public implementation of antenna-based QCD initial- and final-state showers. The shower kernels are 2 → 3 antenna functions, which capture not only the collinear dynamics but also the leading soft (coherent) singularities of QCD matrix elements. We define the evolution measure to be inversely proportional to the leading poles; hence gluon emissions are evolved in a p⊥ measure inversely proportional to the eikonal, while processes that only contain a single pole (e.g., g → qq̄) are evolved in virtuality. Non-ordered emissions are allowed, suppressed by an additional power of 1/Q². Recoils and kinematics are governed by exact on-shell 2 → 3 phase-space factorisations. This first implementation is limited to massless QCD partons and colourless resonances. Tree-level matrix-element corrections are included for QCD up to O(αs⁴) (4 jets), and for Drell-Yan and Higgs production up to O(αs³) (V/H + 3 jets). Finally, the resulting algorithm has been made publicly available in Vincia 2.0.

  17. A comparative mathematical analysis of RL and RC electrical circuits via Atangana-Baleanu and Caputo-Fabrizio fractional derivatives

    NASA Astrophysics Data System (ADS)

    Abro, Kashif Ali; Memon, Anwar Ahmed; Uqaili, Muhammad Aslam

    2018-03-01

    This article presents a comparative study of RL and RC electrical circuits analyzed with the newly introduced Atangana-Baleanu and Caputo-Fabrizio fractional derivatives. The governing ordinary differential equations of the RL and RC circuits have been fractionalized in terms of fractional operators with orders in the ranges 0 ≤ ξ ≤ 1 and 0 ≤ η ≤ 1. The analytic solutions of the fractional differential equations for the RL and RC circuits have been obtained using the Laplace transform and its inversion. General solutions have been investigated for periodic and exponential sources by applying the Atangana-Baleanu and Caputo-Fabrizio fractional operators separately. The resulting solutions are expressed in terms of simple elementary functions with convolution products. On the basis of the new fractional derivatives with and without singular kernel, the voltage and current show interesting behavior, with several similarities and differences between the periodic and exponential sources.
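
    The Caputo-Fabrizio operator is the one with the non-singular (exponential) kernel, which makes it straightforward to evaluate numerically. A sketch with normalization M(α) = 1 and the trapezoidal rule, checked against the closed form for f(t) = t (a generic illustration, not the paper's circuit solutions):

```python
import numpy as np

def cf_derivative(f, t_grid, alpha):
    """Caputo-Fabrizio derivative of order alpha at t = t_grid[-1]:
    D^a f(t) = 1/(1-a) * int_0^t f'(tau) exp(-a (t - tau)/(1-a)) dtau,
    with normalization M(a) = 1, evaluated by the trapezoidal rule."""
    fp = np.gradient(f(t_grid), t_grid)                       # f'(tau)
    w = np.exp(-alpha * (t_grid[-1] - t_grid) / (1.0 - alpha))
    y = fp * w
    return np.sum((y[1:] + y[:-1]) * np.diff(t_grid)) / 2.0 / (1.0 - alpha)

# For f(t) = t the closed form is D^a t = (1 - exp(-a t / (1-a))) / a.
alpha, t_end = 0.6, 2.0
tau = np.linspace(0.0, t_end, 4001)
numeric = cf_derivative(lambda s: s, tau, alpha)
exact = (1.0 - np.exp(-alpha * t_end / (1.0 - alpha))) / alpha
print(numeric, exact)
```

    Because the kernel is bounded at τ = t (unlike the power-law kernel of the classical Caputo derivative), no special quadrature for a singular endpoint is needed.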

  18. A simplified implementation of van der Waals density functionals for first-principles molecular dynamics applications

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Gygi, François

    2012-06-01

    We present a simplified implementation of the non-local van der Waals correlation functional introduced by Dion et al. [Phys. Rev. Lett. 92, 246401 (2004)] and reformulated by Román-Pérez et al. [Phys. Rev. Lett. 103, 096102 (2009)]. The proposed numerical approach removes the logarithmic singularity of the kernel function. Complete expressions of the self-consistent correlation potential and of the stress tensor are given. Combined with various choices of exchange functionals, five versions of van der Waals density functionals are implemented. Applications to the computation of the interaction energy of the benzene-water complex and to the computation of the equilibrium cell parameters of the benzene crystal are presented. As an example of crystal structure calculation involving a mixture of hydrogen bonding and dispersion interactions, we compute the equilibrium structure of two polymorphs of aspirin (2-acetoxybenzoic acid, C9H8O4) in the P21/c monoclinic structure.

  19. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, one can optimize the objective function of DLPP in a reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a large computational burden; 2) the lack of explicit mapping functions, which adds further computational cost when projecting a new sample into the low-dimensional subspace; and 3) the inability to obtain the optimal discriminant vectors that would best optimize the objective of DLPP. To overcome these weaknesses, in this paper a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in a high-dimensional second-order Hammerstein polynomial space without matrix inversion, which extracts the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  20. Weak crystallization theory of metallic alloys

    DOE PAGES

    Martin, Ivar; Gopalakrishnan, Sarang; Demler, Eugene A.

    2016-06-20

    Crystallization is one of the most familiar, but hardest to analyze, phase transitions. The principal reason is that crystallization typically occurs via a strongly first-order phase transition, and thus a rigorous treatment would require comparing the energies of an infinite number of possible crystalline states with the energy of the liquid. A great simplification occurs when the crystallization transition happens to be weakly first order. In this case, weak crystallization theory, based on an unbiased Ginzburg-Landau expansion, can be applied. Even beyond its strict range of validity, it has been a useful qualitative tool for understanding crystallization. In its standard form, however, weak crystallization theory cannot explain the existence of a majority of observed crystalline and quasicrystalline states. Here we extend the weak crystallization theory to the case of metallic alloys. In this paper, we identify a singular effect of itinerant electrons on the form of the weak crystallization free energy. It is geometric in nature, generating a strong dependence of the free energy on the angles between the ordering wave vectors of the ionic density. This leads to the stabilization of fcc, rhombohedral, and icosahedral quasicrystalline (iQC) phases, which are absent in the generic theory with only local interactions. Finally, as an application, we find the condition for stability of iQC that is consistent with the Hume-Rothery rules known empirically for the majority of stable iQC; namely, the length of the primary Bragg-peak wave vector is approximately equal to the diameter of the Fermi sphere.

  1. A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    In this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments is conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. Finally, the numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.

  2. A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2017-08-17

    In this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments is conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. Finally, the numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.

  3. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    PubMed Central

    Zhao, Xin; Cheung, Leo Wang-Kit

    2007-01-01

    Background Designing appropriate machine learning methods for identifying genes that have a significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which however are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods usually also bring in false positive significant features more easily. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large. This leads to problems of numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have a couple of critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potentials to achieve this goal. Results A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences. 
Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound, not only in the case with a linear Bayesian classifier but also in the case with a very non-linear Bayesian classifier. This sheds light on its broader usability for microarray data analysis problems, especially those for which linear methods work awkwardly. The KIGP was also applied to four published microarray datasets, and the results showed that the KIGP performed better than, or at least as well as, any of the referenced state-of-the-art methods in all of these cases. Conclusion Mathematically built on the kernel-induced feature space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach to explore both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates the model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently. PMID:17328811

  4. Resolvability of regional density structure

    NASA Astrophysics Data System (ADS)

    Plonka, A.; Fichtner, A.

    2016-12-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown, since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess whether 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms, with the conclusion that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes; while this can produce significant biases in velocity and Q estimates, seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. Since the density imprint we observe is not exclusively linked to travel times and amplitudes of specific phases, we consider waveform differences between complete seismograms. We test the method using a known smooth model of the crust and seismograms with clear Love and Rayleigh waves, showing that - as expected - the first principal kernel maximizes sensitivity to SH and SV velocity structure, respectively, and that the leakage between S velocity, P velocity, and density parameter spaces is minimal in the chosen setup. Next, we apply the method to data from 81 events around the Iberian Peninsula, recorded in total by 492 stations. The objective is to find a principal kernel that maximizes the sensitivity to density, potentially allowing for independent density resolution and, as the final goal, for direct density inversion.
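
    The linear-independence question can be posed as an SVD of the kernel matrix: nearly parallel kernels produce a small trailing singular value, signalling a parameter trade-off. A toy numpy sketch with synthetic 1-D "kernels" (the functional forms are assumptions of this illustration, not actual 3-D sensitivity kernels):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1-D "sensitivity kernels" for three parameters; the density kernel
# is built nearly parallel to the S-velocity kernel to mimic a trade-off.
x = np.linspace(0.0, 1.0, 200)
K_vs = np.sin(2.0 * np.pi * x)
K_vp = np.cos(3.0 * np.pi * x)
K_rho = 0.95 * K_vs + 0.05 * rng.standard_normal(200)

G = np.vstack([K_vp, K_vs, K_rho])            # one kernel per row
G = G - G.mean(axis=1, keepdims=True)         # center each kernel
s = np.linalg.svd(G, compute_uv=False)

print(s / s[0])   # third component is weak: little independent density info
```

    A trailing singular value near zero means the corresponding parameter combination is effectively unconstrained, which is exactly the situation a density-maximizing principal kernel is meant to diagnose.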

  5. Irradiation performance of HTGR fuel rods in HFIR experiments HRB-7 and -8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, K.H.; Homan, F.J.; Long, E.L. Jr.

    1977-05-01

    The HRB-7 and -8 experiments were designed as a comprehensive test of mixed thorium-uranium oxide fissile particles with Th:U ratios from 0 to 8 for HTGR recycle application. In addition, fissile particles derived from Weak-Acid Resin (WAR) were tested as a potential backup type of fissile particle for HTGR recycle. These experiments were conducted at two temperatures (1250 and 1500°C) to determine the influence of operating temperature on the performance parameters studied. The minor objectives were comparison of advanced coating designs where ZrC replaced SiC in the Triso design, testing of fuel coated in laboratory-scale equipment against fuel coated in production-scale coaters, comparison of the performance of ²³³U-bearing particles with that of ²³⁵U-bearing particles, comparison of the performance of Biso coatings with Triso coatings for particles containing the same type of kernel, and testing of multijunction tungsten-rhenium thermocouples. All objectives were accomplished. As a result of these experiments, the mixed thorium-uranium oxide fissile kernel was replaced by a WAR-derived particle in the reference recycle design. A tentative decision to make this change had been reached before the HRB-7 and -8 capsules were examined, and the results of the examination confirmed the accuracy of the previous decision. Even maximum dilution (Th/U approximately equal to 8) of the mixed thorium-uranium oxide kernel was insufficient to prevent amoeba migration of the kernels at rates that are unacceptable in a large HTGR. Other results showed the performance of ²³³U-bearing particles to be identical to that of ²³⁵U-bearing particles, the performance of fuel coated in production-scale equipment to be at least as good as that of fuel coated in laboratory-scale coaters, the performance of ZrC coatings to be very promising, and Biso coatings to be inferior to Triso coatings with respect to fission product retention.

  6. Fractal density modeling of crustal heterogeneity from the KTB deep hole

    NASA Astrophysics Data System (ADS)

    Chen, Guoxiong; Cheng, Qiuming

    2017-03-01

    Fractal and multifractal concepts have significantly advanced our understanding of crustal heterogeneity. Much attention has focused on the 1/f scaling nature of the physicochemical heterogeneity of the Earth's crust from the fractal-increment perspective. In this study, a fractal density model, from the fractal-clustering point of view, is used to characterize the scaling behaviors of heterogeneous sources recorded at the German Continental Deep Drilling Program (KTB) main hole; a special contribution is the local and global multifractal analysis revisited using the Haar wavelet transform (HWT). Fractal density modeling of mass accumulation generalizes the unit of rock density from integer exponents (e.g., g/cm3) to real exponents (e.g., g/cmα), so that crustal heterogeneities with respect to source accumulation are quantified by the singularity strength of fractal density in α-dimensional space. From that perspective, we found that the bulk densities of metamorphic rocks exhibit fractal properties but have a weak multifractality, decreasing with depth. The multiscaling nature of the chemical logs has also been demonstrated, and the distinct fractal laws observed for mineral contents are related to their different geochemical behaviors within a complex lithological context. Accordingly, the scaling distributions of mineral contents have been recognized as a main contributor to the multifractal nature of heterogeneous density for low-porosity crystalline rocks. This finally allows us to use the de Wijs cascade process to explain the mechanism of fractal density. In practice, the proposed local singularity analysis based on the HWT is suggested as an attractive high-pass filter to amplify weak signatures in well logs as well as to delineate microlithological changes.

  7. Spontaneous evolution of microstructure in materials

    NASA Astrophysics Data System (ADS)

    Kirkaldy, J. S.

    1993-08-01

    Microstructures which evolve spontaneously from random solutions in near isolation often exhibit patterns of remarkable symmetry which can only in part be explained by boundary and crystallographic effects. With reference to the detailed experimental record, we seek the source of causality in this natural tendency to constructive autonomy, usually designated as a principle of pattern or wavenumber selection in a free boundary problem. The phase field approach which incorporates detailed boundary structure and global rate equations has enjoyed some currency in removing internal degrees of freedom, and this will be examined critically in reference to the migration of phase-antiphase boundaries produced in an order-disorder transformation. Analogous problems for singular interfaces including solute trapping are explored. The microscopic solvability hypothesis has received much attention, particularly in relation to dendrite morphology and the Saffman-Taylor fingering problem in hydrodynamics. A weak form of this will be illustrated in relation to local equilibrium binary solidification cells which renders the free boundary problem unique. However, the main thrust of this article concerns dynamic configurations at anisotropic singular interfaces and the related patterns of eutectoid(ic)s, nonequilibrium cells, cellular dendrites, and Liesegang figures where there is a recognizable macroscopic phase space of pattern fluctuations and/or solitons. These possess a weakly defective stability point and thereby submit to a statistical principle of maximum path probability and to a variety of corollary dissipation principles in the determination of a unique average patterning behavior. A theoretical development of the principle based on Hamilton's principle for frictional systems is presented in an Appendix. Elements of the principles of scaling, universality, and deterministic chaos are illustrated.

  8. A high-efficiency spin polarizer based on edge and surface disordered silicene nanoribbons

    NASA Astrophysics Data System (ADS)

    Xu, Ning; Zhang, Haiyang; Wu, Xiuqiang; Chen, Qiao; Ding, Jianwen

    2018-07-01

    Using the tight-binding formalism, we explore the effect of weak disorder on the conductance of zigzag-edge silicene nanoribbons (SiNRs) in the limit of phase-coherent transport. We find that the way the conductance varies with disorder depends strongly on the type of disorder. Conductance dips are observed at the Van Hove singularities, owing to quasilocalized states existing in surface-disordered SiNRs. A conductance gap is observed around the Fermi energy for both edge- and surface-disordered SiNRs, because the edge states are localized. The average conductance of the disordered SiNRs decreases exponentially with increasing disorder, and finally tends to disappear. Near-perfect spin polarization can be realized in SiNRs with weak edge or surface disorder, and can also be attained by both a local electric field and an exchange field.

  9. The fermionic projector in a time-dependent external potential: Mass oscillation property and Hadamard states

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Murro, Simone; Röken, Christian

    2016-07-01

    We give a non-perturbative construction of the fermionic projector in Minkowski space coupled to a time-dependent external potential which is smooth and decays faster than quadratically for large times. The weak and strong mass oscillation properties are proven. We show that the integral kernel of the fermionic projector is of the Hadamard form, provided that the time integral of the spatial sup-norm of the potential satisfies a suitable bound. This gives rise to an algebraic quantum field theory of Dirac fields in an external potential with a distinguished pure quasi-free Hadamard state.

  10. Extended nonlinear feedback model for describing episodes of high inflation

    NASA Astrophysics Data System (ADS)

    Szybisz, Martín A.; Szybisz, Leszek

    2017-01-01

    An extension of the nonlinear feedback (NLF) formalism to describe regimes of hyper- and high inflation in an economy is proposed in the present work. In the NLF model the consumer price index (CPI) exhibits a finite-time singularity of the type 1/(tc − t)^[(1 − β)/β], with β > 0, predicting a blow-up of the economy at a critical time tc. However, this model fails to determine tc in the case of weak hyperinflation regimes like, e.g., the one that occurred in Israel. To overcome this difficulty, the NLF model is extended by introducing a parameter γ, which multiplies all terms with the past growth rate index (GRI). In this novel approach the solution for the CPI is also analytic, being proportional to the Gaussian hypergeometric function 2F1(1/β, 1/β, 1 + 1/β; z), where z is a function of β, γ, and tc. For z → 1 this hypergeometric function diverges, leading to a finite-time singularity from which a value of tc can be determined. This singularity is also present in the GRI. It is shown that the interplay between the parameters β and γ may produce phenomena of multiple equilibria. An analysis of the severe hyperinflation that occurred in Hungary proves that the novel model is robust. When this model is used to examine data from Israel, a reasonable tc is obtained. High-inflation regimes in Mexico and Iceland, which exhibit weaker inflation than that of Israel, are also successfully described.
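
    The divergence of 2F1(1/β, 1/β; 1 + 1/β; z) as z → 1 (for β < 1) is easy to observe numerically. A self-contained power-series evaluation, with β = 0.5 chosen purely for illustration (not a fitted value from the paper):

```python
import numpy as np

def hyp2f1(a, b, c, z, tol=1e-12, max_terms=200000):
    """Gauss hypergeometric 2F1 via its power series (valid for |z| < 1)."""
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

# CPI(t) in the extended NLF model is proportional to 2F1(1/b, 1/b, 1+1/b; z);
# for b < 1 the function blows up as z -> 1, which pins down t_c.
beta = 0.5
a = 1.0 / beta
vals = [hyp2f1(a, a, 1.0 + a, z) for z in (0.9, 0.99, 0.999)]
print(vals)   # rapidly growing as z -> 1: the finite-time singularity
```

    Here c − a − b = 1 − 1/β is negative for β < 1, which is exactly the condition for 2F1 to diverge at z = 1.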

  11. Topological phase transition of decoupling quasi-two-dimensional vortex pairs in La1−ySmyMnO3+δ (y = 0.85, 1.0)

    NASA Astrophysics Data System (ADS)

    Bukhanko, F. N.; Bukhanko, A. F.

    2016-10-01

    Characteristic signs of the universal Nelson-Kosterlitz jump of the superconducting liquid density in the temperature dependences of the magnetization of La1−ySmyMnO3+δ samples with samarium concentrations y = 0.85 and 1.0, measured in magnetic fields 100 Oe ≤ H ≤ 3.5 kOe, are detected. As the temperature increases, the sample with y = 0.85 exhibits a crescent-shaped singularity in the dc magnetization curve near the critical temperature of decoupling of vortex-antivortex pairs (TKT ≡ Tc ≈ 43 K), which is independent of the measuring magnetic field H and is characteristic of the dissociation of 2D vortex pairs. A similar singularity is also detected in the sample with samarium concentration y = 1.0 at a significantly lower temperature (TKT ≈ 12 K). The experimental results are explained in terms of the topological Kosterlitz-Thouless phase transition of dissociation of 2D vortex pairs in a quasi-two-dimensional network of weak Josephson couplings.

  12. Characterization of an elastic target in a shallow water waveguide by decomposition of the time-reversal operator.

    PubMed

    Philippe, Franck D; Prada, Claire; de Rosny, Julien; Clorennec, Dominique; Minonzio, Jean-Gabriel; Fink, Mathias

    2008-08-01

    This paper reports the results of an investigation into extracting the backscattered frequency signature of a target in a waveguide. Retrieving the target signature is difficult because it is blurred by waveguide reflections and modal interference. It is shown that the decomposition of the time-reversal operator method provides a solution to this problem. Using modal theory, this paper shows that the first singular value associated with a target is proportional to the backscattering form function. It is linked to the waveguide geometry through a factor that depends only weakly on frequency as long as the target is far from the boundaries. Using the same approach, the second singular value is shown to be proportional to the second derivative of the angular form function, which is a relevant parameter for target identification. Within this framework the coupling between two targets is considered. Small-scale experimental studies are performed in the 3.5 MHz frequency range for 3 mm spheres in a 28 mm deep and 570 mm long waveguide and confirm the theoretical results.
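
    The rank structure behind decomposition of the time-reversal operator (the DORT method) is easy to reproduce: the array response matrix of well-resolved point scatterers is a sum of outer products of steering vectors, and its singular values sort the scatterers by reflectivity. A hedged numpy sketch in free space with arbitrary units (an assumption of this sketch, not the waveguide case analyzed in the record):

```python
import numpy as np

# Response matrix K = sum_j s_j g_j g_j^T of well-resolved point scatterers,
# where g_j is the steering (Green's) vector from an N-element array.
N = 32
k_wave = 2.0 * np.pi / 0.01                    # wavenumber, arbitrary units
array_x = np.linspace(-0.1, 0.1, N)

def steering(x0, z0):
    r = np.hypot(array_x - x0, z0)
    return np.exp(1j * k_wave * r) / r         # point-source Green's vector

g1, g2 = steering(-0.05, 1.0), steering(0.06, 1.0)
s1, s2 = 1.0, 0.3                              # scatterer reflectivities
K = s1 * np.outer(g1, g1) + s2 * np.outer(g2, g2)   # reciprocity: K = K^T

sv = np.linalg.svd(K, compute_uv=False)
print(sv[:3] / sv[0])   # two dominant values; K has exact rank 2
```

    The second-to-first singular value ratio tracks the reflectivity ratio of the two scatterers (up to the norms and residual correlation of the steering vectors), which is the basis for using singular values as target signatures.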

  13. The little sibling of the big rip singularity

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Errahmani, Ahmed; Martín-Moruno, Prado; Ouali, Taoufik; Tavakoli, Yaser

    2015-07-01

    In this paper, we present a new cosmological event, which we name the little sibling of the big rip. This event is much smoother than the big rip singularity. When the little sibling of the big rip is reached, the Hubble rate and the scale factor blow up, but the cosmic-time derivative of the Hubble rate does not. This abrupt event takes place at infinite cosmic time, where the scalar curvature explodes. We show that a doomsday à la little sibling of the big rip is compatible with an accelerating universe; indeed, at present it would perfectly mimic a ΛCDM scenario. It turns out that, even though the event seems harmless as it takes place in the infinite future, the bound structures in the universe would be unavoidably destroyed a finite cosmic time from now. The model can be motivated by requiring that the weak energy condition not be strongly violated in our universe, and it could give us some hints about the status of recently formulated nonlinear energy conditions.

  14. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  15. Evaluation of gravitational curvatures of a tesseroid in spherical integral kernels

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-Le; Shen, Wen-Bin

    2018-04-01

    Proper understanding of how the Earth's mass distributions and redistributions influence the Earth's gravity field-related functionals is crucial for numerous applications in geodesy, geophysics and related geosciences. Calculations of the gravitational curvatures (GC) have been proposed in geodesy in recent years. In view of future satellite missions, the sixth-order developments of the gradients are becoming requisite. In this paper, a set of 3D integral GC formulas of a tesseroid mass body is provided with spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, the numerical expressions of the 3D GC formulas are provided up to sixth order. Moreover, numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas up to sixth order. Analogous to other gravitational effects (e.g., gravitational potential, gravity vector, gravity gradient tensor), it is found numerically that the very-near-area problem and the polar singularity problem exist in the GC east-east-radial, north-north-radial and radial-radial-radial components in the spatial domain; compared to the other gravitational effects, the relative approximation errors of the GC components are larger due to the influence not only of the geocentric distance but also of the latitude. This study shows that the magnitude of each term of the nonzero GC functionals for a grid resolution of 15′ × 15′ at GOCE satellite height can reach about 10^{-16} m^{-1} s^{-2} for zero order, 10^{-24} or 10^{-23} m^{-1} s^{-2} for second order, 10^{-29} m^{-1} s^{-2} for fourth order and 10^{-35} or 10^{-34} m^{-1} s^{-2} for sixth order, respectively.

  16. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  17. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  18. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information in the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension to out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over state-of-the-art parametric kernel methods.
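    The out-of-sample problem described above can be illustrated with the classical Nyström-style spectral extension, a simpler cousin of the paper's hyper-RKHS regression (this sketch is not the authors' method; the function names and the RBF stand-in for a learned nonparametric kernel are illustrative): a kernel matrix learned on the training points is eigendecomposed, and a new point is embedded through its kernel similarities to the training set.

    ```python
    import numpy as np

    def nystrom_embed(K_train, r):
        """Rank-r spectral embedding of the training points from their kernel matrix."""
        w, V = np.linalg.eigh(K_train)
        order = np.argsort(w)[::-1][:r]          # keep the r largest eigenvalues
        w, V = w[order], V[:, order]
        return V * np.sqrt(w), (w, V)

    def nystrom_extend(k_new, w, V):
        """Out-of-sample embedding: k_new holds kernel values to all training points."""
        return (k_new @ V) / np.sqrt(w)

    # toy data; an RBF kernel stands in for a learned nonparametric kernel matrix
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq)

    emb, (w, V) = nystrom_embed(K, r=6)
    # extending a training point through its own kernel row recovers its embedding
    z0 = nystrom_extend(K[0], w, V)
    ```

    With full rank (r equal to the number of training points) the extension is exact on the training set, since k_i·V_j = w_j V_j[i]; the transductive limitation only disappears for genuinely new points when the kernel values k_new can be evaluated, which is precisely what the paper's regression step supplies.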

  19. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  20. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend our understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.

  1. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend our understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed. PMID:27070143

  2. Theory of High-T{sub c} Superconducting Cuprates Based on Experimental Evidence

    DOE R&D Accomplishments Database

    Abrikosov, A. A.

    1999-12-10

    A model of superconductivity in layered high-temperature superconducting cuprates is proposed, based on the extended saddle point singularities in the electron spectrum, weak screening of the Coulomb interaction, and phonon-mediated interaction between electrons plus a small short-range repulsion of Hund's, or spin-fluctuation, origin. This makes it possible to explain the large values of T{sub c}, features of the isotope effect on oxygen and copper, the existence of two types of the order parameter, the peak in the inelastic neutron scattering, the positive curvature of the upper critical field as a function of temperature, etc.

  3. Quantitative estimation of the energy flux during an explosive chromospheric evaporation in a white light flare kernel observed by Hinode, IRIS, SDO, and RHESSI

    NASA Astrophysics Data System (ADS)

    Lee, Kyoung-Sun; Imada, Shinsuke; Kyoko, Watanabe; Bamba, Yumi; Brooks, David H.

    2016-10-01

    An X1.6 flare that occurred in AR 12192 on 2014 October 22 at 14:02 UT was observed by Hinode, IRIS, SDO, and RHESSI. We analyze a bright kernel which produces a white light (WL) flare with continuum enhancement and a hard X-ray (HXR) peak. Taking advantage of the spectroscopic observations of IRIS and Hinode/EIS, we measure the temporal variation of the plasma properties in the bright kernel in the chromosphere and corona. We found that explosive evaporation was observed when the WL emission occurred, even though the intensity enhancement in hotter lines is quite weak. The temporal correlation of the WL emission, HXR peak, and evaporation flows indicates that the WL emission was produced by accelerated electrons. To understand the white light emission processes, we calculated the deposited energy flux from the non-thermal electrons observed by RHESSI and compared it to the dissipated energy estimated from the chromospheric line (Mg II triplet) observed by IRIS. The deposited energy flux from the non-thermal electrons is about 3.1 × 10^10 erg cm^-2 s^-1 when we consider a cut-off energy of 20 keV. The estimated energy flux from the temperature changes in the chromosphere measured from the Mg II subordinate line is about 4.6-6.7 × 10^9 erg cm^-2 s^-1, ~15-22% of the deposited energy. By comparing these estimated energy fluxes we conclude that the continuum enhancement was directly produced by the non-thermal electrons.

  4. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  5. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  7. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  8. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  9. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
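    A minimal sketch of the composite-kernel KELM idea behind this record (assuming the standard kernel-ELM closed form; the QPSO weight search, the wavelet kernel, and the e-nose data are not reproduced, and all names and fixed weights here are illustrative): base kernels are mixed with weights, and the output weights solve a single regularized linear system.

    ```python
    import numpy as np

    def rbf(X, Z, gamma=1.0):
        sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def poly(X, Z, degree=2):
        return (X @ Z.T + 1.0) ** degree

    def composite(X, Z, weights=(0.6, 0.4)):
        # weighted sum of base kernels; the paper tunes such weights with QPSO
        return weights[0] * rbf(X, Z) + weights[1] * poly(X, Z)

    def kelm_fit(X, y, C=1e6):
        # standard kernel-ELM solution: beta = (I/C + K)^{-1} y
        K = composite(X, X)
        return np.linalg.solve(np.eye(len(X)) / C + K, y)

    def kelm_predict(X_train, beta, X_new):
        return composite(X_new, X_train) @ beta

    rng = np.random.default_rng(1)
    X = rng.normal(size=(8, 3))
    y = np.sin(X).sum(axis=1)
    beta = kelm_fit(X, y)
    err = np.abs(kelm_predict(X, beta, X) - y).max()  # near-interpolation for large C
    ```

    Because the composite kernel is a positively weighted sum of positive semidefinite kernels, it remains a valid kernel, which is what lets the combination coefficients be treated as external parameters of the same closed-form solver.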

  10. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross-validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
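    A sketch of the truncated distance kernel as described (the function name and the choice of ρ are illustrative): k(x, z) = max(ρ − ‖x − z‖₁, 0), which vanishes outside an ℓ1-ball of radius ρ, giving the locally supported, piecewise-linear behavior the abstract refers to.

    ```python
    import numpy as np

    def tl1_kernel(X, Z, rho):
        # truncated l1-distance kernel: nonlinear globally, exactly zero
        # for pairs whose l1 distance exceeds rho
        D = np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=-1)
        return np.maximum(rho - D, 0.0)

    X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
    K = tl1_kernel(X, X, rho=3.0)
    # diagonal equals rho; the pair at l1 distance 10 > rho contributes zero
    ```

    Since the resulting Gram matrix is not guaranteed positive semidefinite, it would typically be passed to a toolbox through a precomputed-kernel interface (e.g. scikit-learn's `SVC(kernel='precomputed')`), matching the abstract's remark that the kernel can be used by replacing the kernel evaluation.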

  11. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  12. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
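    The positive-eigenvalue procedure described above can be sketched as follows (a simplified reconstruction, not the paper's code: the Gabor front end is omitted, the exponent `d` is illustrative, and the sign convention for negative inner products is an assumption): form the fractional-power polynomial Gram matrix, center it, and keep only eigenvectors associated with positive eigenvalues so that real KPCA features result.

    ```python
    import numpy as np

    def frac_poly_kpca(X, d=0.8, n_comp=2):
        G = X @ X.T
        # fractional power polynomial model: signed |x.y|^d, which is
        # indefinite in general (so some eigenvalues may be negative)
        K = np.sign(G) * np.abs(G) ** d
        n = len(X)
        J = np.eye(n) - np.ones((n, n)) / n        # double-centering projector
        w, V = np.linalg.eigh(J @ K @ J)
        order = np.argsort(w)[::-1]
        w, V = w[order], V[:, order]
        keep = w > 1e-10                           # only positive eigenvalues
        w, V = w[keep][:n_comp], V[:, keep][:, :n_comp]
        return V * np.sqrt(w)                      # real-valued KPCA features

    rng = np.random.default_rng(2)
    feats = frac_poly_kpca(rng.normal(size=(10, 4)))
    ```

    Discarding the negative-eigenvalue directions is what keeps the projected features real, mirroring the paper's remark about indefinite Gram matrices for both fractional-power polynomial and sigmoid kernels.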

  13. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  14. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  15. Color-suppression of non-planar diagrams in bosonic bound states

    NASA Astrophysics Data System (ADS)

    Alvarenga Nogueira, J. H.; Ji, Chueng-Ryong; Ydrefors, E.; Frederico, T.

    2018-02-01

    We study the suppression of non-planar diagrams in a scalar QCD model of a meson system in 3 + 1 space-time dimensions due to the inclusion of the color degrees of freedom. As a prototype of the color-singlet meson, we consider a flavor-nonsinglet system consisting of a scalar-quark and a scalar-antiquark with equal masses exchanging a scalar-gluon of a different mass, which is investigated within the framework of the homogeneous Bethe-Salpeter equation. The equation is solved by using the Nakanishi representation for the manifestly covariant bound-state amplitude and its light-front projection. The resulting non-singular integral equation is solved numerically. The damping of the impact of the cross-ladder kernel on the binding energies is studied in detail. The color-suppression of the cross-ladder effects on the light-front wave function and the elastic electromagnetic form factor is also discussed. As our results show, the suppression is quite significant for Nc = 3, which supports the use of rainbow-ladder truncations in practical non-perturbative calculations within QCD.

  16. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

    Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncation error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.
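    The convolution-form idea can be illustrated in Cartesian coordinates (the paper works in polar coordinates and replaces the softening below with an exact kernel integral near the singularity; the softening parameter `eps` here is purely illustrative): the potential of a thin disk is a discrete convolution of the surface density with a 1/r kernel, evaluated in near-linear time with zero-padded FFTs.

    ```python
    import numpy as np

    def potential_fft(sigma, dx=1.0, eps=0.5):
        ny, nx = sigma.shape
        # kernel sampled at every cell separation (a softened stand-in
        # for the paper's singularity-free kernel integral)
        dyv = (np.arange(2 * ny - 1) - (ny - 1)) * dx
        dxv = (np.arange(2 * nx - 1) - (nx - 1)) * dx
        DY, DX = np.meshgrid(dyv, dxv, indexing="ij")
        kern = -1.0 / np.sqrt(DX**2 + DY**2 + (eps * dx) ** 2)
        P, Q = 3 * ny - 2, 3 * nx - 2   # zero-pad: linear, not circular, convolution
        c = np.fft.irfft2(np.fft.rfft2(sigma, s=(P, Q)) * np.fft.rfft2(kern, s=(P, Q)), s=(P, Q))
        return c[ny - 1:2 * ny - 1, nx - 1:2 * nx - 1] * dx * dx

    rng = np.random.default_rng(3)
    sig = rng.random((5, 5))
    pot = potential_fft(sig)

    # direct O(N^2) double sum for verification
    direct = np.zeros_like(sig)
    for i in range(5):
        for j in range(5):
            for k in range(5):
                for l in range(5):
                    r = np.sqrt((i - k) ** 2 + (j - l) ** 2 + 0.5 ** 2)
                    direct[i, j] += -sig[k, l] / r
    ```

    The zero-padding is what avoids the artificial periodic boundary conditions the abstract mentions; the FFT evaluation matches the direct sum while costing O(N log N) instead of O(N²).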

  17. Multi-Regge kinematics and the moduli space of Riemann spheres with marked points

    DOE PAGES

    Del Duca, Vittorio; Druc, Stefan; Drummond, James; ...

    2016-08-25

    We show that scattering amplitudes in planar N = 4 Super Yang-Mills in multi-Regge kinematics can naturally be expressed in terms of single-valued iterated integrals on the moduli space of Riemann spheres with marked points. As a consequence, scattering amplitudes in this limit can be expressed as convolutions that can easily be computed using Stokes' theorem. We apply this framework to MHV amplitudes to leading-logarithmic accuracy (LLA), and we prove that at L loops all MHV amplitudes are determined by amplitudes with up to L + 4 external legs. We also investigate non-MHV amplitudes, and we show that they can be obtained by convoluting the MHV results with a certain helicity flip kernel. We classify all leading singularities that appear at LLA in the Regge limit for arbitrary helicity configurations and any number of external legs. In conclusion, we use our new framework to obtain explicit analytic results at LLA for all MHV amplitudes up to five loops and all non-MHV amplitudes with up to eight external legs and four loops.

  18. VINCIA for hadron colliders

    DOE PAGES

    Fischer, Nadine; Prestel, S.; Ritzmann, M.; ...

    2016-10-28

    We present the first public implementation of antenna-based QCD initial- and final-state showers. The shower kernels are 2→3 antenna functions, which capture not only the collinear dynamics but also the leading soft (coherent) singularities of QCD matrix elements. We define the evolution measure to be inversely proportional to the leading poles, hence gluon emissions are evolved in a p⊥ measure inversely proportional to the eikonal, while processes that only contain a single pole (e.g., g → qq¯) are evolved in virtuality. Non-ordered emissions are allowed, suppressed by an additional power of 1/Q². Recoils and kinematics are governed by exact on-shell 2 → 3 phase-space factorisations. This first implementation is limited to massless QCD partons and colourless resonances. Tree-level matrix-element corrections are included for QCD up to O(α_s^4) (4 jets), and for Drell-Yan and Higgs production up to O(α_s^3) (V/H + 3 jets). Finally, the resulting algorithm has been made publicly available in Vincia 2.0.

  19. Modelling groundwater fractal flow with fractional differentiation via Mittag-Leffler law

    NASA Astrophysics Data System (ADS)

    Ahokposi, D. P.; Atangana, Abdon; Vermeulen, D. P.

    2017-04-01

    Modelling the flow of groundwater within a network of fractures is perhaps one of the most difficult exercises within the field of geohydrology. This physical problem has attracted the attention of several scientists across the globe. Two different types of differentiation have already been used in attempts to model this problem: classical and fractional differentiation. In this paper, we employed the most recent concept of differentiation, based on the non-local and non-singular kernel known as the generalized Mittag-Leffler function, to reshape the model of groundwater fractal flow. We presented the existence of a positive solution of the new model. Using the fixed-point approach, we established the uniqueness of the positive solution. We solved the new model with three different numerical schemes: implicit, explicit and Crank-Nicolson. Experimental data collected from four constant-discharge tests conducted in a typical fractured crystalline rock aquifer of the Northern Limb (Bushveld Complex) in the Limpopo Province (South Africa) are compared with the numerical solutions. It is worth noting that the four boreholes (BPAC1, BPAC2, BPAC3, and BPAC4) are located on faults.

  20. Distinguishing autofluorescence of normal, benign, and cancerous breast tissues through wavelet domain correlation studies.

    PubMed

    Gharekhan, Anita H; Arora, Siddharth; Oza, Ashok N; Sureshkumar, Mundan B; Pradhan, Asima; Panigrahi, Prasanta K

    2011-08-01

    Using the multiresolution ability of wavelets and the effectiveness of singular value decomposition (SVD) in identifying statistically robust parameters, we find a number of local and global features of human breast tissue fluorescence, capturing spectral correlations in the co- and cross-polarized channels at different scales. The copolarized component, being sensitive to intrinsic fluorescence, shows different behavior for normal, benign, and cancerous tissues in the emission domain of known fluorophores, whereas the perpendicular component, being more prone to the diffusive effect of scattering, reveals differences between malignant, normal, and benign tissues in the kernel-smoother density estimates applied to the principal components. The eigenvectors corresponding to the dominant eigenvalues of the correlation matrix in SVD also exhibit significant differences between the three tissue types, which clearly reflects the differences in spectral correlation behavior. Interestingly, the most significant distinguishing feature manifests in the perpendicular component, corresponding to the porphyrin emission range in the cancerous tissue. The fact that the perpendicular component is strongly influenced by depolarization, and that porphyrin emission in cancerous tissue has been found to be strongly depolarized, may explain this observation.

  1. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  2. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks, show the superiority of the proposed method over state-of-the-art methods.
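
    A minimal sketch of the sampling idea, assuming a Gaussian (RBF) kernel and using plain input-space k-means centers as a stand-in for the kernel k-means centers (all function names here are illustrative, not the authors' code):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmeans(X, k, iters=50, seed=0):
    # Plain Lloyd's k-means; for the RBF kernel, input-space centers are a
    # common stand-in for the kernel-space centers analyzed in the paper.
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return C

def nystrom(X, landmarks, gamma=1.0):
    # Nystrom approximation K ~= C W^+ C^T built from landmark points;
    # the pseudo-inverse is truncated for numerical stability.
    C = rbf(X, landmarks, gamma)
    W = rbf(landmarks, landmarks, gamma)
    return C @ np.linalg.pinv(W, rcond=1e-6) @ C.T
```

With landmarks chosen this way, the Frobenius error of the approximation tracks the k-means quantization error, which is the quantity the bound above controls.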

  3. Existence and uniqueness of weak solutions of the compressible spherically symmetric Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Huang, Xiangdi

    2017-02-01

    One of the most influential fundamental tools in harmonic analysis is the Riesz transform. It maps Lp functions to Lp functions for any p ∈ (1, ∞), which plays an important role in the study of singular operators. As an application in fluid dynamics, the norm equivalence between ‖∇u‖Lp and ‖div u‖Lp + ‖curl u‖Lp is well established for p ∈ (1, ∞). However, since Riesz operators send bounded functions only to BMO functions, there is no hope of bounding ‖∇u‖L∞ in terms of ‖div u‖L∞ + ‖curl u‖L∞. As pointed out by Hoff (2006) [11], this is the main obstacle to obtaining uniqueness of weak solutions for isentropic compressible flows. Fortunately, based on new observations (see Lemma 2.2), we derive the exact estimate ‖∇u‖L∞ ≤ (2 + 1/N)‖div u‖L∞ for any N-dimensional radially symmetric vector function u. As a direct application, we give an affirmative answer to the open problem of uniqueness of some weak solutions to compressible spherically symmetric flows in a bounded ball.

  4. Possible 3rd order phase transition at T=0 in 4D gluodynamics

    NASA Astrophysics Data System (ADS)

    Li, L.; Meurice, Y.

    2006-02-01

    We revisit the question of the convergence of lattice perturbation theory for a pure SU(3) lattice gauge theory in four dimensions. Using a series for the average plaquette up to order 10 in the weak coupling parameter β^-1, we show that the analysis of the extrapolated ratio and the extrapolated slope suggests the possibility of a nonanalytical power behavior of the form (1/β - 1/5.7(1))^1.0(1), in agreement with another analysis based on the same assumption. This would imply that the third derivative of the free energy density diverges near β = 5.7. We show that the peak in the third derivative of the free energy present on 4^4 lattices disappears if the size of the lattice is increased isotropically up to a 10^4 lattice. On the other hand, on 4×L^3 lattices, a jump in the third derivative persists when L increases, and follows closely the known values of βc for the first-order finite-temperature transition. We show that the apparent contradiction at zero temperature can be resolved by moving the singularity into the complex 1/β plane. If the imaginary part Γ of the location of the singularity is within the range 0.001 < Γ < 0.01, it is possible to keep the second derivative of P within an acceptable range without drastically affecting the behavior of the perturbative coefficients. We discuss the possibility of checking the existence of these complex singularities by using the strong coupling expansion or by calculating the zeroes of the partition function.

  5. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel at an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel for the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is significantly more accurate than the ASM kernel, achieving better performance on most datasets.

  6. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  7. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  8. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  9. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
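
    The computational appeal of circulant structure can be illustrated with a one-level sketch (a hypothetical helper, not the authors' multilevel algorithm): on a uniform grid, a stationary kernel matrix can be replaced by a circulant surrogate whose matrix-vector products cost O(n log n) via the FFT.

```python
import numpy as np

def circulant_approx_matvec(kfunc, n, v):
    # Approximate K @ v for K[i, j] = kfunc(|i - j|) on a uniform 1-D grid
    # by a circulant surrogate that wraps distances around the ring; a
    # circulant matrix is diagonalized by the FFT, so the product is a
    # circular convolution computed in O(n log n) instead of O(n^2).
    d = np.minimum(np.arange(n), n - np.arange(n))  # wrapped distances
    c = kfunc(d.astype(float))                      # first column of the circulant
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))
```

For rapidly decaying kernels the wrap-around entries are tiny, so the circulant product is close to the true Toeplitz product; the multilevel version of the paper extends this idea to tensor-product grids.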

  10. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels.

  11. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels due to low food consumption because of its bitterness, though there was no loss in weight in that case. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels; the Net Protein Ratio values for the latter two kernels were nearly equal.

  12. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples of successful kernel-based learning methods. We first give a short background on Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios, including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.

  13. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  14. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel, consisting of the combination of a low-pass and a high-pass kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. The Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable Hounsfield units, and image noise.

  15. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality.

  16. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.

  17. Functional redundancy and sensitivity of fish assemblages in European rivers, lakes and estuarine ecosystems.

    PubMed

    Teichert, Nils; Lepage, Mario; Sagouis, Alban; Borja, Angel; Chust, Guillem; Ferreira, Maria Teresa; Pasquaud, Stéphanie; Schinegger, Rafaela; Segurado, Pedro; Argillier, Christine

    2017-12-14

    The impact of species loss on ecosystem functioning depends on the amount of trait similarity between species, i.e. functional redundancy, but it is also influenced by the order in which species are lost. Here we investigated redundancy and sensitivity patterns across fish assemblages in lakes, rivers and estuaries. Several scenarios of species extinction were simulated to determine whether the loss of vulnerable species (those with a high propensity of extinction when facing threats) causes greater functional alteration than random extinction. Our results indicate that functional redundancy tended to increase with species richness in lakes and rivers, but not in estuaries. We demonstrated that i) in the three systems, some combinations of functional traits are supported by non-redundant species, ii) rare species in rivers and estuaries support singular functions not shared by dominant species, and iii) the loss of vulnerable species can induce greater functional alteration in rivers than in lakes and estuaries. Overall, the functional structure of fish assemblages in rivers is weakly buffered against species extinction because vulnerable species support singular functions. More specifically, a hotspot of functional sensitivity was highlighted in the Iberian Peninsula, which emphasizes the usefulness of quantitative criteria for determining conservation priorities.

  18. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
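
    The two discretization routes compared above can be sketched in one dimension (a simplified, hypothetical version; the study used circular two-dimensional kernels):

```python
import numpy as np
from math import erf, sqrt

def gaussian_kernel_weights(sigma, half_width, method="integrate"):
    # Discretize a 1-D Gaussian dispersal kernel onto unit grid cells,
    # either by evaluating the density at cell centers or by integrating
    # the density over each cell via the Gaussian CDF on [c-0.5, c+0.5].
    cells = np.arange(-half_width, half_width + 1)
    if method == "center":
        w = np.exp(-cells**2 / (2 * sigma**2))
    else:
        cdf = lambda x: 0.5 * (1 + erf(x / (sigma * sqrt(2))))
        w = np.array([cdf(c + 0.5) - cdf(c - 0.5) for c in cells])
    return w / w.sum()   # normalize so the kernel conserves population
```

Cell integration effectively convolves the Gaussian with the cell footprint, which inflates the kernel variance by the cell's own variance (1/12 for unit cells); the cell-center method avoids that inflation but degrades faster as σ shrinks below the grid size, consistent with the thresholds quoted above.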

  19. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima Blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) are a group of secondary metabolites that occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High-performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes; rhein and emodin were the main components of AQS in the browned kernels.

  20. Knotted fields and explicit fibrations for lemniscate knots

    NASA Astrophysics Data System (ADS)

    Bode, B.; Dennis, M. R.; Foster, D.; King, R. P.

    2017-06-01

    We give an explicit construction of complex maps whose nodal lines have the form of lemniscate knots. We review the properties of lemniscate knots, defined as closures of braids where all strands follow the same transverse (1, ℓ) Lissajous figure, and are therefore a subfamily of spiral knots generalizing the torus knots. We then prove that such maps exist and are in fact fibrations with appropriate choices of parameters. We describe how this may be useful in physics for creating knotted fields, in quantum mechanics, optics and generalizing to rational maps with application to the Skyrme-Faddeev model. We also prove how this construction extends to maps with weakly isolated singularities.

  1. Detection and identification of concealed weapons using matrix pencil

    NASA Astrophysics Data System (ADS)

    Adve, Raviraj S.; Thayaparan, Thayananthan

    2011-06-01

    The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing an effective approach to obtaining the resonant frequencies in a measurement. The technique, based on the Matrix Pencil method, a scheme for model-based parameter estimation, also provides amplitude information, hence providing a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovier, A.; Klein, A.

    We show that the formal perturbation expansion of the invariant measure for the Anderson model in one dimension has singularities at all energies E_0 = 2 cos π(p/q); we derive a modified expansion near these energies that we show to have finite coefficients to all orders. Moreover, we show that the first q - 3 of them coincide with those of the naive expansion, while there is an anomaly in the (q - 2)th term. This also gives a weak disorder expansion for the Liapunov exponent and for the density of states. This generalizes previous results of Kappus and Wegner and of Derrida and Gardner.

  3. Multifractality and heteroscedastic dynamics: An application to time series analysis

    NASA Astrophysics Data System (ADS)

    Nascimento, C. M.; Júnior, H. B. N.; Jennings, H. D.; Serva, M.; Gleria, Iram; Viswanathan, G. M.

    2008-01-01

    An increasingly important problem in physics concerns scale invariance symmetry in diverse complex systems, often characterized by heteroscedastic dynamics. We investigate the nature of the relationship between the heteroscedastic and fractal aspects of the dynamics of complex systems, by analyzing the sensitivity to heteroscedasticity of the scaling properties of weakly nonstationary time series. By using multifractal detrended fluctuation analysis, we study the singularity spectra of currency exchange rate fluctuations, after partially or completely eliminating n-point correlations via data shuffling techniques. We conclude that heteroscedasticity can significantly increase multifractality and interpret these findings in the context of self-organizing and adaptive complex systems.

  4. Resolvability of regional density structure and the road to direct density inversion - a principal-component approach to resolution analysis

    NASA Astrophysics Data System (ADS)

    Płonka, Agnieszka; Fichtner, Andreas

    2017-04-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown, since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess whether 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms, concluding that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes; while this can produce significant biases in velocity and Q estimates, seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. We apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. The objective is to find a principal kernel that would maximize the sensitivity to density, potentially allowing for density resolution that is as independent as possible. We find that surface (mostly Rayleigh) waves have significant sensitivity to density, and that the trade-off with velocity is negligible. We also show the preliminary results of the inversion.
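
    In a toy setting, the principal-component test described here amounts to inspecting the singular-value decay of a matrix whose rows are flattened sensitivity kernels (a schematic sketch, not the authors' workflow):

```python
import numpy as np

def kernel_pca_spectrum(K):
    # Rows of K are flattened sensitivity kernels (e.g. for P velocity,
    # S velocity, density). Returns the normalized singular values of the
    # row-centered matrix: a spectrum dominated by one value means the
    # kernels are nearly linearly dependent, i.e. the parameters trade off.
    s = np.linalg.svd(K - K.mean(axis=0), compute_uv=False)
    return s / s.sum()
```

A flat spectrum indicates kernels that constrain the parameters independently; a sharply decaying one signals the strong velocity-density trade-off discussed above.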

  5. Investigating light curve modulation via kernel smoothing. II. New additional modes in single-mode OGLE classical Cepheids

    NASA Astrophysics Data System (ADS)

    Süveges, Maria; Anderson, Richard I.

    2018-04-01

    Detailed knowledge of the variability of classical Cepheids, in particular their modulations and mode composition, provides crucial insight into stellar structure and pulsation. However, tiny modulations of the dominant radial-mode pulsation were recently found to be very frequent, possibly ubiquitous in Cepheids, which makes secondary modes difficult to detect and analyse, since these modulations can easily mask the potentially weak secondary modes. The aim of this study is to re-investigate the secondary mode content in the sample of OGLE-III and -IV single-mode classical Cepheids using kernel regression with adaptive kernel width for pre-whitening, instead of using a constant-parameter model. This leads to a more precise removal of the modulated dominant pulsation, and enables a more complete survey of secondary modes with frequencies outside a narrow range around the primary. Our analysis reveals that significant secondary modes occur more frequently among first overtone Cepheids than previously thought. The mode composition appears significantly different in the Large and Small Magellanic Clouds, suggesting a possible dependence on chemical composition. In addition to the formerly identified non-radial mode at P2 ≈ 0.6…0.65P1 (0.62-mode), and a cluster of modes with near-primary frequency, we find two more candidate non-radial modes. One is a numerous group of secondary modes with P2 ≈ 1.25P1, which may represent the fundamental of the 0.62-mode, supposed to be the first harmonic of an l ∈ {7, 8, 9} non-radial mode. The other new mode is at P2 ≈ 1.46P1, possibly analogous to a similar, rare mode recently discovered among first overtone RR Lyrae stars.
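
    Kernel regression with a per-point bandwidth, of the kind used above for pre-whitening, can be sketched as follows (an illustrative Nadaraya-Watson smoother, not the authors' pipeline):

```python
import numpy as np

def kernel_regression(x, y, x_eval, h):
    # Nadaraya-Watson kernel regression with a Gaussian kernel; h may be a
    # scalar (fixed bandwidth) or an array giving an adaptive per-point
    # width, in the spirit of the adaptive kernel width described above.
    h = np.broadcast_to(np.asarray(h, float), x.shape)
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h[None, :]) ** 2)
    return (w * y).sum(1) / w.sum(1)
```

Passing an array for h, e.g. proportional to the local point spacing, makes the smoother adaptive; a scalar h recovers the constant-bandwidth case.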

  6. Multi-thermal observations of the 2010 October 16 flare:heating of a ribbon via loops, or a blast wave?

    NASA Astrophysics Data System (ADS)

    Christe, Steven; Inglis, A.; Aschwanden, M.; Dennis, B.

    2011-05-01

    On 2010 October 16, SDO/AIA observed its first flare using automatic exposure control. Coincidentally, this flare also exhibited a large number of interesting features. Firstly, a large ribbon significantly to the solar west of the flare kernel was ignited and was visible in all AIA wavelengths, raising the question of how this energy was deposited and how it relates to the main flare site. A faint blast wave also emanated from the flare kernel, visible in AIA and observed traveling to the solar west at an estimated speed of 1000 km/s. This blast wave is associated with a weak white-light CME observed with STEREO B and a Type II radio burst observed from Green Bank Observatory (GBSRBS). One possibility is that this blast wave is responsible for the heating of the ribbon. However, closer scrutiny reveals that the flare site and the ribbon are in fact connected magnetically via coronal loops which are heated during the main energy release. These loops are distinct from the expected hot, post-flare loops present within the main flare kernel. RHESSI spectra indicate that these loops are heated to approximately 10 MK in the immediate flare aftermath. Using the multi-temperature capabilities of AIA in combination with RHESSI, and by employing the cross-correlation mapping technique, we are able to measure the loop temperatures as a function of time over several post-flare hours and hence measure the loop cooling rate. We find that the time delay between the appearance of loops in the hottest channel, 131 Å, and the cool 171 Å channel, is 70 minutes. Yet the causality of this event remains unclear. Is the ribbon heated via these interconnected loops or via a blast wave?

  7. Spatiotemporal characteristics of elderly population’s traffic accidents in Seoul using space-time cube and space-time kernel density estimation

    PubMed Central

    Cho, Nahye; Son, Serin

    2018-01-01

    The purpose of this study is to analyze how the spatiotemporal characteristics of traffic accidents involving the elderly population in Seoul are changing by time period. We applied kernel density estimation and hotspot analyses to analyze the spatial characteristics of elderly people’s traffic accidents, and the space-time cube, emerging hotspot, and space-time kernel density estimation analyses to analyze the spatiotemporal characteristics. In addition, we analyzed elderly people’s traffic accidents by dividing cases into those in which the drivers were elderly people and those in which elderly people were victims of traffic accidents, and used the traffic accidents data in Seoul for 2013 for analysis. The main findings were as follows: (1) the hotspots for elderly people’s traffic accidents differed according to whether they were drivers or victims. (2) The hourly analysis showed that the hotspots for elderly drivers’ traffic accidents are in specific areas north of the Han River during the period from morning to afternoon, whereas the hotspots for elderly victims are distributed over a wide area from daytime to evening. (3) Monthly analysis showed that the hotspots are weak during winter and summer, whereas they are strong in the hiking and climbing areas in Seoul during spring and fall. Further, elderly victims’ hotspots are more sporadic than elderly drivers’ hotspots. (4) The analysis for the entire period of 2013 indicates that traffic accidents involving elderly people are increasing in specific areas on the north side of the Han River. We expect the results of this study to aid in reducing the number of traffic accidents involving elderly people in the future. PMID:29768453
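
The space-time kernel density estimate underlying such hotspot maps can be sketched as follows (synthetic events, not the Seoul data; the separable Gaussian kernel and the bandwidths are illustrative choices):

```python
import numpy as np

def stkde(xy, t, grid_xy, grid_t, h_s, h_t):
    """Separable Gaussian space-time KDE evaluated on (grid_t, grid_xy)."""
    d2 = ((grid_xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    ks = np.exp(-0.5 * d2 / h_s**2) / (2 * np.pi * h_s**2)
    kt = (np.exp(-0.5 * ((grid_t[:, None] - t[None, :]) / h_t) ** 2)
          / (np.sqrt(2 * np.pi) * h_t))
    return (ks[None, :, :] * kt[:, None, :]).mean(axis=2)

rng = np.random.default_rng(2)
events_xy = rng.normal(0.0, 1.0, (200, 2))    # accident locations (toy units)
events_t = rng.uniform(0.0, 24.0, 200)        # hour of day
grid_xy = np.array([[0.0, 0.0], [5.0, 5.0]])  # a central cell and a remote cell
grid_t = np.array([12.0])                     # noon time slice
dens = stkde(events_xy, events_t, grid_xy, grid_t, h_s=0.5, h_t=3.0)
print(dens)  # the cell near the event cluster has much higher density
```

Hotspots correspond to space-time cells where the estimated density is high.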

  8. Spatiotemporal characteristics of elderly population's traffic accidents in Seoul using space-time cube and space-time kernel density estimation.

    PubMed

    Kang, Youngok; Cho, Nahye; Son, Serin

    2018-01-01

    The purpose of this study is to analyze how the spatiotemporal characteristics of traffic accidents involving the elderly population in Seoul are changing by time period. We applied kernel density estimation and hotspot analyses to analyze the spatial characteristics of elderly people's traffic accidents, and the space-time cube, emerging hotspot, and space-time kernel density estimation analyses to analyze the spatiotemporal characteristics. In addition, we analyzed elderly people's traffic accidents by dividing cases into those in which the drivers were elderly people and those in which elderly people were victims of traffic accidents, and used the traffic accidents data in Seoul for 2013 for analysis. The main findings were as follows: (1) the hotspots for elderly people's traffic accidents differed according to whether they were drivers or victims. (2) The hourly analysis showed that the hotspots for elderly drivers' traffic accidents are in specific areas north of the Han River during the period from morning to afternoon, whereas the hotspots for elderly victims are distributed over a wide area from daytime to evening. (3) Monthly analysis showed that the hotspots are weak during winter and summer, whereas they are strong in the hiking and climbing areas in Seoul during spring and fall. Further, elderly victims' hotspots are more sporadic than elderly drivers' hotspots. (4) The analysis for the entire period of 2013 indicates that traffic accidents involving elderly people are increasing in specific areas on the north side of the Han River. We expect the results of this study to aid in reducing the number of traffic accidents involving elderly people in the future.

  9. Weak Magnetic Fields in Two Herbig Ae Systems: The SB2 AK Sco and the Presumed Binary HD 95881

    NASA Astrophysics Data System (ADS)

    Järvinen, S. P.; Carroll, T. A.; Hubrig, S.; Ilyin, I.; Schöller, M.; Castelli, F.; Hummel, C. A.; Petr-Gotzens, M. G.; Korhonen, H.; Weigelt, G.; Pogodin, M. A.; Drake, N. A.

    2018-05-01

    We report the detection of weak mean longitudinal magnetic fields in the Herbig Ae double-lined spectroscopic binary AK Sco and in the presumed spectroscopic Herbig Ae binary HD 95881, using observations with the High Accuracy Radial velocity Planet Searcher polarimeter (HARPSpol) attached to the European Southern Observatory's (ESO's) 3.6 m telescope. Employing a multi-line singular value decomposition method, we detect a mean longitudinal magnetic field ⟨Bz⟩ = −83 ± 31 G in the secondary component of AK Sco on one occasion. For HD 95881, we measure ⟨Bz⟩ = −93 ± 25 G and ⟨Bz⟩ = 105 ± 29 G at two different observing epochs. For all detections the false alarm probability is smaller than 10^-5. For the AK Sco system, we find that the accretion-diagnostic Na I doublet lines and the photospheric lines show intensity variations over the observing nights. The double-lined spectral appearance of HD 95881 is presented here for the first time.

  10. Morphological instabilities of rapidly solidified binary alloys under weak flow

    NASA Astrophysics Data System (ADS)

    Kowal, Katarzyna; Davis, Stephen

    2017-11-01

    Additive manufacturing, or three-dimensional printing, offers promising advantages over existing manufacturing techniques. However, it is still subject to a range of undesirable effects. One of these involves the onset of flow resulting from sharp thermal gradients within the laser melt pool, affecting the morphological stability of the solidified alloys. We examine the linear stability of the interface of a rapidly solidifying binary alloy under weak boundary-layer flow by performing an asymptotic analysis for a singular perturbation problem that arises as a result of departures from the equilibrium phase diagram. Under no flow, the problem involves cellular and pulsatile instabilities, stabilised by surface tension and attachment kinetics. We find that travelling waves appear as a result of flow and we map out the effect of flow on two absolute stability boundaries as well as on the cells and solute bands that have been observed in experiments under no flow. This work is supported by the National Institute of Standards and Technology [Grant Number 70NANB14H012].

  11. Precision muon physics

    NASA Astrophysics Data System (ADS)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ /μp, lepton mass ratio mμ /me, and proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  12. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  13. Einstein-Langevin and Einstein-Fokker-Planck equations for Oppenheimer-Snyder gravitational collapse in a spacetime with conformal vacuum fluctuations

    NASA Astrophysics Data System (ADS)

    Miller, Steven David

    1999-10-01

    A consistent extension of the Oppenheimer-Snyder gravitational collapse formalism is presented which incorporates stochastic, conformal, vacuum fluctuations of the metric tensor. This results in a tractable approach to studying the possible effects of vacuum fluctuations on collapse and singularity formation. The motivation here, is that it is known that coupling stochastic noise to a classical field theory can lead to workable methodologies that accommodate or reproduce many aspects of quantum theory, turbulence or structure formation. The effect of statistically averaging over the metric fluctuations gives the appearance of a deterministic Riemannian structure, with an induced non-vanishing cosmological constant arising from the nonlinearity. The Oppenheimer-Snyder collapse of a perfect fluid or dust star in the fluctuating or `turbulent' spacetime, is reformulated in terms of nonlinear Einstein-Langevin field equations, with an additional noise source in the energy-momentum tensor. The smooth deterministic worldlines of collapsing matter within the classical Oppenheimer-Snyder model, now become nonlinear Brownian motions due to the backreaction induced by vacuum fluctuations. As the star collapses, the matter worldlines become increasingly randomized since the backreaction coupling to the vacuum fluctuations is nonlinear; the input assumptions of the Hawking-Penrose singularity theorems should then be violated. Solving the nonlinear Einstein-Langevin field equation for collapse - via the Ito interpretation - gives a singularity-free solution, which is equivalent to the original Oppenheimer solution but with higher-order stochastic corrections; the original singular solution is recovered in the limit of zero vacuum fluctuations. 
The `geometro-hydrodynamics' of noisy gravitational collapse were also translated into an equivalent mathematical formulation in terms of nonlinear Einstein-Fokker-Planck (EFP) continuity equations with respect to comoving coordinates: these describe the collapse as a conserved flow of probability. A solution was found in the dilute limit of weak fluctuations, where the EFP equation is linearized. There is zero probability that the star collapses to a singular state in the presence of background vacuum fluctuations, but the singularity returns with unit probability when the fluctuations are reduced to zero. Finally, an EFP equation was considered with respect to standard exterior coordinates. Using the thermal Brownian motion paradigm, an exact stationary or equilibrium solution was found in the infinite standard time relaxation limit. The solution gives the conditions required for the final collapsed object (a black hole) to be in thermal equilibrium with the background vacuum fluctuations. From this solution, one recovers the Hawking temperature without using field theory. The stationary solution then seems to correspond to a black hole in thermal equilibrium with a fluctuating conformal scalar field, or the Hartle-Hawking state.

  14. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database, and the Banana dataset validate the effectiveness of the proposed method.
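
The recursive construction can be sketched as follows; the elementary kernels, the weights, and the element-wise exponential activation are illustrative assumptions, not the paper's exact architecture. The element-wise exponential of a positive semi-definite matrix is again positive semi-definite (its Hadamard-power series has nonnegative coefficients, and Hadamard powers of PSD matrices are PSD by the Schur product theorem), so the deep kernel stays valid:

```python
import numpy as np

def elementary_kernels(X):
    """Layer-0 kernels: a linear and an RBF Gram matrix."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    return [X @ X.T, np.exp(-d2)]

def deep_kernel(X, w1, w2):
    """Two-layer deep multiple kernel with element-wise exp activation."""
    ks = elementary_kernels(X)
    # layer 1: activation of nonnegative combinations (PSD-preserving)
    hidden = [np.exp(sum(w * K for w, K in zip(row, ks))) for row in w1]
    # layer 2: final nonnegative combination of intermediate kernels
    return sum(w * K for w, K in zip(w2, hidden))

rng = np.random.default_rng(7)
X = rng.standard_normal((30, 4))
w1 = [[0.5, 0.5], [0.1, 0.9]]  # hypothetical learned layer-1 weights
w2 = [0.7, 0.3]                # hypothetical layer-2 weights
K = deep_kernel(X, w1, w2)
print(K.shape)  # a valid (positive semi-definite) deep kernel matrix
```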

  15. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
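
The linear-combination construction can be sketched with a simple exponential single-neuron kernel (the kernel choice and the weights here are illustrative, not the paper's exact ones): the multineuron kernel is K(S, S') = Σ_i w_i k(s_i, s'_i), which reduces to the sum kernel when all w_i = 1.

```python
import numpy as np

def single_neuron_kernel(s, t, tau=0.1):
    """Simple exponential spike train kernel for one neuron."""
    if len(s) == 0 or len(t) == 0:
        return 0.0
    s, t = np.asarray(s), np.asarray(t)
    return float(np.exp(-np.abs(s[:, None] - t[None, :]) / tau).sum())

def multineuron_kernel(S, T, w):
    """Linear combination over neurons: K(S, T) = sum_i w_i k(S_i, T_i)."""
    return sum(wi * single_neuron_kernel(si, ti) for wi, si, ti in zip(w, S, T))

S = [[0.1, 0.5], [0.2]]    # spike times (s) for two neurons, trial 1
T = [[0.12, 0.48], [0.9]]  # the same two neurons, trial 2
w = [1.0, 0.5]             # per-neuron weights (hypothetical)
print(multineuron_kernel(S, T, w))
```

Learning the per-neuron weights w_i is what the R-convolution framing enables.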

  16. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity, and feasibility of operating a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy, including by-products, to input energy. The plant processes palm kernels into palm kernel oil. The procedure started with collecting the data needed as energy input, such as palm kernel prices, energy demand, and depreciation of the factory. The energy output and its by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products, shells and pulp. The energy equivalence of palm kernel oil was calculated to obtain the Energy Productivity Ratio (EPR) based on processing capacity per year. The investigation was carried out at the kernel oil processing plant PT-X at the Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicates that processing palm kernels into palm kernel oil is feasible in terms of energy productivity.
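
The EPR computation itself is a simple ratio; a minimal sketch with hypothetical figures (not the PT-X plant data):

```python
# Hypothetical figures (arbitrary energy units), not the PT-X plant data.
output_energy = 154.0  # kernel oil plus shell and pulp by-products
input_energy = 100.0   # kernel feedstock, process energy, depreciation
epr = output_energy / input_energy
print(epr, "feasible" if epr > 1.0 else "not feasible")
```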

  17. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
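
On binary marker codes, one standard form of the diffusion kernel (the Kondor-Lafferty kernel on the hypercube) reduces to k(x, y) = tanh(β)^d(x,y), with d the Hamming distance, which makes the "discretised Gaussian" analogy concrete. The genotype matrix below is hypothetical:

```python
import numpy as np

def diffusion_kernel(X, beta):
    """tanh(beta)**HammingDistance: diffusion kernel on binary marker codes."""
    d = (X[:, None, :] != X[None, :, :]).sum(-1)  # pairwise Hamming distances
    return np.tanh(beta) ** d

rng = np.random.default_rng(3)
X = rng.integers(0, 2, (5, 50))  # 5 individuals, 50 hypothetical binary markers
K = diffusion_kernel(X, beta=0.7)
print(K.shape)  # symmetric PSD matrix with unit diagonal
```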

  18. WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS

    PubMed Central

    MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN

    2013-01-01

    Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibilities in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second order elliptic interface problems. High order of convergence is numerically confirmed in both L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to solving many interface problems in which the solution possesses a certain singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second order numerical methods that work well for problems with low regularity in the solution. The best known numerical scheme in the literature is of order O(h^1) to O(h^1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h^1.75) to O(h^2) in the L∞ norm for C1 or Lipschitz continuous interfaces associated with a C1 or H2 continuous solution. PMID:24072935

  19. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  20. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.

  1. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
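
The subspace-sampling idea can be illustrated with a Nyström-style low-rank approximation (a generic technique, not necessarily the paper's exact AKCL scheme): kernel values are computed only against m sampled landmark points, and the full Gram matrix is reconstructed as K ≈ C W⁺ Cᵀ, avoiding the full n×n computation:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(8)
# clustered data, so the full kernel matrix is close to low rank
X = np.vstack([rng.normal(c, 0.3, (70, 2)) for c in (0.0, 5.0, 10.0)])
m = 20                                  # number of sampled landmark points
idx = rng.choice(len(X), m, replace=False)
C = rbf(X, X[idx])                      # n x m slice of the kernel matrix
W = rbf(X[idx], X[idx])                 # m x m landmark kernel matrix
K_approx = C @ np.linalg.pinv(W) @ C.T  # Nystrom reconstruction of K
K_full = rbf(X, X)                      # computed here only to check the error
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(round(err, 4))  # small: the sample captures the kernel's structure
```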

  2. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein entity interaction information from biomedical literature can help to build protein relation networks and design new drugs. More than 20 million literature abstracts are included in MEDLINE, the most authoritative textual database in the field of biomedicine, and their number grows exponentially over time. This rapid expansion of the biomedical literature can be difficult to absorb or analyze manually, so efficient and automated search engines are necessary to explore the biomedical literature using text mining techniques. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two methods fusing the feature kernel with the tag graph kernel are 53.43, 71.62, and 61.30%, and 55.47, 70.29, and 60.37%, respectively, indicating that the performance of the two kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernel method is better than that of the three separate single-kernel methods and the dual fused-kernel method used here on the five corpus sets.

  3. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  4. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  5. The Semantics of Plurals: A Defense of Singularism

    ERIC Educational Resources Information Center

    Florio, Salvatore

    2010-01-01

    In this dissertation, I defend "semantic singularism", which is the view that syntactically plural terms, such as "they" or "Russell and Whitehead", are semantically singular. A semantically singular term is a term that denotes a single entity. Semantic singularism is to be distinguished from "syntactic singularism", according to which…

  6. Shear nulling after PSF Gaussianisation: Moment-based weak lensing measurements with subpercent noise bias

    NASA Astrophysics Data System (ADS)

    Herbonnet, Ricardo; Buddendiek, Axel; Kuijken, Konrad

    2017-03-01

    Context. Current optical imaging surveys for cosmology cover large areas of sky. Exploiting the statistical power of these surveys for weak lensing measurements requires shape measurement methods with subpercent systematic errors. Aims: We introduce a new weak lensing shear measurement algorithm, shear nulling after PSF Gaussianisation (SNAPG), designed to avoid the noise biases that affect most other methods. Methods: SNAPG operates on images that have been convolved with a kernel that renders the point spread function (PSF) a circular Gaussian, and uses weighted second moments of the sources. The response of such second moments to a shear of the pre-seeing galaxy image can be predicted analytically, allowing us to construct a shear nulling scheme that finds the shear parameters for which the observed galaxies are consistent with an unsheared, isotropically oriented population of sources. The inverse of this nulling shear is then an estimate of the gravitational lensing shear. Results: We identify the uncertainty of the estimated centre of each galaxy as the source of noise bias, and incorporate an approximate estimate of the centroid covariance into the scheme. We test the method on extensive suites of simulated galaxies of increasing complexity, and find that it is capable of shear measurements with multiplicative bias below 0.5 percent.
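
The weighted second moments on which SNAPG operates can be sketched on a synthetic elliptical Gaussian "galaxy" (hypothetical sizes and weight scale; the Gaussianisation and nulling steps are omitted): Q_ij = Σ W(x) I(x) (x_i − x̄_i)(x_j − x̄_j) / Σ W(x) I(x), with ellipticity components e1 = (Qxx − Qyy)/(Qxx + Qyy) and e2 = 2Qxy/(Qxx + Qyy).

```python
import numpy as np

# Synthetic elliptical Gaussian "galaxy" (sigma_x = 4, sigma_y = 2 pixels)
y, x = np.mgrid[:64, :64].astype(float)
xc = yc = 31.5
img = np.exp(-((x - xc) ** 2 / (2 * 4.0**2) + (y - yc) ** 2 / (2 * 2.0**2)))
W = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2 * 8.0**2))  # weight function

norm = (W * img).sum()
qxx = (W * img * (x - xc) ** 2).sum() / norm
qyy = (W * img * (y - yc) ** 2).sum() / norm
qxy = (W * img * (x - xc) * (y - yc)).sum() / norm
e1 = (qxx - qyy) / (qxx + qyy)
e2 = 2 * qxy / (qxx + qyy)
print(round(e1, 3), round(e2, 3))  # elongated along x, so e1 > 0 and e2 ~ 0
```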

  7. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  8. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  9. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  10. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  11. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronically elevated blood glucose. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus by using the kernel k-means algorithm. Kernel k-means is an algorithm developed from k-means: it uses kernel learning to handle data that are not linearly separable, which is where it differs from ordinary k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well, and considerably better than SOM.

  12. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, the knowledge itself is only half the value; the other half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be exploited. This paper discusses the intricacies of developing utilities that draw on kernel information. In addition, algorithms, logic, and code for accessing kernel information are discussed. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can make use of kernel information are also discussed.

  13. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to optimize the recognition accuracy and efficiency of maize kernel breakage detection, this paper applies computer vision techniques and detects maize kernel breakage with a K-means clustering algorithm. First, the collected RGB images are converted to the Lab color space; the clarity of the original images is then evaluated with an energy function based on the 8-directional Sobel gradient. Finally, maize kernel breakage is detected using different image acquisition devices and different shooting angles. Broken maize kernels are identified by the color difference between intact and broken kernels. The clarity evaluation and the varied shooting angles verify that the clarity and shooting angle of the images directly influence feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
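The clarity-evaluation step can be illustrated with a gradient-energy focus measure (a hedged sketch: the paper's exact 8-direction Sobel template is not reproduced here, so the standard 3×3 Sobel pair stands in):

```python
import numpy as np

def filter2_valid(img, k):
    """3x3 cross-correlation over the 'valid' region (NumPy only)."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + out.shape[0], j:j + out.shape[1]]
    return out

def sobel_energy(img):
    """Clarity score: mean squared Sobel gradient magnitude.
    Sharper (in-focus) images have stronger edges, hence higher energy."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = filter2_valid(img, kx)
    gy = filter2_valid(img, kx.T)
    return float(np.mean(gx**2 + gy**2))

# A sharp checkerboard should score higher than a blurred copy of it.
ii = np.arange(64)
sharp = ((ii[:, None] // 8 + ii[None, :] // 8) % 2).astype(float)
blurred = sharp.copy()
for _ in range(4):                     # crude periodic box blur
    blurred = (blurred + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
               + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5.0
```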

  14. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction, we introduce a class of kernels for structured data that leverage a hierarchical probabilistic modeling of the phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are their adaptivity to the input domain and their ability to deal with structured data interpreted through a graphical model representation.

  15. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanut is susceptible to aflatoxin contamination, and both the source of the peanuts and the processing method considerably affect the aflatoxin content of the products. Therefore, a study on the aflatoxin and nutrient contents of peanuts collected from a local market and of their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe), and fried tempe; blended kernels (good and poor kernels) were processed into peanut sauce and tempe; and poor kernels were processed into tempe only. The results showed that good and blended kernels, which had high proportions of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. Aflatoxin B1 increased most in peanut tempe prepared from poor kernels, followed by blended kernels and then good kernels; however, it decreased by 61.2% on average after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels as ingredients, followed by heat processing, would decrease the aflatoxin content of the products.

  16. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an EM algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
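The core idea of inverting the blur only on reliable Fourier entries can be sketched in NumPy (an illustrative stand-in: the magnitude-threshold partial map and the simple regularised inverse filter below are assumptions, not the paper's EM-estimated map or its wavelet/learning-based priors):

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Normalised Gaussian blur kernel, wrapped for circular FFT blurring."""
    ny, nx = shape
    y = np.minimum(np.arange(ny), ny - np.arange(ny))
    x = np.minimum(np.arange(nx), nx - np.arange(nx))
    k = np.exp(-(y[:, None]**2 + x[None, :]**2) / (2.0 * sigma**2))
    return k / k.sum()

def partial_deconv(blurred, kernel, tau=0.05, eps=1e-3):
    """Invert the blur only where |K| is reliably large; leave the rest."""
    Y = np.fft.fft2(blurred)
    K = np.fft.fft2(kernel)
    reliable = np.abs(K) > tau                     # the 'partial map'
    X = np.where(reliable, Y * np.conj(K) / (np.abs(K)**2 + eps), Y)
    return np.real(np.fft.ifft2(X))

# Synthetic check: deblurring with the true kernel should beat doing nothing.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
k = gaussian_psf(img.shape, sigma=1.5)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k)))
restored = partial_deconv(blurred, k)
```

With an inaccurate kernel, the threshold keeps the inversion away from Fourier entries where division would amplify the estimation error.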

  17. Automatic face naming by learning discriminative affinity matrices from weakly labeled images.

    PubMed

    Xiao, Shijie; Xu, Dong; Wu, Jianxin

    2015-10-01

    Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.

  18. Dissipation, intermittency, and singularities in incompressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Debue, P.; Shukla, V.; Kuzzay, D.; Faranda, D.; Saw, E.-W.; Daviaud, F.; Dubrulle, B.

    2018-05-01

    We examine the connection between the singularities or quasisingularities in the solutions of the incompressible Navier-Stokes equation (INSE) and the local energy transfer and dissipation, in order to explore in detail how the former contributes to the phenomenon of intermittency. We do so by analyzing the velocity fields (a) measured in the experiments on the turbulent von Kármán swirling flow at high Reynolds numbers and (b) obtained from the direct numerical simulations of the INSE at a moderate resolution. To compute the local interscale energy transfer and viscous dissipation in experimental and supporting numerical data, we use the weak solution formulation generalization of the Kármán-Howarth-Monin equation. In the presence of a singularity in the velocity field, this formulation yields a nonzero dissipation (inertial dissipation) in the limit of an infinite resolution. Moreover, at finite resolutions, it provides an expression for local interscale energy transfers down to the scale where the energy is dissipated by viscosity. In the presence of a quasisingularity that is regularized by viscosity, the formulation provides the contribution to the viscous dissipation due to the presence of the quasisingularity. Therefore, our formulation provides a concrete support to the general multifractal description of the intermittency. We present the maps and statistics of the interscale energy transfer and show that the extreme events of this transfer govern the intermittency corrections and are compatible with a refined similarity hypothesis based on this transfer. We characterize the probability distribution functions of these extreme events via generalized Pareto distribution analysis and find that the widths of the tails are compatible with a similarity of the second kind. Finally, we make a connection between the topological and the statistical properties of the extreme events of the interscale energy transfer field and its multifractal properties.

  19. Singularities in Optimal Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1992-01-01

    Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.

  1. Naked singularity resolution in cylindrical collapse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurita, Yasunari; Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, 606-8502; Nakao, Ken-ichi

    In this paper, we study the gravitational collapse of null dust in cylindrically symmetric spacetime. The naked singularity necessarily forms at the symmetry axis. We consider the situation in which null dust is emitted again from the naked singularity formed by the collapsed null dust, and we investigate the backreaction of this emission on the naked singularity. We show a very peculiar but physically important case in which the same amount of null dust as that of the collapsed one is emitted from the naked singularity as soon as the ingoing null dust hits the symmetry axis and forms the naked singularity. In this case, although this naked singularity satisfies the strong curvature condition of Krolak (the limiting focusing condition), geodesics which hit the singularity can be extended uniquely across it. Therefore, we may say that the collapsing null dust passes through the singularity formed by itself and then leaves for infinity. Finally, the singularity completely disappears and the flat spacetime remains.

  2. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  3. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  4. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  5. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  6. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  7. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  8. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  9. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  10. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  11. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  12. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which is time-consuming and can result in low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to test the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with traditional classifiers, including Spectral Angle Mapping (SAM), Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using a wavelet kernel function in a Reproducing Kernel Hilbert Space clearly improves classification accuracy.
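A wavelet kernel of the translation-invariant type described above can be sketched with the widely used Morlet-type mother wavelet h(u) = cos(1.75u)·exp(−u²/2) (an assumption standing in for the paper's Coiflet kernel), plugged here into kernel ridge classification rather than an SVM to keep the sketch dependency-free:

```python
import numpy as np

def wavelet_kernel(X, Z, a=1.0):
    """Translation-invariant wavelet kernel built from the Morlet-type
    mother wavelet h(u) = cos(1.75 u) * exp(-u**2 / 2):
        K(x, z) = prod_i h((x_i - z_i) / a)."""
    u = (X[:, None, :] - Z[None, :, :]) / a
    return np.prod(np.cos(1.75 * u) * np.exp(-0.5 * u**2), axis=2)

# Kernel ridge classification on a toy two-class problem.
rng = np.random.default_rng(0)
X = np.r_[rng.normal(-1.0, 0.3, (40, 2)), rng.normal(1.0, 0.3, (40, 2))]
y = np.r_[-np.ones(40), np.ones(40)]
K = wavelet_kernel(X, X, a=2.0)
alpha = np.linalg.solve(K + 1e-3 * np.eye(80), y)   # ridge in the RKHS
acc = float((np.sign(K @ alpha) == y).mean())
```

The dilation parameter `a` plays the role the RBF bandwidth plays for Gaussian kernels and is the main quantity a WSVM has to tune.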

  13. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
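The trace ratio maximization underlying MKL-TR can be illustrated by the classic iteration for maximising tr(WᵀAW)/tr(WᵀBW) over orthonormal W (a generic sketch; the construction of A and B from the base kernels in MKL-TR is not reproduced here):

```python
import numpy as np

def trace_ratio(A, B, d, n_iter=50, tol=1e-10):
    """Maximise tr(W' A W) / tr(W' B W) over orthonormal n x d matrices W.

    Classic iteration: given the current ratio lam, take W as the top-d
    eigenvectors of A - lam*B, then update lam; the ratio is monotonically
    non-decreasing."""
    n = A.shape[0]
    W = np.eye(n)[:, :d]
    lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
    for _ in range(n_iter):
        vals, vecs = np.linalg.eigh(A - lam * B)
        W = vecs[:, -d:]                       # top-d eigenvectors
        new_lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        if abs(new_lam - lam) < tol:
            lam = new_lam
            break
        lam = new_lam
    return W, lam

# Toy check on a symmetric A and a positive-definite B.
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
A = M + M.T
N = rng.normal(size=(6, 6))
B = N @ N.T + 6.0 * np.eye(6)
W, lam = trace_ratio(A, B, d=2)
```

Solving the trace *ratio* directly, rather than the related ratio-trace generalised eigenproblem, is what the "TR" in MKL-TR refers to.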

  14. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). The DSTK comprises distributed trees carrying syntactic information together with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared with other state-of-the-art systems. PMID:29099838

  15. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women, so it is essential to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM has attracted considerable attention for its discriminative power in dealing with small-sample pattern recognition problems, but how to select or construct an appropriate kernel for a given problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) on a number of real-world data sets adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  16. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). The DSTK comprises distributed trees carrying syntactic information together with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared with other state-of-the-art systems.

  17. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
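For contrast with the LZW-Kernel, the normalized compression distance (NCD) it is compared against can be computed in a few lines (a sketch using zlib rather than an LZW compressor; the protein-like strings are invented for illustration):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(s) is the length of the compressed form of s."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar sequences compress well together, so their NCD is small.
seq_a = b"MKVLAAGITLLSAAPAMA" * 20
seq_b = b"MKVLAAGITLLSATPAMA" * 20   # near-identical to seq_a
seq_c = bytes(range(256)) * 2        # unrelated byte content
d_ab = ncd(seq_a, seq_b)
d_ac = ncd(seq_a, seq_c)
```

Note that NCD computed this way need not satisfy the triangle inequality or even stay below 1.0 in practice, which is exactly the deficiency the LZW-Kernel is designed to avoid.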

  18. Recording from two neurons: second-order stimulus reconstruction from spike trains and population coding.

    PubMed

    Fernandes, N M; Pinto, B D L; Almeida, L O B; Slaets, J F W; Köberle, R

    2010-10-01

    We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices by using a convenient set of basis functions to expand our variables in. This requires approximating the spike train four-point functions by combinations of two-point functions, analogous to the relations that would hold for Gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution of the second-order kernels to stimulus reconstruction, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.

  19. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
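One of the kernelised embedding techniques such a unified framework covers, kernel PCA, can be sketched as follows (a generic illustration on synthetic data, not the authors' pipeline; the RBF kernel and its bandwidth are arbitrary choices here):

```python
import numpy as np

def kernel_pca(K, d):
    """Embed points into d dimensions from a Gram matrix K (kernel PCA).

    Double-centres K in feature space, then scales the top eigenvectors
    by the square roots of their eigenvalues."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # feature-space centring
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:d]     # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

def rbf(X, gamma):
    sq = (X**2).sum(axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

# Embedding a noisy circle with an RBF kernel.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.02, (100, 2))
emb = kernel_pca(rbf(X, gamma=0.5), d=2)
```

Swapping the `rbf` call for any other kernel from a candidate pool is all that is needed to compare embeddings under a selection measure.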

  20. Exploring the spectrum of planar AdS4 /CFT3 at finite coupling

    NASA Astrophysics Data System (ADS)

    Bombardelli, Diego; Cavaglià, Andrea; Conti, Riccardo; Tateo, Roberto

    2018-04-01

    The Quantum Spectral Curve (QSC) equations for planar N=6 super-conformal Chern-Simons (SCS) are solved numerically at finite values of the coupling constant for states in the sl(2|1) sector. New weak coupling results for conformal dimensions of operators outside the sl(2)-like sector are obtained by adapting a recently proposed algorithm for the QSC perturbative solution. Besides being interesting in their own right, these perturbative results are necessary initial inputs for the numerical algorithm to converge on the correct solution. The non-perturbative numerical outcomes nicely interpolate between the weak coupling and the known semiclassical expansions, and novel strong coupling exact results are deduced from the numerics. Finally, the existence of contour crossing singularities in the TBA equations for the operator 20 is ruled out by our analysis. The results of this paper are an important test of the QSC formalism for this model, open the way to new quantitative studies and provide further evidence in favour of the conjectured weak/strong coupling duality between N=6 SCS and type IIA superstring theory on AdS4 × CP3. Attached to the arXiv submission, a Mathematica implementation of the numerical method and ancillary files containing the numerical results are provided.

  1. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
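    One of the strategies named above, composite kernels, combines candidate kernel matrices into a single valid kernel. A hedged sketch in which the candidate kernels, the trace normalization, and the equal weights are all illustrative choices, not the authors' exact procedure:

```python
import numpy as np

def linear_kernel(G):
    # Cross-products of genotype vectors.
    return G @ G.T

def ibs_like_kernel(G):
    # Toy identity-by-state-style similarity for 0/1/2 genotype codes.
    n, p = G.shape
    K = np.zeros((n, n))
    for i in range(n):
        K[i] = np.sum(2 - np.abs(G[i] - G), axis=1) / (2 * p)
    return K

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(30, 10)).astype(float)  # 30 subjects, 10 SNPs

candidates = [linear_kernel(G), ibs_like_kernel(G)]
# Normalize each candidate to unit trace so none dominates, then average.
composite = sum(K / np.trace(K) for K in candidates) / len(candidates)

# The composite is still a symmetric positive semi-definite kernel.
print(np.linalg.eigvalsh(composite).min())  # ~0 or positive
```

    Averaging PSD matrices with non-negative weights always yields a PSD matrix, which is why a composite kernel can be plugged back into the same KM test.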

  2. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both the brain and bone images were composed of the CT values of the bone images, while all other pixels were composed of the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images were displayed identically on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images needing to be stored can be expected.
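    The per-pixel combination rule described above (bone-kernel CT values wherever both reconstructions exceed 100 HU, brain-kernel values elsewhere) can be sketched in a few lines of NumPy on synthetic arrays:

```python
import numpy as np

def combine_multikernel(brain_img, bone_img, threshold_hu=100.0):
    """Per-pixel blend: take the bone-kernel value where BOTH
    reconstructions exceed the threshold, else the brain-kernel value."""
    mask = (brain_img > threshold_hu) & (bone_img > threshold_hu)
    return np.where(mask, bone_img, brain_img)

# Synthetic 2x2 example: soft tissue ~40 HU, skull ~1000 HU.
brain = np.array([[40.0, 35.0], [900.0, 45.0]])
bone = np.array([[30.0, 50.0], [1100.0, 60.0]])

combined = combine_multikernel(brain, bone)
print(combined)  # only the skull pixel takes the bone-kernel value
```

    Requiring the threshold in both reconstructions avoids switching kernels on pixels where only one reconstruction overshoots, which is one plausible reading of how the artifact count is reduced.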

  3. Cycle of phase, coherence and polarization singularities in Young's three-pinhole experiment.

    PubMed

    Pang, Xiaoyan; Gbur, Greg; Visser, Taco D

    2015-12-28

    It is now well-established that a variety of singularities can be characterized and observed in optical wavefields. It is also known that these phase singularities, polarization singularities and coherence singularities are physically related, but the exact nature of their relationship is still somewhat unclear. We show how a Young-type three-pinhole interference experiment can be used to create a continuous cycle of transformations between classes of singularities, often accompanied by topological reactions in which different singularities are created and annihilated. This arrangement serves to clarify the relationships between the different singularity types, and provides a simple tool for further exploration.

  4. Numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity

    NASA Astrophysics Data System (ADS)

    Korepanov, V. V.; Matveenko, V. P.; Fedorov, A. Yu.; Shardakov, I. N.

    2013-07-01

    An algorithm for the numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity is considered. The algorithm is based on separation of a power-law dependence from the finite-element solution in a neighborhood of singular points in the domain under study, where singular solutions are possible. The obtained power-law dependencies allow one to conclude whether the stresses have singularities and what the character of these singularities is. The algorithm was tested on problems of classical elasticity by comparing the stress singularity exponents obtained by the proposed method with those from known analytic solutions. Problems with various cases of singular points, namely, body surface points at which either the smoothness of the surface is violated, or the type of boundary conditions changes, or distinct materials are in contact, are considered as applications. The stress singularity exponents obtained using the models of classical and asymmetric elasticity are compared. It is shown that, in the case of cracks, the stress singularity exponents are the same for the elasticity models under study, but for the other cases of singular points, the exponents obtained on the basis of asymmetric elasticity differ only insignificantly from those of classical elasticity.

  5. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  6. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  7. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  8. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID:29028902
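    As a sketch of the simplest baseline mentioned, a label histogram kernel compares two labeled graphs by the inner product of their node-label count vectors (hand-rolled here for illustration; this is not the graphkernels package API):

```python
from collections import Counter

def label_histogram_kernel(labels_g1, labels_g2):
    """Inner product of the node-label count vectors of two labeled graphs."""
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[label] * h2[label] for label in set(h1) | set(h2))

# Two toy graphs represented only by their node labels.
g1 = ["C", "C", "O", "H", "H"]
g2 = ["C", "O", "O", "H"]

print(label_histogram_kernel(g1, g2))  # 2*1 + 1*2 + 2*1 = 6
```

    This kernel ignores edges entirely; the random walk and Weisfeiler-Lehman kernels in the package refine it by incorporating graph structure.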

  9. Aflatoxin variability in pistachios.

    PubMed Central

    Mahoney, N E; Rodriguez, S B

    1996-01-01

    Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781

  10. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.

  11. On important precursor of singular optics (tutorial)

    NASA Astrophysics Data System (ADS)

    Polyanskii, Peter V.; Felde, Christina V.; Bogatyryova, Halina V.; Konovchuk, Alexey V.

    2018-01-01

    The rise of singular optics is usually associated with the seminal paper by J. F. Nye and M. V. Berry [Proc. R. Soc. Lond. A, 336, 165-189 (1974)]. Intense development of this area of modern photonics started in the early eighties of the XX century with the invention of the interference technique for the detection and diagnostics of phase singularities, such as optical vortices in complex speckle-structured light fields. The next powerful incentive for the formation of singular optics into a separate area of the science of light was connected with the discovery of a very practical technique for creating singular optical beams of various kinds on the basis of computer-generated holograms. In the eighties and nineties of the XX century, singular optics evolved almost entirely under the approximation of complete coherence of the light field. Only at the threshold of the XXI century was it comprehended that singular-optics approaches can be fruitfully extended to partially spatially coherent, partially polarized and polychromatic light fields supporting singularities of new kinds, which resulted in the establishment of correlation singular optics. Here we show that correlation singular optics has much deeper roots, ascending to the "pre-singular" and even pre-laser epoch and associated with the concepts of partial coherence and polarization. It is remarkable that correlation singular optics in its present interpretation forestalled standard coherent singular optics. This paper is timed to the sixtieth anniversary of the most profound precursor of modern correlation singular optics [J. Opt. Soc. Am., 47, 895-902 (1957)].

  12. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
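    The weighted eigenfunction expansion described here, with heat-kernel weights e^{-λt}, can be sketched using a graph Laplacian as a discrete stand-in for the Laplace-Beltrami operator (the path graph and signal below are our own toy example, not the authors' implementation):

```python
import numpy as np

def heat_kernel_smooth(L, f, t):
    """Weighted eigenfunction expansion: sum_i exp(-lam_i * t) <f, psi_i> psi_i,
    where (lam_i, psi_i) are eigenpairs of the Laplacian L."""
    lam, psi = np.linalg.eigh(L)
    coeffs = psi.T @ f                      # expansion coefficients <f, psi_i>
    return psi @ (np.exp(-lam * t) * coeffs)

# Path-graph Laplacian on 5 nodes as a toy "surface".
n = 5
L = 2.0 * np.eye(n)
L[0, 0] = L[-1, -1] = 1.0
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0

f = np.array([0.0, 3.0, 0.0, 3.0, 0.0])    # oscillatory "noisy" signal
print(heat_kernel_smooth(L, f, t=0.0))      # t = 0 reproduces f exactly
print(heat_kernel_smooth(L, f, t=10.0))     # larger t flattens f toward its mean
```

    Because the solution is written analytically in the eigenbasis, arbitrarily large diffusion times t cost nothing extra, which is the numerical advantage claimed over step-by-step diffusion solvers.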

  13. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  14. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...

  15. Instantaneous and dynamical decoherence

    NASA Astrophysics Data System (ADS)

    Polonyi, Janos

    2018-04-01

    Two manifestations of decoherence, called instantaneous and dynamical, are investigated. The former reflects the suppression of the interference between the components of the current state, while the latter reflects that within the initial state. These types of decoherence are computed for the Brownian motion and for the harmonic and anharmonic oscillators within the semiclassical approximation. A remarkable phenomenon, namely the opposite orientation of the time arrow of the dynamical variables compared to that of the quantum fluctuations, generates a double exponential time dependence of the dynamical decoherence in the presence of a harmonic force. For the weakly anharmonic oscillator, the dynamical decoherence is found to depend in a singular way on the amount of the anharmonicity.

  16. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.; Leung, Martin S.; Bless, Robert R.

    1991-01-01

    The proposed investigation of a Matched Asymptotic Expansion (MAE) method was carried out. It was concluded that the method of MAE is not applicable to launch vehicle ascent trajectory optimization due to the lack of a suitable stretched variable. More work was done on the earlier regular perturbation approach, using a piecewise analytic zeroth-order solution to generate a more accurate approximation. In the meantime, a singular perturbation approach using manifold theory is also under investigation. Work on a general computational environment based on the use of MACSYMA and the weak Hamiltonian finite element method continued during this period. This methodology is capable of solving a large class of optimal control problems.

  17. A Discontinuous Galerkin Method for Parabolic Problems with Modified hp-Finite Element Approximation Technique

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.

    2004-01-01

    A recent paper is generalized to a case where the spatial region is taken in R^3. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x - y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u_t||_2, is used to derive an optimal a priori error estimate for the current method.

  18. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  19. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  20. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...

  1. Evolution of singularities in a partially coherent vortex beam.

    PubMed

    van Dijk, Thomas; Visser, Taco D

    2009-04-01

    We study the evolution of phase singularities and coherence singularities in a Laguerre-Gauss beam that is rendered partially coherent by letting it pass through a spatial light modulator. The original beam has an on-axis minimum of intensity--a phase singularity--that transforms into a maximum of the far-field intensity. In contrast, although the original beam has no coherence singularities, such singularities are found to develop as the beam propagates. This disappearance of one kind of singularity and the gradual appearance of another is illustrated with numerical examples.

  2. Naked singularity, firewall, and Hawking radiation.

    PubMed

    Zhang, Hongsheng

    2017-06-21

    Spacetime singularity has always been of interest since the proof of the Penrose-Hawking singularity theorem. Naked singularities naturally emerge from reasonable initial conditions in the collapsing process. A recent interesting approach to the black hole information problem implies that we need a firewall to break the surplus entanglements among the Hawking photons. Classically, the firewall becomes a naked singularity. We find some vacuum analytical solutions of the firewall type in R^n-gravity and use these solutions as concrete models to study the naked singularities. By using standard quantum theory, we investigate the Hawking radiation emitted from black holes with naked singularities. Here we show that the singularity itself does not destroy information. A unitary quantum theory works well around a firewall-type singularity. We discuss the validity of our result in general relativity. Further, our result demonstrates that the temperature of the Hawking radiation can still be expressed in the form of the surface gravity divided by 2π. This indicates that a naked singularity may not compromise the Hawking evaporation process.

  3. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains or two different non-toxigenic P. verrucosum strains, along with sterile control wheat kernels, were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths of greatest significance for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic and Mahalanobis statistical discriminant models to differentiate between the sterile controls, the five concentration levels of OTA contamination in wheat kernels, and the five infection levels of wheat kernels inoculated with non-OTA-producing P. verrucosum. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and from kernels inoculated with non-OTA-producing P. verrucosum with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA contamination and between the five infection levels of non-OTA-producing P. verrucosum inoculation with a correct classification rate of more than 98%. The kernels inoculated with non-OTA-producing P. verrucosum and the OTA-contaminated kernels showed different spectral patterns under hyperspectral imaging.
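    Identifying key wavelengths from PCA typically means ranking spectral bands by the magnitude of their loadings on the leading principal components; a sketch on synthetic spectra (band indices, noise levels, and the loading-ranking rule are our illustrative assumptions, not taken from this study):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hyperspectral data: 100 kernels x 60 wavelength bands,
# with bands 20 and 45 carrying the sample-related variance.
X = rng.normal(0, 0.1, size=(100, 60))
signal = rng.normal(0, 1.0, size=100)
X[:, 20] += signal
X[:, 45] += 0.8 * signal

# PCA via SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rank wavelengths by absolute loading on the first principal component.
loadings = np.abs(Vt[0])
key_bands = np.argsort(loadings)[::-1][:2]
print(sorted(key_bands))  # expected to recover bands 20 and 45
```

    Features are then extracted only at the selected bands, which is what makes the downstream discriminant models tractable.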

  4. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can make FMT reconstruction more efficient. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, converting the FMT reconstruction problem into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
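    The forward-model modification described here, multiplying the sensitivity matrix by a kernel matrix built from anatomical similarity, can be sketched with toy dimensions (the Gaussian kNN kernel, all sizes, and the least-squares solve below are illustrative assumptions, not the authors' exact construction):

```python
import numpy as np

rng = np.random.default_rng(3)

n_nodes, n_meas = 50, 30
anat = rng.normal(size=(n_nodes, 1))        # anatomical feature per FEM node

# Kernel matrix from anatomical similarity: Gaussian kernel, kNN-sparsified.
d2 = (anat - anat.T) ** 2
K = np.exp(-d2 / (2 * 0.5 ** 2))
for i in range(n_nodes):
    far = np.argsort(d2[i])[6:]             # keep self + 5 nearest neighbours
    K[i, far] = 0.0
K = (K + K.T) / 2                           # restore symmetry

A = rng.normal(size=(n_meas, n_nodes))      # sensitivity (forward) matrix
x_true = rng.exponential(size=n_nodes)      # fluorophore concentration
b = A @ x_true                              # simulated measurements

# New system matrix A @ K; solve for kernel coefficients, then x = K @ alpha.
AK = A @ K
alpha, *_ = np.linalg.lstsq(AK, b, rcond=None)
x_rec = K @ alpha
print(np.linalg.norm(A @ x_rec - b) / np.linalg.norm(b))  # data-fit residual
```

    The reconstructed image K @ alpha inherits the anatomical structure encoded in K, which is how the guidance enters without segmenting the anatomical image.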

  5. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, namely the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. On the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reach 0.5556 and 0.5488, respectively.
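    In this nonparametric setting, a kernel discriminant assigns a new applicant to the class whose prior-weighted kernel density estimate is larger; a sketch using the normal and Epanechnikov kernels named above (the data, bandwidth, and one-dimensional feature are invented for illustration):

```python
import numpy as np

def normal_k(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def epanechnikov_k(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def kde(x, sample, h, kernel):
    # Kernel density estimate at point x from a 1-D sample.
    return np.mean(kernel((x - sample) / h)) / h

def classify(x, good, bad, h=0.5, kernel=normal_k):
    """Assign x to the class with the larger prior-weighted density."""
    n = len(good) + len(bad)
    score_good = (len(good) / n) * kde(x, good, h, kernel)
    score_bad = (len(bad) / n) * kde(x, bad, h, kernel)
    return "good" if score_good >= score_bad else "bad"

# Toy 1-D credit feature: good risks cluster high, bad risks low.
good = np.array([6.8, 7.1, 7.5, 8.0, 6.5])
bad = np.array([3.2, 2.8, 4.0, 3.6])

print(classify(7.2, good, bad))                         # "good"
print(classify(3.0, good, bad, kernel=epanechnikov_k))  # "bad"
```

    Swapping the kernel function (normal, Epanechnikov, biweight, triweight) changes only the density estimate, which is what the paper's kernel comparison varies.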

  6. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435

  7. On the Weyl curvature hypothesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoica, Ovidiu Cristinel, E-mail: holotronix@gmail.com

    2013-11-15

    The Weyl curvature hypothesis of Penrose attempts to explain the high homogeneity and isotropy, and the very low entropy, of the early universe by conjecturing the vanishing of the Weyl tensor at the Big-Bang singularity. In previous papers, an equivalent form of Einstein's equation was proposed, which extends it and remains valid at an important class of singularities (including in particular the Schwarzschild, FLRW, and isotropic singularities). Here it is shown that if the Big-Bang singularity is from this class, it also satisfies the Weyl curvature hypothesis. As an application, we study a very general example of cosmological models, which generalizes the FLRW model by dropping the isotropy and homogeneity constraints. This model also generalizes isotropic singularities and a class of singularities occurring in Bianchi cosmologies. We show that the Big-Bang singularity of this model is of the type under consideration and therefore satisfies the Weyl curvature hypothesis. Highlights: The singularities we introduce are described by finite geometric/physical objects. Our singularities have smooth Riemann and Weyl curvatures. We show they satisfy Penrose's Weyl curvature hypothesis (Weyl = 0 at singularities). Examples: FLRW, isotropic singularities, an extension of Schwarzschild's metric, and a large class of singularities which may be anisotropic and inhomogeneous.

  8. GRMDA: Graph Regression for MiRNA-Disease Association Prediction

    PubMed Central

    Chen, Xing; Yang, Jing-Ru; Guan, Na-Na; Li, Jian-Qiang

    2018-01-01

    Nowadays, as more and more associations between microRNAs (miRNAs) and diseases have been discovered, miRNA has gradually become a hot topic in the biological field. Because of the high cost in time and money of carrying out biological experiments, a computational method that can help scientists choose the most likely miRNA-disease associations for further experimental study is desperately needed. In this study, we proposed a method of Graph Regression for MiRNA-Disease Association prediction (GRMDA) which combines known miRNA-disease associations, miRNA functional similarity, disease semantic similarity, and Gaussian interaction profile kernel similarity. We used Gaussian interaction profile kernel similarity to compensate for the limitations of miRNA functional similarity and disease semantic similarity. Furthermore, the graph regression was performed synchronously in three latent spaces, including the association space, the miRNA similarity space, and the disease similarity space, by using two matrix factorization approaches, Singular Value Decomposition and Partial Least-Squares, to extract important related attributes and filter out noise. In leave-one-out cross validation and five-fold cross validation, GRMDA obtained AUCs of 0.8272 and 0.8080 ± 0.0024, respectively. Thus, its performance is better than that of some previous models. In a case study of Lymphoma using the recorded miRNA-disease associations in the HMDD V2.0 database, 88% of the top 50 predicted miRNAs were verified in the experimental literature. In order to test the performance of GRMDA on new diseases with no known related miRNAs, we took Breast Neoplasms as an example by regarding all its known related miRNAs as unknown; 100% of the top 50 predicted miRNAs were verified. Moreover, 84% of the top 50 predicted miRNAs in a case study of Esophageal Neoplasms based on HMDD V1.0 were verified to have known associations. In conclusion, GRMDA is an effective and practical method for miRNA-disease association prediction. PMID:29515453
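    The Gaussian interaction profile (GIP) kernel similarity combined here treats each miRNA's row of the known association matrix as its interaction profile and applies a Gaussian of the profile distance; a sketch under one common bandwidth convention, normalizing by the mean squared profile norm (an assumption, not taken from this paper):

```python
import numpy as np

def gip_kernel(adj):
    """Gaussian interaction profile kernel over the rows of a
    binary association matrix (e.g. miRNAs x diseases)."""
    norms2 = np.sum(adj ** 2, axis=1)
    gamma = 1.0 / np.mean(norms2)           # bandwidth from mean profile norm
    # Squared distances between all pairs of interaction profiles.
    d2 = norms2[:, None] + norms2[None, :] - 2 * adj @ adj.T
    return np.exp(-gamma * d2)

# Toy association matrix: 4 miRNAs x 3 diseases.
A = np.array([[1, 0, 1],
              [1, 0, 1],
              [0, 1, 0],
              [1, 1, 0]], dtype=float)

K = gip_kernel(A)
print(K[0, 1])  # identical profiles -> similarity 1.0
```

    Because it is computed from the association matrix alone, the GIP kernel is available for every miRNA and disease, which is why it can fill gaps in functional and semantic similarities.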

  9. GRMDA: Graph Regression for MiRNA-Disease Association Prediction.

    PubMed

    Chen, Xing; Yang, Jing-Ru; Guan, Na-Na; Li, Jian-Qiang

    2018-01-01

    Nowadays, as more and more associations between microRNAs (miRNAs) and diseases have been discovered, miRNA has gradually become a hot topic in the biological field. Because of the high consumption of time and money on carrying out biological experiments, computational method which can help scientists choose the most likely associations between miRNAs and diseases for further experimental studies is desperately needed. In this study, we proposed a method of Graph Regression for MiRNA-Disease Association prediction (GRMDA) which combines known miRNA-disease associations, miRNA functional similarity, disease semantic similarity, and Gaussian interaction profile kernel similarity. We used Gaussian interaction profile kernel similarity to supplement the shortage of miRNA functional similarity and disease semantic similarity. Furthermore, the graph regression was synchronously performed in three latent spaces, including association space, miRNA similarity space, and disease similarity space, by using two matrix factorization approaches called Singular Value Decomposition and Partial Least-Squares to extract important related attributes and filter the noise. In the leave-one-out cross validation and five-fold cross validation, GRMDA obtained the AUCs of 0.8272 and 0.8080 ± 0.0024, respectively. Thus, its performance is better than some previous models. In the case study of Lymphoma using the recorded miRNA-disease associations in HMDD V2.0 database, 88% of top 50 predicted miRNAs were verified by experimental literatures. In order to test the performance of GRMDA on new diseases with no known related miRNAs, we took Breast Neoplasms as an example by regarding all the known related miRNAs as unknown ones. We found that 100% of top 50 predicted miRNAs were verified. Moreover, 84% of top 50 predicted miRNAs in case study for Esophageal Neoplasms based on HMDD V1.0 were verified to have known associations. 
In conclusion, GRMDA is an effective and practical method for miRNA-disease association prediction.
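The Gaussian interaction profile kernel similarity used above has a standard closed form: each miRNA (or disease) is represented by its row (or column) of the association matrix, and similarity is a Gaussian of the distance between these interaction profiles, with the bandwidth normalized by the mean squared profile norm. A minimal numpy sketch under that standard formulation; the function name and the toy association matrix are illustrative, not taken from the paper:

```python
import numpy as np

def gip_kernel(adj, axis=0):
    """Gaussian interaction profile (GIP) kernel similarity.

    adj: binary association matrix (rows = miRNAs, columns = diseases).
    axis=0 compares row profiles (miRNA-miRNA similarity);
    axis=1 compares column profiles (disease-disease similarity).
    """
    profiles = adj if axis == 0 else adj.T
    # Bandwidth normalized by the mean squared profile norm,
    # as in the usual GIP formulation.
    gamma = 1.0 / np.mean(np.sum(profiles ** 2, axis=1))
    sq_dists = np.sum((profiles[:, None, :] - profiles[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * sq_dists)

# Toy 2-miRNA x 3-disease association matrix (illustrative only).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
K_m = gip_kernel(A, axis=0)   # 2x2 miRNA-miRNA similarity
K_d = gip_kernel(A, axis=1)   # 3x3 disease-disease similarity
```

Because the kernel depends only on the association matrix, it yields a similarity score for every pair, which is why it can fill in entries missing from the functional and semantic similarity matrices.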

  10. Modeling of thin-walled structures interacting with acoustic media as constrained two-dimensional continua

    NASA Astrophysics Data System (ADS)

    Rabinskiy, L. N.; Zhavoronok, S. I.

    2018-04-01

    The transient interaction of acoustic media and elastic shells is considered on the basis of the transition function approach. The three-dimensional hyperbolic initial boundary-value problem is reduced to a two-dimensional problem of shell theory with integral operators approximating the acoustic medium effect on the shell dynamics. The kernels of these integral operators are determined by the elementary solution of the problem of acoustic waves diffraction at a rigid obstacle with the same boundary shape as the wetted shell surface. The closed-form elementary solution for arbitrary convex obstacles can be obtained at the initial interaction stages under the so-called “thin layer hypothesis”. Thus, the shell–wave interaction model, defined by integro-differential dynamic equations with analytically determined kernels of the integral operators, becomes two-dimensional but nonlocal in time. On the other hand, the initial interaction stage results in localized dynamic loadings and consequently in complex strain and stress states that require higher-order shell theories. Here a modified theory of I.N. Vekua–A.A. Amosov type is formulated in terms of analytical continuum dynamics. The shell model is constructed on a two-dimensional manifold within a set of field variables, Lagrangian density, and constraint equations following from the boundary conditions “shifted” from the shell faces to its base surface. Such an approach allows one to construct consistent low-order shell models within a unified formal hierarchy. The equations of the Nth-order shell theory are singularly perturbed and contain second-order partial derivatives with respect to time and the surface coordinates, whereas the numerical integration of systems of first-order equations is more efficient. Such systems can be obtained as Hamilton–de Donder–Weyl-type equations for the Lagrangian dynamical system. 
The Hamiltonian formulation of the elementary Nth-order shell theory is briefly described here.

  11. A trade-off between model resolution and variance with selected Rayleigh-wave data

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Xu, Y.

    2008-01-01

    Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (≥2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. First, we employed a data-resolution matrix to select data that would be well predicted and to explain the advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. Second, we obtained an optimal damping vector in a vicinity of an inverted model by the singular value decomposition of a trade-off function of model resolution and variance. At the end of the paper, we used a real-world example to demonstrate that data selected with the data-resolution matrix can provide better inversion results and to explain why incorporating higher-mode data in inversion provides better results. We also calculated model-resolution matrices of these examples to show the potential of increasing model resolution with selected surface-wave data. 
With the optimal damping vector, we can improve and assess an inverted model obtained by a damped least-squares method.
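The data-resolution matrix depends only on the data kernel G: with the generalized inverse built from the singular value decomposition, N = G G⁺ = UₚUₚᵀ, where Uₚ holds the left singular vectors of the effective rank. Diagonal entries near 1 flag data the inversion can predict well. A minimal numpy sketch under these standard definitions; the toy kernel G is illustrative, not from the paper:

```python
import numpy as np

def data_resolution_matrix(G, rcond=1e-10):
    """Data-resolution matrix N = G G^+ from the SVD of the data kernel G.

    Diagonal entries near 1 mark data the inverse problem predicts well;
    small entries mark poorly resolved data.
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    p = int(np.sum(s > rcond * s[0]))    # effective rank
    Up = U[:, :p]
    return Up @ Up.T

# Toy data kernel: 3 observations constraining 2 model parameters.
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
N = data_resolution_matrix(G)
predictability = np.diag(N)   # how well each datum is predicted
```

The trace of N equals the effective rank of the kernel, so it also bounds how many data can be independently predicted, which is what makes the matrix useful for survey design before any data are collected.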

  12. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry with potentially devastating consequences to corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among different groups of kernels with different aflatoxin contamination levels. The fluorescence peak was found to shift toward longer wavelengths within the blue region for highly contaminated kernels and toward shorter wavelengths for clean kernels. Highly contaminated kernels were also found to have a lower fluorescence peak magnitude compared with the less contaminated kernels. It was also noted that a general negative correlation exists between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r², was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g⁻¹ (parts per billion), were significantly different from each other at the α = 0.01 level. 
Classification accuracy under a two-class schema ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g⁻¹ was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable in estimating aflatoxin content in individual corn kernels.

  13. Dynamics of Proton Spin: Role of qqq Force

    NASA Astrophysics Data System (ADS)

    Mitra, A. N.

    The analytic structure of the qqq wave function, obtained recently in the high momentum regime of QCD, is employed for the formulation of baryonic transition amplitudes via quark loops. A new aspect of this study is the role of a direct (Y-shaped, Mercedes-Benz type) qqq force in generating the qqq wave function. The dynamics is that of a Salpeter-like equation (3D support for the kernel) formulated covariantly on the light front, à la the Markov-Yukawa Transversality Principle (MYTP), which warrants a two-way interconnection between the 3D and 4D Bethe-Salpeter (BSE) forms for two- as well as three-quark systems. The dynamics of this three-body force shows up through a characteristic singularity in the hypergeometric differential equation for the 3D wave function ϕ, corresponding to a negative eigenvalue of the spin operator iσ₁·σ₂ × σ₃, which is an integral part of the qqq force. As a first application of this wave function to the problem of the proton spin anomaly, the two-gluon contribution to the anomaly yields an estimate of the right sign, although somewhat smaller in magnitude.

  14. Volume integral equation for electromagnetic scattering: Rigorous derivation and analysis for a set of multilayered particles with piecewise-smooth boundaries in a passive host medium

    NASA Astrophysics Data System (ADS)

    Yurkin, Maxim A.; Mishchenko, Michael I.

    2018-04-01

    We present a general derivation of the frequency-domain volume integral equation (VIE) for the electric field inside a nonmagnetic scattering object from the differential Maxwell equations, transmission boundary conditions, radiation condition at infinity, and locally-finite-energy condition. The derivation applies to an arbitrary spatially finite group of particles made of isotropic materials and embedded in a passive host medium, including those with edges, corners, and intersecting internal interfaces. This is a substantially more general type of scatterer than in all previous derivations. We explicitly treat the strong singularity of the integral kernel, but keep the entire discussion accessible to the applied scattering community. We also consider the known results on the existence and uniqueness of VIE solution and conjecture a general sufficient condition for that. Finally, we discuss an alternative way of deriving the VIE for an arbitrary object by means of a continuous transformation of the everywhere smooth refractive-index function into a discontinuous one. Overall, the paper examines and pushes forward the state-of-the-art understanding of various analytical aspects of the VIE.

  15. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem’s complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein’s function, and the phylogenetic profile is one of them. A phylogenetic profile is a representation of a protein that encodes its evolutionary history. In this paper we proposed a method for classification of phylogenetic profiles using a supervised machine learning method, support vector machine classification with a radial basis function kernel, for identifying functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes, and for our study we used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. In analyzing these results we show that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene function from phylogenetic profiles.
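The study trains an RBF-kernel SVM on binary phylogenetic profiles. As a dependency-free stand-in for the SVM solver, the sketch below fits a kernel ridge classifier with the same radial basis function kernel on synthetic presence/absence profiles; all data, sizes, and labels are illustrative assumptions, not the yeast/MIPS data from the paper:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """exp(-gamma * ||x - y||^2) between all rows of X and Y."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
# Toy stand-in for phylogenetic profiles: binary presence/absence vectors
# across 20 hypothetical reference genomes for 100 proteins.
X = rng.integers(0, 2, size=(100, 20)).astype(float)
# Synthetic class labels tied to the first few genomes so the toy
# problem is learnable (+1 / -1 encoding).
y = np.where(X[:, :5].sum(axis=1) > 2, 1.0, -1.0)

Xtr, ytr, Xte, yte = X[:80], y[:80], X[80:], y[80:]
gamma = 1.0 / X.shape[1]
K = rbf_kernel(Xtr, Xtr, gamma)
# Regularized kernel ridge fit (stand-in for the SVM dual solver).
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(Xtr)), ytr)
pred = np.sign(rbf_kernel(Xte, Xtr, gamma) @ alpha)
accuracy = float(np.mean(pred == yte))
```

Swapping in the polynomial kernel `(x·y + 1)**d` reuses the same training code, which is what makes the kernel comparison in the paper cheap to run.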

  16. Intraear Compensation of Field Corn, Zea mays, from Simulated and Naturally Occurring Injury by Ear-Feeding Larvae.

    PubMed

    Steckel, S; Stewart, S D

    2015-06-01

    Ear-feeding larvae, such as corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced, stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stage. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggested that kernels within the same ear could compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved.

  17. Co-Labeling for Multi-View Weakly Labeled Learning.

    PubMed

    Xu, Xinxing; Li, Wen; Xu, Dong; Tsang, Ivor W

    2016-06-01

    It is often expensive and time consuming to collect labeled training samples in many real-world applications. To reduce human effort on annotating training samples, many machine learning techniques (e.g., semi-supervised learning (SSL), multi-instance learning (MIL), etc.) have been studied to exploit weakly labeled training samples. Meanwhile, when the training data is represented with multiple types of features, many multi-view learning methods have shown that classifiers trained on different views can help each other to better utilize the unlabeled training samples for the SSL task. In this paper, we study a new learning problem called multi-view weakly labeled learning, in which we aim to develop a unified approach to learn robust classifiers by effectively utilizing different types of weakly labeled multi-view data from a broad range of tasks including SSL, MIL and relative outlier detection (ROD). We propose an effective approach called co-labeling to solve the multi-view weakly labeled learning problem. Specifically, we model the learning problem on each view as a weakly labeled learning problem, which aims to learn an optimal classifier from a set of pseudo-label vectors generated by using the classifiers trained from other views. Unlike traditional co-training approaches using a single pseudo-label vector for training each classifier, our co-labeling approach explores different strategies to utilize the predictions from different views, biases and iterations for generating the pseudo-label vectors, making our approach more robust for real-world applications. Moreover, to further improve the weakly labeled learning on each view, we also exploit the inherent group structure in the pseudo-label vectors generated from different strategies, which leads to a new multi-layer multiple kernel learning problem. 
Promising results for text-based image retrieval on the NUS-WIDE dataset as well as news classification and text categorization on several real-world multi-view datasets clearly demonstrate that our proposed co-labeling approach achieves state-of-the-art performance for various multi-view weakly labeled learning problems including multi-view SSL, multi-view MIL and multi-view ROD.

  18. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  19. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM with comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  20. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM with comparable or better performance than state-of-the-art ranking algorithms.
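Of the two kernel-approximation methods named, random Fourier features have a particularly compact form: for the RBF kernel exp(−γ‖x−y‖²), frequencies drawn from N(0, 2γI) and random phases give cosine features whose inner product approximates the kernel, so a linear method on the features behaves like the kernel method without ever forming the kernel matrix. A minimal numpy sketch; the dimensions and data are illustrative:

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, seed=0):
    """Map X so that z(x) @ z(y) approximates exp(-gamma * ||x - y||^2)
    (random Fourier features for the RBF kernel)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies from the Gaussian spectral density of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
Z = random_fourier_features(X, n_features=2000, gamma=0.5)
K_approx = Z @ Z.T                                     # 50x50 approximation
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
K_exact = np.exp(-0.5 * d2)                            # exact RBF kernel
max_err = float(np.abs(K_approx - K_exact).max())
```

Training then costs O(n·D) per iteration in the number of features D instead of O(n²) in the number of samples, which is the source of the speedup the abstract reports.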

  1. Resolution of quantum singularities

    NASA Astrophysics Data System (ADS)

    Konkowski, Deborah; Helliwell, Thomas

    2017-01-01

    A review of quantum singularities in static and conformally static spacetimes is given. A spacetime is said to be quantum mechanically non-singular if a quantum wave packet does not feel, in some sense, the presence of a singularity; mathematically, this means that the wave operator is essentially self-adjoint on the space of square-integrable functions. Spacetimes ranging from those with mild classical singularities (quasiregular ones) to those with strong classical curvature singularities have been tested. Here we discuss the similarities and differences between classical singularities that are healed quantum mechanically and those that are not. Possible extensions of the mathematical technique to more physically realistic spacetimes are discussed.

  2. The geometry of singularities and the black hole information paradox

    NASA Astrophysics Data System (ADS)

    Stoica, O. C.

    2015-07-01

    The information loss occurs in an evaporating black hole only if the time evolution ends at the singularity. But as we shall see, the black hole solutions admit analytical extensions beyond the singularities, to globally hyperbolic solutions. The method used is similar to that for the apparent singularity at the event horizon, but at the singularity, the resulting metric is degenerate. When the metric is degenerate, the covariant derivative, the curvature, and the Einstein equation become singular. However, recent advances in the geometry of spacetimes with singular metric show that there are ways to extend analytically the Einstein equation and other field equations beyond such singularities. This means that the information can get out of the singularity. In the case of charged black holes, the obtained solutions have nonsingular electromagnetic field. As a bonus, if particles are such black holes, spacetime undergoes dimensional reduction effects like those required by some approaches to perturbative Quantum Gravity.

  3. Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers.

    PubMed

    Sochat, Vanessa V; Prybol, Cameron J; Kurtzer, Gregory M

    2017-01-01

    Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub's primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility-metric performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building and deploying scientific containers.

  4. Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers

    PubMed Central

    Prybol, Cameron J.; Kurtzer, Gregory M.

    2017-01-01

    Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub’s primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility-metric performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building and deploying scientific containers. PMID:29186161

  5. Big bounce with finite-time singularity: The F(R) gravity description

    NASA Astrophysics Data System (ADS)

    Odintsov, S. D.; Oikonomou, V. K.

    Big Bounce cosmologies provide an alternative to Big Bang cosmologies. In this paper, we study a bounce cosmology with a Type IV singularity occurring at the bouncing point in the context of F(R) modified gravity. We investigate the evolution of the Hubble radius and we examine the issue of primordial cosmological perturbations in detail. As we demonstrate, for the singular bounce, the primordial perturbations originating from the cosmological era near the bounce do not produce a scale-invariant spectrum, and the short-wavelength modes do not freeze after they exit the horizon but grow linearly with time. After presenting the cosmological perturbations study, we discuss the viability of the singular bounce model, and our results indicate that the singular bounce must be combined with another cosmological scenario, or modified appropriately, in order to yield a viable cosmology. The study of the slow-roll parameters leads to the same result, indicating that the singular bounce theory is unstable at the singularity point for certain values of the parameters. We also conformally transform the Jordan frame singular bounce, and as we demonstrate, the Einstein frame metric leads to a Big Rip singularity. Therefore, the Type IV singularity in the Jordan frame becomes a Big Rip singularity in the Einstein frame. Finally, we briefly study a generalized singular cosmological model, which contains two Type IV singularities, with quite appealing features.

  6. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  7. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  8. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  9. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals to the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
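For orientation, the standard Wigner distribution function that the abstract generalizes can be written as:

```latex
% Traditional Wigner distribution function (Fourier kernel):
W(x,k) = \int_{-\infty}^{\infty}
    f\!\left(x + \tfrac{x'}{2}\right)\,
    f^{*}\!\left(x - \tfrac{x'}{2}\right)\,
    e^{-i k x'} \,\mathrm{d}x'
```

Per the abstract, the Laplace kernel variant replaces the Fourier factor $e^{-ikx'}$ with a Laplace-transform kernel, so the conjugate momentum-like variable may be complex; the exact form used in the paper is not reproduced here.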

  10. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
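The sparsification step this abstract relies on, approximate linear dependence (ALD) analysis, admits a compact statement: a new sample joins the kernel dictionary only if the residual of projecting its kernel feature vector onto the span of the current dictionary exceeds a threshold ν. A minimal numpy sketch of that test (Engel-style sparsification; the RBF kernel, threshold, and random data are illustrative choices, not the paper's benchmarks):

```python
import numpy as np

def ald_sparsify(X, kernel, nu=0.1):
    """Approximate linear dependence (ALD) sparsification.

    A sample joins the dictionary only if its kernel feature vector cannot
    be approximated by the current dictionary within tolerance nu.
    """
    dictionary = [X[0]]
    for x in X[1:]:
        D = np.array(dictionary)
        K = kernel(D, D)
        k = kernel(D, x[None, :]).ravel()
        # Least-squares coefficients of x's features on the dictionary
        # (small jitter keeps the solve well conditioned).
        a = np.linalg.solve(K + 1e-10 * np.eye(len(D)), k)
        delta = kernel(x[None, :], x[None, :])[0, 0] - k @ a
        if delta > nu:
            dictionary.append(x)
    return np.array(dictionary)

# Illustrative RBF kernel and random 2-D state samples.
rbf = lambda A, B: np.exp(-np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2))
X = np.random.default_rng(0).normal(size=(200, 2))
dict_pts = ald_sparsify(X, rbf, nu=0.1)   # compact dictionary of states
```

Because the critic's kernel expansion is then restricted to the dictionary rather than to every visited state, both the memory footprint and the per-step update cost of the kernel ACD stay bounded during online learning.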

  11. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg⁻¹ to 159 kJ kg⁻¹. Many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  12. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel-fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points with maximum DM disappearance of 6.9% at 24 h and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in 2 and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. 
After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between proportion of kernel fraction passing through the 4.75-mm screen and kernel fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
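    The geometric mean particle size used above is conventionally computed from the masses retained on each sieve (ASABE S319-style log-mean of sieve sizes). A hedged sketch follows; the apertures match those listed in the abstract, but the retained masses and the assumed √2 progression for the extrapolated top and pan sizes are purely illustrative.

    ```python
    import math

    # Sieve apertures (mm), largest to smallest, plus a pan.
    apertures = [9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59]
    # Illustrative masses (g) retained on each sieve and on the pan.
    retained  = [0.0, 1.2, 3.5, 6.8, 9.1, 7.4, 5.0, 3.1]
    pan_mass  = 2.9

    def gmps(apertures, retained, pan_mass):
        """Geometric mean particle size: 10^(sum(m_i log10 d_i) / sum(m_i)),
        where d_i is the geometric mean of adjacent sieve openings."""
        sizes, masses = [], []
        for i, m in enumerate(retained):
            upper = apertures[i - 1] if i > 0 else apertures[0] * math.sqrt(2)
            sizes.append(math.sqrt(apertures[i] * upper))  # mean size on sieve i
            masses.append(m)
        # Pan: assume the aperture progression continues downward by 1/sqrt(2).
        sizes.append(apertures[-1] / math.sqrt(2))
        masses.append(pan_mass)
        total = sum(masses)
        log_mean = sum(m * math.log10(d) for m, d in zip(masses, sizes)) / total
        return 10 ** log_mean

    print(round(gmps(apertures, retained, pan_mass), 2))  # mm
    ```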

  13. Singularity in structural optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1993-01-01

    The conditions under which global and local singularities may arise in structural optimization are examined. Examples of these singularities are presented, and a framework is given within which the singularities can be recognized. It is shown, in particular, that singularities can be identified through the analysis of stress-displacement relations together with compatibility conditions or the displacement-stress relations derived by the integrated force method of structural analysis. Methods of eliminating the effects of singularities are suggested and illustrated numerically.

  14. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before kernels were classified as contaminated or healthy, based on a 20 ppb threshold, with the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
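    The classification pipeline described (PCA compression followed by K-nearest neighbors) can be sketched on synthetic stand-in spectra; the data, number of components, and k are illustrative assumptions, not the study's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy stand-ins for averaged kernel spectra (n_samples x n_bands);
    # "contaminated" spectra are attenuated relative to "healthy" ones.
    healthy      = rng.normal(1.0, 0.05, size=(60, 100))
    contaminated = rng.normal(0.8, 0.05, size=(60, 100))
    X = np.vstack([healthy, contaminated])
    y = np.array([0] * 60 + [1] * 60)           # 0 = healthy, 1 = contaminated

    # PCA via SVD for data compression.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:5].T                      # keep 5 principal components

    def knn_predict(train, labels, query, k=3):
        """Classify by majority vote among the k nearest training scores."""
        d = np.linalg.norm(train - query, axis=1)
        nearest = labels[np.argsort(d)[:k]]
        return np.bincount(nearest).argmax()

    # Leave-one-out evaluation on the PCA scores.
    correct = sum(
        knn_predict(np.delete(scores, i, 0), np.delete(y, i), scores[i]) == y[i]
        for i in range(len(y))
    )
    accuracy = correct / len(y)
    print(accuracy)
    ```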

  15. Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme

    PubMed Central

    Warfield, Colleen Y.; Gilchrist, David G.

    1999-01-01

    Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675

  16. An Improved Transformation and Optimized Sampling Scheme for the Numerical Evaluation of Singular and Near-Singular Potentials

    NASA Technical Reports Server (NTRS)

    Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.

    2007-01-01

    Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
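    The singularity cancellation idea can be illustrated with a simple radial-angular (polar) transform, in which the Jacobian r exactly cancels a 1/r kernel singularity; the test integrand and node counts below are illustrative, not the paper's transformations.

    ```python
    import numpy as np

    def singular_integral_polar(n=16):
        """Integrate 1/sqrt(x^2 + y^2) over the unit square, with the 1/r
        singularity at the origin cancelled by the polar Jacobian r."""
        # Gauss-Legendre nodes/weights mapped to [0, 1].
        t, w = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (t + 1.0); w = 0.5 * w
        total = 0.0
        # By symmetry, integrate over 0 <= theta <= pi/4 and double.
        for ti, wi in zip(t, w):
            theta = ti * np.pi / 4.0
            rmax = 1.0 / np.cos(theta)          # distance to the edge x = 1
            for tj, wj in zip(t, w):
                r = tj * rmax
                x, y = r * np.cos(theta), r * np.sin(theta)
                g = 1.0 / np.hypot(x, y)        # the singular integrand
                # The Jacobian r cancels the 1/r singularity exactly.
                total += 2.0 * wi * (np.pi / 4.0) * wj * rmax * g * r
        return total

    exact = 2.0 * np.log(1.0 + np.sqrt(2.0))    # closed form for this integral
    print(singular_integral_polar(), exact)
    ```

    After the transform the integrand is smooth, so even modest Gauss-Legendre orders converge to near machine precision.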

  17. Topological dynamics of optical singularities in speckle-fields induced by photorefractive scattering in a LiNbO{sub 3} : Fe crystal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasil'ev, Vasilii I; Soskin, M S

    2013-02-28

    The natural singular dynamics of elliptically polarised speckle-fields, induced by the 'optical damage' effect in a photorefractive crystal of lithium niobate by a passing helium-neon laser beam, is studied by the developed methods of singular optics. For the polarisation singularities (C points), a new class of chain reactions, namely singular chain reactions, is discovered and studied. It is shown that they obey the topological charge and Poincare index sum conservation laws, and that they persist for the entire time of crystal irradiation. They consist of a series of interlocking chains, in which singularity pairs arising in a chain annihilate with singularities from neighbouring, independently created chains. Less often, singular 'loop' reactions are observed, in which arising pairs of singularities annihilate after reversible transformations within the boundaries of a single speckle. The type of a singular reaction is determined by the topology and dynamics of the speckles in which the reactions develop. (laser optics 2012)

  18. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique for learning the kernel parameters of a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique avoids some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
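    A minimal sketch of selecting a kernel parameter with differential evolution; the class-separability objective, DE/rand/1 variant, and synthetic data below are illustrative stand-ins for the KFKT-based criterion in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two toy classes; the RBF kernel width sigma is the parameter to learn.
    A = rng.normal(0.0, 1.0, size=(40, 2))
    B = rng.normal(3.0, 1.0, size=(40, 2))

    def separability(sigma):
        """Discrimination score: within-class minus between-class similarity."""
        def K(X, Y):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))
        return K(A, A).mean() + K(B, B).mean() - 2.0 * K(A, B).mean()

    def de_maximize(f, lo, hi, pop=20, gens=60, F=0.7, CR=0.9):
        """Minimal DE/rand/1/bin for a one-dimensional parameter (maximizing)."""
        x = rng.uniform(lo, hi, pop)
        fx = np.array([f(v) for v in x])
        for _ in range(gens):
            for i in range(pop):
                a, b, c = x[rng.choice([j for j in range(pop) if j != i],
                                       3, replace=False)]
                trial = a + F * (b - c) if rng.random() < CR else x[i]
                trial = np.clip(trial, lo, hi)
                ft = f(trial)
                if ft > fx[i]:                  # keep the better candidate
                    x[i], fx[i] = trial, ft
        return x[fx.argmax()], fx.max()

    best_sigma, best_score = de_maximize(separability, 0.05, 10.0)
    print(best_sigma, best_score)
    ```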

  19. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
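    For context, LS-SVM training itself reduces to a single symmetric linear system, which is why the kernel and regularization choices discussed above dominate its performance. A minimal sketch (Suykens-style formulation; the data and parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary problem with labels in {-1, +1}.
    X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(+1, 0.5, (30, 2))])
    y = np.array([-1.0] * 30 + [+1.0] * 30)

    def rbf_gram(X, Y, sigma=1.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def lssvm_train(X, y, gamma=10.0, sigma=1.0):
        """LS-SVM training as one linear system:
           [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
        n = len(y)
        K = rbf_gram(X, X, sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        rhs = np.concatenate([[0.0], y])
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]                  # bias b, coefficients alpha

    b, alpha = lssvm_train(X, y)
    pred = np.sign(rbf_gram(X, X) @ alpha + b)
    print((pred == y).mean())                   # training accuracy
    ```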

  20. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.
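    A hedged sketch of the chemometric step: a small NIPALS-style PLS1 regression on synthetic stand-ins for single-kernel spectra and oil content. The data, component count, and model form are illustrative assumptions, not the study's calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic "spectra": oil is a noisy linear function of latent features.
    n, p = 80, 50
    latent = rng.normal(size=(n, 3))
    X = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
    oil = latent @ np.array([1.0, -0.5, 0.3]) + 0.05 * rng.normal(size=n)

    def pls1_fit(X, y, n_comp=3):
        """PLS1 (NIPALS): returns centering info and regression coefficients."""
        Xm, ym = X.mean(0), y.mean()
        Xr, yr = X - Xm, y - ym
        W, P, Q = [], [], []
        for _ in range(n_comp):
            w = Xr.T @ yr
            w /= np.linalg.norm(w)              # weight vector
            t = Xr @ w                          # scores
            p_ = Xr.T @ t / (t @ t)             # X loadings
            q = yr @ t / (t @ t)                # y loading
            Xr = Xr - np.outer(t, p_)           # deflation
            yr = yr - q * t
            W.append(w); P.append(p_); Q.append(q)
        W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
        beta = W @ np.linalg.solve(P.T @ W, Q)  # regression coefficients
        return Xm, ym, beta

    Xm, ym, beta = pls1_fit(X, oil)
    pred = (X - Xm) @ beta + ym
    r2 = 1 - ((oil - pred) ** 2).sum() / ((oil - oil.mean()) ** 2).sum()
    print(round(r2, 3))
    ```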

  1. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement of iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS with both medium kernel and sharp kernel applied. Image noise and the stent diameter were investigated. Image noise was measured both in background vessel and in-stent lumen as objective image evaluation. Image noise score and stent score were performed as subjective image evaluation. The CTCA images reconstructed with IRIS were associated with significant noise reduction compared to that of CTCA images reconstructed using FBP technique in both of background vessel and in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% in medium kernel (P

  2. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with highly nonlinear distributions. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that can remedy this drawback. KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual, ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims to learn a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. Solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with state-of-the-art methods.

  3. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel, or its normalized form, with one of the pairwise kernels by plugging the former into the latter. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We then evaluated our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method. We propose new methods to predict heterodimers using a machine learning-based approach: we train a support vector machine (SVM) to discriminate interacting from non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
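    The two pairwise kernels have compact closed forms, sketched below with an illustrative RBF base kernel; the feature vectors are synthetic stand-ins for the protein data sources named above.

    ```python
    import numpy as np

    def rbf(u, v, sigma=1.0):
        d = u - v
        return float(np.exp(-d @ d / (2 * sigma ** 2)))

    def tppk(pair1, pair2, k=rbf):
        """Tensor Product Pairwise Kernel:
           K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c)."""
        (a, b), (c, d) = pair1, pair2
        return k(a, c) * k(b, d) + k(a, d) * k(b, c)

    def mlpk(pair1, pair2, k=rbf):
        """Metric Learning Pairwise Kernel:
           K((a,b),(c,d)) = (k(a,c) - k(a,d) - k(b,c) + k(b,d))^2."""
        (a, b), (c, d) = pair1, pair2
        return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

    rng = np.random.default_rng(3)
    a, b, c, d = rng.normal(size=(4, 5))        # feature vectors of 4 proteins

    # Both kernels are invariant to the order of proteins within a pair.
    print(tppk((a, b), (c, d)), tppk((b, a), (c, d)))
    print(mlpk((a, b), (c, d)), mlpk((b, a), (c, d)))
    ```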

  4. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    PubMed

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions have not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two different locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variations, respectively. Increase of KW and KT and reduction of the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs will help realize these breeding goals.

  5. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
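    A minimal sketch of first-level kernel learning: the kernel expansion coefficients and a kernel width are optimized jointly on a training criterion with an additional regularizer acting on the kernel parameter. The data, regularizers, numeric gradient, and plain gradient descent are illustrative assumptions, not the paper's LS-SVM formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy regression data.
    X = rng.uniform(-2, 2, size=(30, 1))
    y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=30)

    def gram(X, log_sigma):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * np.exp(2 * log_sigma)))

    def objective(params, lam1=1e-3, lam2=1e-2):
        """Training criterion plus a regularizer on the kernel parameter,
        so alpha and the kernel width are fit together at the first level."""
        alpha, log_sigma = params[:-1], params[-1]
        K = gram(X, log_sigma)
        resid = y - K @ alpha
        return resid @ resid + lam1 * alpha @ K @ alpha + lam2 * log_sigma ** 2

    def num_grad(f, p, eps=1e-5):
        g = np.zeros_like(p)
        for i in range(len(p)):
            e = np.zeros_like(p); e[i] = eps
            g[i] = (f(p + e) - f(p - e)) / (2 * eps)
        return g

    params = np.concatenate([np.zeros(30), [0.0]])  # alpha = 0, log sigma = 0
    loss0 = objective(params)
    for _ in range(300):                            # plain gradient descent
        params -= 2e-4 * num_grad(objective, params)
    print(loss0, objective(params))                 # loss should decrease
    ```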

  6. Can accretion disk properties observationally distinguish black holes from naked singularities?

    NASA Astrophysics Data System (ADS)

    Kovács, Z.; Harko, T.

    2010-12-01

    Naked singularities are hypothetical astrophysical objects, characterized by a gravitational singularity without an event horizon. Penrose has proposed a conjecture, according to which there exists a cosmic censor who forbids the occurrence of naked singularities. Distinguishing between astrophysical black holes and naked singularities is a major challenge for present day observational astronomy. In the context of stationary and axially symmetrical geometries, a possibility of differentiating naked singularities from black holes is through the comparative study of thin accretion disks properties around rotating naked singularities and Kerr-type black holes, respectively. In the present paper, we consider accretion disks around axially-symmetric rotating naked singularities, obtained as solutions of the field equations in the Einstein-massless scalar field theory. A first major difference between rotating naked singularities and Kerr black holes is in the frame dragging effect, the angular velocity of a rotating naked singularity being inversely proportional to its spin parameter. Because of the differences in the exterior geometry, the thermodynamic and electromagnetic properties of the disks (energy flux, temperature distribution and equilibrium radiation spectrum) are different for these two classes of compact objects, consequently giving clear observational signatures that could discriminate between black holes and naked singularities. For specific values of the spin parameter and of the scalar charge, the energy flux from the disk around a rotating naked singularity can exceed by several orders of magnitude the flux from the disk of a Kerr black hole. 
In addition, it is shown that the conversion efficiency of the accreting mass into radiation by rotating naked singularities is always higher than the conversion efficiency for black holes, i.e., naked singularities provide a much more efficient mechanism for converting mass into radiation than black holes. Thus, these observational signatures may provide the necessary tools for clearly distinguishing rotating naked singularities from Kerr-type black holes.

  7. Adaptive kernel function using line transect sampling

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2018-04-01

    The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is conducted to compare the performance of the proposed estimators with that of the classical kernel estimators.
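    The classical kernel estimator of f(0) can be sketched as a boundary-corrected kernel density estimate at zero perpendicular distance; the half-normal simulation and Silverman bandwidth below are illustrative, not the paper's proposed adaptation.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Simulated perpendicular detection distances with a half-normal
    # detection function; the true f(0) is sqrt(2/pi) for unit scale.
    distances = np.abs(rng.normal(0.0, 1.0, size=5000))

    def f0_kernel(x, h=None):
        """Kernel estimate of f(0) with reflection about the transect line,
        using a Gaussian kernel; h defaults to Silverman's rule."""
        n = len(x)
        if h is None:
            h = 1.06 * x.std() * n ** (-0.2)
        # Reflection doubles each point's kernel mass at the boundary 0.
        return (2.0 / (n * h)) * np.sum(
            np.exp(-(x / h) ** 2 / 2) / np.sqrt(2 * np.pi))

    true_f0 = np.sqrt(2.0 / np.pi)
    print(f0_kernel(distances), true_f0)
    ```

    The slight downward bias of the plain estimator at the boundary is exactly what motivates the adapted kernels studied in the paper.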

  8. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  9. Are Singularities Integral to General Theory of Relativity?

    NASA Astrophysics Data System (ADS)

    Krori, K.; Dutta, S.

    2011-11-01

    Since the 1960s general relativists have been deeply obsessed with the possibilities of GTR singularities - black hole as well as cosmological singularities. Senovilla, for the first time, followed by others, showed that there are cylindrically symmetric cosmological space-times which are free of singularities. On the other hand, Krori et al. have recently shown that spherically symmetric cosmological space-times - which later reduce to FRW space-times - may also be free of singularities. Besides, Mitra has in the meantime come forward with some realistic calculations which seem to rule out the possibility of a black hole singularity. So whether singularities are integral to GTR seems to come under a shadow.

  10. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    PubMed

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

    Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentrations were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures, nor pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also affected (P < 0.01) allocation of embryo chemical compounds. Negative correlations between embryo oil concentration and those of starch (r = 0.98, P < 0.01) and soluble sugars (r = 0.95, P < 0.05) were found. Coincidentally, embryos with low oil concentration had an increased (P < 0.05-0.10) scutellum cell area occupied by starch granules and fewer oil bodies. The effects of pollen source on both embryo/kernel ratio and allocation of embryo chemicals seem to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.

  11. Ground-state magnetization of the Ising spin glass: A recursive numerical method and Chen-Ma scaling

    NASA Astrophysics Data System (ADS)

    Sepehrinia, Reza; Chalangari, Fartash

    2018-03-01

    The ground-state properties of the quasi-one-dimensional (Q1D) Ising spin glass are investigated using an exact numerical approach and analytical arguments. A set of coupled recursive equations for the ground-state energy is introduced and solved numerically. For various types of coupling distribution, we obtain accurate results for the magnetization, particularly in the presence of a weak external magnetic field. We show that in the weak magnetic field limit, as in the 1D model, the magnetization exhibits a singular power-law behavior with divergent susceptibility. Remarkably, the spectrum of magnetic exponents is markedly different from that of the 1D system, even in the case of two coupled chains. The magnetic exponent crosses over from a value that depends on the coupling distribution to a constant independent of the distribution. We provide an analytic theory for these observations by extending the Chen-Ma argument to the Q1D case. We derive an analytical formula for the exponent which is in perfect agreement with the numerical results.
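    The recursive ground-state construction can be illustrated in the strictly 1D limit, where the minimal energy conditioned on the value of the last spin obeys a simple two-state recursion; the couplings, field strength, and chain length below are illustrative.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(2)

    n = 12
    J = rng.choice([-1.0, 1.0], size=n - 1)     # random +/-J bonds
    h = 0.1                                     # weak external field

    def ground_state_recursive(J, h):
        """Recursion: E[s] is the minimal energy of the chain so far,
        conditioned on the last spin being s (s = -1 or +1)."""
        E = {+1: -h, -1: +h}                    # single-spin energies -h*s
        for Jk in J:
            E = {
                s2: min(E[s1] - Jk * s1 * s2 - h * s2 for s1 in (-1, +1))
                for s2 in (-1, +1)
            }
        return min(E.values())

    def ground_state_brute(J, h):
        """Exhaustive check of all 2^n configurations (small n only)."""
        best = np.inf
        for spins in itertools.product((-1, 1), repeat=len(J) + 1):
            e = -sum(Jk * spins[k] * spins[k + 1] for k, Jk in enumerate(J))
            e -= h * sum(spins)
            best = min(best, e)
        return best

    print(ground_state_recursive(J, h), ground_state_brute(J, h))
    ```

    The recursion runs in O(n) time, whereas the brute-force check is exponential; the Q1D case in the paper couples several such recursions.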

  12. Compact Groups analysis using weak gravitational lensing

    NASA Astrophysics Data System (ADS)

    Chalela, Martín; Gonzalez, Elizabeth Johana; Garcia Lambas, Diego; Foëx, Gael

    2017-05-01

    We present a weak lensing analysis of a sample of Sloan Digital Sky Survey compact groups (CGs). Using the measured radial density contrast profile, we derive the average masses under the assumption of spherical symmetry, obtaining a velocity dispersion of σV = 270 ± 40 km s⁻¹ for the singular isothermal sphere model and R200 = 0.53 ± 0.10 h70⁻¹ Mpc for the NFW model. We test three different definitions of CG centres to identify which best traces the true dark matter halo centre, concluding that a luminosity-weighted centre is the most suitable choice. We also study the dependence of the lensing signal on CG physical radius, group surface brightness and morphological mixing. We find that groups with more concentrated galaxy members show steeper mass profiles and larger velocity dispersions. We argue that both a lower fraction of interlopers and a genuinely steeper profile could be playing a role in this effect. Straightforward velocity dispersion estimates from member spectroscopy yield σV ≈ 230 km s⁻¹, in agreement with our lensing results.

  13. Superfluid ³He in globally isotropic random media

    NASA Astrophysics Data System (ADS)

    Ikeda, Ryusuke; Aoyama, Kazushi

    2009-02-01

    Recent theoretical and experimental studies of superfluid ³He in aerogels with a global anisotropy, created, e.g., by an external stress, have definitively shown that the A-like phase with equal-spin pairing in such aerogel samples is in the Anderson-Brinkman-Morel (ABM) (or axial) pairing state. In this paper, the A-like phase of superfluid ³He in a globally isotropic aerogel is studied in detail by assuming a weakly disordered system in which singular topological defects are absent. Through calculation of the free energy, a disordered ABM state is found to be the best candidate for the pairing state of the globally isotropic A-like phase. Further, it is found through a one-loop renormalization-group calculation that the coreless continuous vortices (or vortex-Skyrmions) are irrelevant to the long-distance behavior of disorder-induced textures, and that superfluidity is maintained despite the lack of conventional superfluid long-range order. Therefore, the globally isotropic A-like phase at weak disorder is, as in the case with a globally stretched anisotropy, a glass phase with ABM pairing, and it shows superfluidity.

  14. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall be...

  15. 7 CFR 51.2090 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defect which makes a kernel or piece of kernel unsuitable for human consumption, and includes decay...: Shriveling when the kernel is seriously withered, shrunken, leathery, tough or only partially developed: Provided, that partially developed kernels are not considered seriously damaged if more than one-fourth of...

  16. Anisotropic hydrodynamics with a scalar collisional kernel

    NASA Astrophysics Data System (ADS)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2↔2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  17. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
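    The first application, incorporating labels into a standard kernel, can be sketched by adding a scaled "ideal kernel" (a same-label indicator matrix, which is linear in the label information) to the base kernel. This is an illustrative reading rather than the paper's exact formulation; the trade-off weight `lam` and the toy data are hypothetical:

    ```python
    import numpy as np

    def ideal_kernel(labels):
        """Ideal kernel: entry (i, j) is 1 when samples i and j share a label."""
        labels = np.asarray(labels)
        return (labels[:, None] == labels[None, :]).astype(float)

    def label_regularized_kernel(K, labels, lam=0.5):
        """Blend a base kernel with the ideal kernel; `lam` is a hypothetical
        trade-off weight, not a value from the paper."""
        return K + lam * ideal_kernel(labels)

    # toy example: RBF base kernel on four 1-D points, two per class
    X = np.array([0.0, 0.1, 2.0, 2.1])
    y = np.array([0, 0, 1, 1])
    K = np.exp(-(X[:, None] - X[None, :]) ** 2)
    K_reg = label_regularized_kernel(K, y, lam=0.5)
    # same-label similarities are boosted; cross-label entries are unchanged
    ```

    Since both summands are positive semidefinite, the regularized matrix remains a valid kernel.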

  18. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative forming fluids, with subsequent characterization of the produced kernels and the used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  19. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    NASA Astrophysics Data System (ADS)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

    The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  20. The site, size, spatial stability, and energetics of an X-ray flare kernel

    NASA Technical Reports Server (NTRS)

    Petrasso, R.; Gerassimenko, M.; Nolte, J.

    1979-01-01

    The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise and somewhat during its peak are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona rather than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2 × 10²⁴ erg/s is minimally required.

  1. Algorithms for sorting unsigned linear genomes by the DCJ operations.

    PubMed

    Jiang, Haitao; Zhu, Binhai; Zhu, Daming

    2011-02-01

    The double cut and join operation (abbreviated as DCJ) has been extensively used for genomic rearrangement. Although the DCJ distance between signed genomes with both linear and circular (uni- and multi-) chromosomes is well studied, the only known result for the NP-complete unsigned DCJ distance problem is an approximation algorithm for unsigned linear unichromosomal genomes. In this article, we study the problem of computing the DCJ distance on two unsigned linear multichromosomal genomes (abbreviated as UDCJ). We devise a 1.5-approximation algorithm for UDCJ by exploiting the distance formula for signed genomes. In addition, we show that UDCJ admits a weak kernel of size 2k and hence an FPT algorithm running in O(2^(2k)·n) time.

  2. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
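    For the Gaussian kernel, the distance-constraint idea can be sketched in a few lines: feature-space distances to known points are converted to input-space distances through the kernel, and the pre-image is then located by solving the resulting constraints with linear algebra. A minimal sketch under that reading (the anchors, σ, and test point are made up; this is not the authors' code):

    ```python
    import numpy as np

    def preimage_from_distances(anchors, d2):
        """Locate x from squared input-space distances d2 to known anchors by
        linearizing ||x - a_i||^2 = d2_i (subtracting the first equation)."""
        a0, d0 = anchors[0], d2[0]
        A = 2.0 * (anchors[1:] - a0)
        b = d0 - d2[1:] + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    sigma = 1.0
    rng = np.random.default_rng(0)
    anchors = rng.normal(size=(10, 3))
    x_true = np.array([0.2, -0.5, 0.1])

    # feature-space squared distances under the Gaussian kernel: 2 - 2*k(x, a_i)
    k = np.exp(-np.sum((anchors - x_true) ** 2, axis=1) / (2 * sigma ** 2))
    dF2 = 2.0 - 2.0 * k
    # invert the kernel to recover input-space squared distances
    d2 = -2.0 * sigma ** 2 * np.log(1.0 - dF2 / 2.0)
    x_hat = preimage_from_distances(anchors, d2)
    ```

    With exact distances the linear system recovers the point exactly; in a denoising setting the distances come from a projected feature vector, so the same solve gives a least-squares pre-image.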

  3. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  4. Development of a kernel function for clinical data.

    PubMed

    Daemen, Anneleen; De Moor, Bart

    2009-01-01

    For most diseases and examinations, clinical data such as age, gender and medical history guides clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of the relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for this data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function compared to the linear kernel function.
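    A minimal sketch of such a clinical kernel function, assuming the commonly cited form in which a continuous or ordinal variable contributes (range − |x − z|)/range, a nominal variable contributes an exact-match indicator, and the per-variable similarities are averaged (the variable names and toy patients are hypothetical):

    ```python
    def clinical_kernel_continuous(x, z, lo, hi):
        """Continuous/ordinal variable on [lo, hi]: (range - |x - z|) / range."""
        r = hi - lo
        return (r - abs(x - z)) / r

    def clinical_kernel_nominal(x, z):
        """Nominal variable: exact match or nothing."""
        return 1.0 if x == z else 0.0

    def clinical_kernel(p1, p2, ranges):
        """Average the per-variable similarities; ranges[i] is (lo, hi) for a
        continuous variable, or None for a nominal one."""
        sims = []
        for a, b, rng in zip(p1, p2, ranges):
            sims.append(clinical_kernel_nominal(a, b) if rng is None
                        else clinical_kernel_continuous(a, b, *rng))
        return sum(sims) / len(sims)

    # two hypothetical patients described by (age, gender, tumour grade)
    ranges = [(20, 90), None, (1, 4)]
    k = clinical_kernel((55, "F", 2), (60, "F", 3), ranges)
    ```

    Unlike a linear kernel on raw values, each variable contributes on the same [0, 1] scale regardless of its units, and a patient is always maximally similar to itself.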

  5. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    DOE PAGES

    Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...

    2012-01-01

    Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space; (2) data-parallel kernels; and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can differ between manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].

  6. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension-reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter remains a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones but also reduce computational time and thus improve efficiency.
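    As a loose illustration of why the scale parameter matters, the sketch below scans σ and scores each Gaussian kernel with a crude class-separability measure (mean within-class minus mean between-class similarity). This stand-in criterion is ours for illustration only; it is not the reconstruction-error method the paper proposes:

    ```python
    import numpy as np

    def gaussian_kernel(X, sigma):
        """Gaussian (RBF) kernel matrix for rows of X."""
        D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-D2 / (2 * sigma ** 2))

    def separability(K, y):
        """Crude score: mean within-class minus mean between-class similarity."""
        same = y[:, None] == y[None, :]
        return K[same].mean() - K[~same].mean()

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
    y = np.array([0] * 30 + [1] * 30)

    sigmas = [0.01, 0.1, 1.0, 10.0, 100.0]
    scores = [separability(gaussian_kernel(X, s), y) for s in sigmas]
    best_sigma = sigmas[int(np.argmax(scores))]
    ```

    Too small a σ makes every point similar only to itself; too large a σ makes everything similar to everything; an intermediate scale separates the classes, which is the regime any principled selection rule tries to find.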

  7. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    NASA Astrophysics Data System (ADS)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image of a sharp kernel to that of a standard kernel, and evaluates its impact on the variability reduction of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm thickness, and 2 reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting CT images of the sharp kernel to the standard kernel, with the criterion of minimizing the mean squared error between the network output and the target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 ± 7.28% for the B50f data set, 10.82 ± 6.71% for the B30f data set, and 8.87 ± 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f was reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique to CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI when patient CT scans were performed with different kernels.
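    The two emphysema indices compared above have simple standard definitions: RA950 is the percentage of lung voxels below −950 HU, and Perc15 is the HU value below which 15% of the voxels lie. A small sketch on synthetic data (the histogram parameters are invented, and this is independent of the ImagePrism software used in the study):

    ```python
    import numpy as np

    def emphysema_indices(hu, threshold=-950.0, percentile=15):
        """RA950: percentage of voxels below `threshold` HU.
        Perc15: HU value below which `percentile`% of the voxels lie."""
        hu = np.asarray(hu, dtype=float)
        ra = 100.0 * np.mean(hu < threshold)
        perc = np.percentile(hu, percentile)
        return ra, perc

    # synthetic lung histogram: mostly normal parenchyma plus a small
    # emphysematous (very low attenuation) component
    rng = np.random.default_rng(1)
    hu = np.concatenate([rng.normal(-870, 30, 9000), rng.normal(-970, 10, 1000)])
    ra950, perc15 = emphysema_indices(hu)
    ```

    Because both indices are simple functionals of the HU histogram, any kernel-induced shift of that histogram propagates directly into the biomarker, which is why kernel conversion reduces their variability.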

  8. Metabolic network prediction through pairwise rational kernels.

    PubMed

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are series of biochemical reactions in which the product (output) of one reaction serves as the substrate (input) to another. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on the annotation of the genes, which propagates error accumulation when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been used effectively in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels.
    Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy values improved, while maintaining lower construction and execution times. The power of using kernels is that almost any sort of data can be represented by a kernel. Therefore, completely disparate types of data can be combined to add power to kernel-based machine learning methods. When we compared our proposal using PRKs with other similar kernels, the execution times decreased with no compromise in accuracy. We also showed that by combining PRKs with other kernels that include evolutionary information, the accuracy can also be improved. As our proposal can use any type of sequence data, genes do not need to be properly annotated, avoiding error accumulation due to incorrect previous annotations.

  9. Phytochemicals from Mangifera pajang Kosterm and their biological activities.

    PubMed

    Ahmad, Sadikah; Sukari, Mohd Aspollah; Ismail, Nurussaadah; Ismail, Intan Safinar; Abdul, Ahmad Bustamam; Abu Bakar, Mohd Fadzelly; Kifli, Nurolaini; Ee, Gwendoline C L

    2015-03-26

    Mangifera pajang Kosterm is a plant species from the mango family (Anacardiaceae). The fruits are edible and have been reported to have high antioxidant content. However, the detailed phytochemical studies of the plant have not been reported previously. This study investigates the phytochemicals and biological activities of different parts of Mangifera pajang. The plant samples were extracted with solvents of different polarity to obtain the crude extracts. The isolated compounds were characterized using spectroscopic methods. The extracts and isolated compounds were subjected to cytotoxicity tests using human breast cancer (MCF-7), human cervical cancer (HeLa) and human colon cancer (HT-29) cells. The free radical scavenging activity test was conducted using the DPPH assay. Antimicrobial activity tests were carried out by using the disc diffusion method. Phytochemical investigation on the kernel, stem bark and leaves of Mangifera pajang led to the isolation of methyl gallate (1), a mixture of benzaldehyde (2) and benzyl alcohol (3), mangiferonic acid (4), 3β-hydroxy-cycloart-24-ene-26-oic acid (5), 3β,23-dihydroxy-cycloart-24-ene-26-oic acid (6), lupeol (7), lupenone (8), β-sitosterol (9), stigmasterol (10), trans-sobrerol (11) and quercitrin (12). Crude ethyl acetate and methanol extracts from the kernel indicated strong cytotoxic activity towards MCF-7 and HeLa cells with IC50 values of less than 10 μg/mL, while petroleum ether, chloroform and ethyl acetate extracts of the stem bark showed strong to moderate activity against MCF-7, HeLa and HT-29 cancer cell lines with IC50 values ranging from 5 to 30 μg/mL. As for the antimicrobial assays, only the ethyl acetate and methanol extracts from the kernel displayed some inhibition against the microbes in the antibacterial assays. 
The kernel extracts showed the highest free radical scavenging activity, with IC50 values of less than 10 μg/mL, while the ethyl acetate and methanol extracts of the leaves displayed only weak activity in the DPPH assays. Phytochemical investigations on various parts of Mangifera pajang have identified terpenoids and a flavonol derivative as major constituents. Bioassay studies have indicated that the crude extracts and isolated compounds have potential as naturally derived anticancer and antimicrobial agents, besides possessing high free radical scavenging activity.

  10. Differential metabolome analysis of field-grown maize kernels in response to drought stress

    USDA-ARS?s Scientific Manuscript database

    Drought stress constrains maize kernel development and can exacerbate aflatoxin contamination. In order to identify drought responsive metabolites and explore pathways involved in kernel responses, a metabolomics analysis was conducted on kernels from a drought tolerant line, Lo964, and a sensitive ...

  11. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  12. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  13. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  14. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  15. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  16. On the dynamic singularities in the control of free-floating space manipulators

    NASA Technical Reports Server (NTRS)

    Papadopoulos, E.; Dubowsky, S.

    1989-01-01

    It is shown that free-floating space manipulator systems have configurations which are dynamically singular. At a dynamically singular position, the manipulator is unable to move its end effector in some direction. This problem appears in any free-floating space manipulator system that permits the vehicle to move in response to manipulator motion without correction from the vehicle's attitude control system. Dynamic singularities are functions of the dynamic properties of the system; their existence and locations cannot be predicted solely from the kinematic structure of the manipulator, unlike the singularities for fixed-base manipulators. It is also shown that the location of these dynamic singularities in the workspace is dependent upon the path taken by the manipulator in reaching them. Dynamic singularities must be considered in the control, planning and design of free-floating space manipulator systems. A method for calculating these dynamic singularities is presented, and it is shown that the system parameters can be selected to reduce the effect of dynamic singularities on a system's performance.

  17. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation while leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended here to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
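    As a toy illustration of biasing the distance-to-collision kernel (absorption only, so the scattering kernel never enters), the sketch below stretches flight lengths with a smaller fictitious cross-section and corrects transmitted histories with the likelihood-ratio weight; all numbers are arbitrary:

    ```python
    import numpy as np

    def transmission_estimate(sigma_t, thickness, sigma_star, n=200_000, seed=0):
        """Estimate slab transmission exp(-sigma_t * thickness) by sampling the
        distance to collision from a biased exponential with parameter
        sigma_star, then weighting transmitted histories by the ratio of true
        to biased survival probabilities (absorption-only toy problem)."""
        rng = np.random.default_rng(seed)
        s = rng.exponential(1.0 / sigma_star, size=n)   # biased flight lengths
        transmitted = s > thickness
        # likelihood-ratio weight for a history that crosses the slab
        weight = np.exp(-sigma_t * thickness) / np.exp(-sigma_star * thickness)
        return transmitted.mean() * weight

    est = transmission_estimate(sigma_t=2.0, thickness=5.0, sigma_star=0.5)
    exact = np.exp(-2.0 * 5.0)   # analytic transmission for comparison
    ```

    Sampling the true exponential directly would see almost no histories cross a 10-mean-free-path slab; with the stretched flights a sizable fraction cross, and the constant weight restores an unbiased estimate with far lower variance.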

  18. Performance Characteristics of a Kernel-Space Packet Capture Module

    DTIC Science & Technology

    2010-03-01

    AFIT/GCO/ENG/10-03, thesis, Air Force Institute of Technology. The proof of concept for this research is the design, development, and comparative performance analysis of a kernel-level N2d capture module. The module requires no changes to kernel code and can be used for both user-space and kernel-space capture applications, in order to control the comparative performance analysis.

  19. High-throughput method for ear phenotyping and kernel weight estimation in maize using ear digital imaging.

    PubMed

    Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M

    2018-01-01

    Grain yield and ear and kernel attributes can help in understanding the performance of maize plants under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters are, however, still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode on ImageJ, an open-source software. Kernel weight was estimated using the total kernel number derived from the number of kernels visible on the image and the average kernel size. Data showed good agreement in terms of accuracy and precision between ground-truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that for measured grain weight. A limitation of the method for kernel weight estimation is discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic bases of grain yield determinants.

  20. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
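    A one-dimensional sketch of the contrast between Dirac-delta and Gaussian-kernel particles, assuming equal-mass particles and a fixed kernel width h (the numbers are illustrative, not from the paper):

    ```python
    import numpy as np

    def kernel_concentration(x, particles, mass, h):
        """Concentration carried by Gaussian-kernel particles of width h
        (1-D sketch), in place of Dirac-delta particles."""
        return sum(
            m * np.exp(-(x - p) ** 2 / (2 * h ** 2)) / np.sqrt(2 * np.pi * h ** 2)
            for p, m in zip(particles, mass)
        )

    rng = np.random.default_rng(3)
    particles = rng.uniform(0.3, 0.7, 20)   # 20 particles carrying equal mass
    mass = np.full(20, 1.0 / 20)
    x = np.linspace(-1.0, 2.0, 3001)
    dx = x[1] - x[0]
    c = kernel_concentration(x, particles, mass, h=0.05)
    total_mass = c.sum() * dx               # should recover the total mass, 1.0
    ```

    The kernel representation yields a smooth, mass-conserving concentration field from far fewer particles than a Dirac representation would need, which is the trade-off the paper quantifies.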

  1. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the kernel eigenvectors by importance in terms of entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
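    The sorting that KECA performs can be sketched directly: decompose the kernel matrix and rank eigenvectors by their contribution λᵢ(1ᵀeᵢ)² to the Rényi entropy estimate. This is a standard presentation of KECA; the data and kernel width here are arbitrary, and the OKECA rotation itself is not implemented:

    ```python
    import numpy as np

    def keca_scores(K):
        """Entropy contribution of each kernel eigenvector, lambda_i*(1^T e_i)^2,
        the quantity KECA sorts by (kernel PCA would sort by lambda_i alone)."""
        lam, E = np.linalg.eigh(K)
        scores = lam * E.sum(axis=0) ** 2
        order = np.argsort(scores)[::-1]
        return scores, order

    rng = np.random.default_rng(4)
    X = rng.normal(size=(40, 2))
    D2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    K = np.exp(-D2 / 2.0)
    scores, order = keca_scores(K)

    # sanity check: the Renyi entropy estimate (1^T K 1) / N^2 decomposes
    # exactly into the sum of the per-eigenvector contributions
    N = K.shape[0]
    entropy_est = K.sum() / N ** 2
    ```

    An eigenvector with a large eigenvalue but near-zero mean contributes little entropy, which is why the entropy ordering can differ sharply from the variance ordering of kernel PCA.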

  2. Brain tumor image segmentation using kernel dictionary learning.

    PubMed

    Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H

    2015-08-01

    Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The preliminary performance obtained is competitive with the state of the art. The discriminative kernel DL approach is seen to reduce computational burden without much sacrifice in performance.

  3. SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL

    PubMed Central

    Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan

    2013-01-01

    Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often use standard kernels. In this paper, we propose a positive definite kernel called “structure kernel”, which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) the global term that measures the global visual similarity of two objects; 2) the part term that measures the visual similarity of corresponding parts; 3) the spatial term that measures the spatial similarity of the geometric configuration of parts. The contribution of this paper is to generalize the discriminant capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108
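A hedged sketch of the three-term construction: because a weighted sum of positive definite kernels is itself positive definite, a "structure kernel" can be assembled from global, part, and spatial RBF terms. The weights, descriptors, and one-to-one part matching below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian RBF similarity between two descriptor vectors."""
    return float(np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2)))

def structure_kernel(obj1, obj2, w=(1.0, 1.0, 1.0)):
    """Similarity of two part-based objects as a weighted sum of a
    global term, a part-appearance term, and a part-geometry term."""
    g1, parts1, xy1 = obj1
    g2, parts2, xy2 = obj2
    k_global = rbf(g1, g2)                                     # whole-object descriptors
    k_part = np.mean([rbf(p, q) for p, q in zip(parts1, parts2)])   # matched parts
    k_spatial = np.mean([rbf(a, b) for a, b in zip(xy1, xy2)])      # part positions
    return w[0] * k_global + w[1] * k_part + w[2] * k_spatial

# Toy objects: (global descriptor, part descriptors, part positions)
objA = (np.ones(4), [np.zeros(3), np.ones(3)], [np.zeros(2), np.ones(2)])
objB = (np.ones(4) * 0.9, [np.zeros(3), np.ones(3) * 1.1], [np.zeros(2), np.ones(2)])
kAA = structure_kernel(objA, objA)
kAB = structure_kernel(objA, objB)
```

With unit weights, an object's similarity to itself is 3.0 (each RBF term equals 1), and any perturbation of descriptors or geometry lowers the score.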

  4. Burrower bugs (Heteroptera: Cydnidae) in peanut: seasonal species abundance, tillage effects, grade reduction effects, insecticide efficacy, and management.

    PubMed

    Chapin, Jay W; Thomas, James S

    2003-08-01

    Pitfall traps placed in South Carolina peanut, Arachis hypogaea (L.), fields collected three species of burrower bugs (Cydnidae): Cyrtomenus ciliatus (Palisot de Beauvois), Sehirus cinctus cinctus (Palisot de Beauvois), and Pangaeus bilineatus (Say). Cyrtomenus ciliatus was rarely collected. Sehirus cinctus produced a nymphal cohort in peanut during May and June, probably because of abundant henbit seeds, Lamium amplexicaule L., in strip-till production systems. No S. cinctus were present during peanut pod formation. Pangaeus bilineatus was the most abundant species collected and the only species associated with peanut kernel feeding injury. Overwintering P. bilineatus adults were present in a conservation tillage peanut field before planting, and two to three subsequent generations were observed. Few nymphs were collected until the R6 (full seed) growth stage. Tillage and choice of cover crop affected P. bilineatus populations. Peanuts strip-tilled into corn or wheat residue had greater P. bilineatus populations and kernel feeding than conventional tillage or strip-tillage into rye residue. Fall tillage before planting a wheat cover crop also reduced burrower bug feeding on peanut. At-pegging (early July) granular chlorpyrifos treatments were most consistent in suppressing kernel feeding. Kernels fed on by P. bilineatus were on average 10% lighter than unfed-on kernels. Pangaeus bilineatus feeding reduced peanut grade by reducing individual kernel weight and increasing the percentage of damaged kernels. Each 10% increase in kernels fed on by P. bilineatus was associated with a 1.7% decrease in total sound mature kernels, and kernel feeding levels above 30% increased the risk of damaged kernel grade penalties.

  5. Imaging and automated detection of Sitophilus oryzae (Coleoptera: Curculionidae) pupae in hard red winter wheat.

    PubMed

    Toews, Michael D; Pearson, Tom C; Campbell, James F

    2006-04-01

    Computed tomography, an imaging technique commonly used for diagnosing internal human health ailments, uses multiple x-rays and sophisticated software to recreate a cross-sectional representation of a subject. The use of this technique to image hard red winter wheat, Triticum aestivum L., samples infested with pupae of Sitophilus oryzae (L.) was investigated. A software program was developed to rapidly recognize and quantify the infested kernels. Samples were imaged in a 7.6-cm (o.d.) plastic tube containing 0, 50, or 100 infested kernels per kg of wheat. Interkernel spaces were filled with corn oil so as to increase the contrast between voids inside kernels and voids among kernels. Automated image processing, using a custom C language software program, was conducted separately on each 100 g portion of the prepared samples. The average detection accuracy in the five infested kernels per 100-g samples was 94.4 +/- 7.3% (mean +/- SD, n = 10), whereas the average detection accuracy in the 10 infested kernels per 100-g samples was 87.3 +/- 7.9% (n = 10). Detection accuracy in the 10 infested kernels per 100-g samples was slightly less than in the five infested kernels per 100-g samples because some infested kernels overlapped with each other or with air bubbles in the oil. A mean of 1.2 +/- 0.9 (n = 10) bubbles per tube was incorrectly classed as infested kernels in replicates containing no infested kernels. In light of these positive results, future studies should be conducted using additional grains, insect species, and life stages.

  6. Relationship of source and sink in determining kernel composition of maize

    PubMed Central

    Seebauer, Juliann R.; Singletary, George W.; Krumpelman, Paulette M.; Ruffo, Matías L.; Below, Frederick E.

    2010-01-01

    The relative role of the maternal source and the filial sink in controlling the composition of maize (Zea mays L.) kernels is unclear and may be influenced by the genotype and the N supply. The objective of this study was to determine the influence of assimilate supply from the vegetative source and utilization of assimilates by the grain sink on the final composition of maize kernels. Intermated B73×Mo17 recombinant inbred lines (IBM RILs) which displayed contrasting concentrations of endosperm starch were grown in the field with deficient or sufficient N, and the source supply altered by ear truncation (45% reduction) at 15 d after pollination (DAP). The assimilate supply into the kernels was determined at 19 DAP using the agar trap technique, and the final kernel composition was measured. The influence of N supply and kernel ear position on final kernel composition was also determined for a commercial hybrid. Concentrations of kernel protein and starch could be altered by genotype or the N supply, but remained fairly constant along the length of the ear. Ear truncation also produced a range of variation in endosperm starch and protein concentrations. The C/N ratio of the assimilate supply at 19 DAP was directly related to the final kernel composition, with an inverse relationship between the concentrations of starch and protein in the mature endosperm. The accumulation of kernel starch and protein in maize is uniform along the ear, yet adaptable within genotypic limits, suggesting that kernel composition is source limited in maize. PMID:19917600

  7. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
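A minimal sketch of the nonlinear (Gaussian) kernel and an RKHS-style prediction on a toy marker matrix. The bandwidth `h` and ridge parameter `lam` below are placeholders, not the empirically estimated values of RKHS EB, and the function names are assumptions.

```python
import numpy as np

def gaussian_kernel(M, h):
    """Gaussian (RKHS) kernel from a marker matrix M: K = exp(-h * D2),
    where D2 holds squared Euclidean distances between genotypes."""
    sq = (M ** 2).sum(axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * M @ M.T
    return np.exp(-h * np.clip(D2, 0.0, None))

def kernel_ridge_predict(K, y, lam=1.0):
    """GBLUP/RKHS-style prediction: solve (K + lam*I) alpha = y,
    then predict with K @ alpha."""
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K @ alpha

rng = np.random.default_rng(2)
M = rng.choice([-1, 0, 1], size=(30, 100))   # toy marker scores
y = rng.normal(size=30)                      # toy phenotypes
K = gaussian_kernel(M, h=0.01)
yhat = kernel_ridge_predict(K, y, lam=0.5)
```

Replacing `gaussian_kernel` with a linear kernel `M @ M.T` recovers a GBLUP-like model; the Gaussian kernel's extra flexibility is what the abstract credits for the accuracy gains.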

  8. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    PubMed

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified across multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  9. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties spatially according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
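A mixed kernel can be thought of as a location-dependent blend of two reconstructions: sharp (hard) where fine lung detail matters, smooth (soft) in soft tissue. The sketch below is purely illustrative; vendor implementations work in the reconstruction pipeline itself, and the mask and blend factor here are assumptions.

```python
import numpy as np

def mixed_reconstruction(soft_img, hard_img, lung_mask, blend=1.0):
    """Illustrative spatial blend of two CT reconstructions: use the
    hard (sharp) kernel image inside the lung mask and the soft kernel
    image elsewhere; `blend` in [0, 1] controls mask strength."""
    m = blend * lung_mask.astype(float)
    return m * hard_img + (1.0 - m) * soft_img

# Toy images in Hounsfield-unit-like values, with a fake lung mask
soft = np.full((4, 4), 10.0)
hard = np.full((4, 4), 50.0)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True                       # pretend the top half is lung
mixed = mixed_reconstruction(soft, hard, mask)
```

A smoothed (rather than binary) mask would avoid visible seams at the lung boundary, which is where the study found the mixed kernel's ratings diverged most from the standard reconstructions.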

  10. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing, manufacturing, packing, processing, preparing, treating...

  11. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
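The kernel equating framework these methods build on continuizes a discrete score distribution with Gaussian kernels, using a shrinkage factor `a` that preserves the discrete distribution's mean and variance. A minimal sketch with toy scores and a placeholder bandwidth `h`:

```python
import numpy as np
from math import erf, sqrt

def continuize(scores, probs, h):
    """Gaussian-kernel continuization of a discrete test-score
    distribution; returns a continuous CDF that matches the discrete
    mean and variance via the shrinkage factor a."""
    scores = np.asarray(scores, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mu = float(np.dot(scores, probs))
    var = float(np.dot(probs, (scores - mu) ** 2))
    a = sqrt(var / (var + h ** 2))   # variance-preserving shrinkage
    def cdf(x):
        z = (x - a * scores - (1.0 - a) * mu) / (a * h)
        phi = np.array([0.5 * (1.0 + erf(t / sqrt(2.0))) for t in z])
        return float(np.dot(probs, phi))
    return cdf

# Toy 5-point score distribution, symmetric about 2
F = continuize([0, 1, 2, 3, 4], [0.1, 0.2, 0.4, 0.2, 0.1], h=0.6)
```

An equating function is then obtained by inverting one continuized CDF at the other's values, e_Y(x) = F_Y^{-1}(F_X(x)); the local variants proposed in the abstract condition this step on ability estimates.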

  12. 7 CFR 51.1241 - Damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... which have been broken to the extent that the kernel within is plainly visible without minute... discoloration beneath, but the peanut shall be judged as it appears with the talc. (c) Kernels which are rancid or decayed. (d) Moldy kernels. (e) Kernels showing sprouts extending more than one-eighth inch from...

  13. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  14. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  15. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  16. 7 CFR 999.400 - Regulation governing the importation of filberts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Definitions. (1) Filberts means filberts or hazelnuts. (2) Inshell filberts means filberts, the kernels or edible portions of which are contained in the shell. (3) Shelled filberts means the kernels of filberts... Filbert kernels or portions of filbert kernels shall meet the following requirements: (1) Well dried and...

  17. 7 CFR 51.1404 - Tolerances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... (2) For kernel defects, by count. (i) 12 percent for pecans with kernels which fail to meet the... kernels which are seriously damaged: Provided, That not more than six-sevenths of this amount, or 6 percent, shall be allowed for kernels which are rancid, moldy, decayed or injured by insects: And provided...

  18. Enhanced gluten properties in soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  19. End-use quality of soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  20. 7 CFR 51.2560 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... are excessively thin kernels and can have black, brown or gray surface with a dark interior color and the immaturity has adversely affected the flavor of the kernel. (2) Kernel spotting refers to dark brown or dark gray spots aggregating more than one-eighth of the surface of the kernel. (g) Serious...
