Point kernel calculations of skyshine exposure rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roseberry, M.L.; Shultis, J.K.
1982-02-01
A simple point kernel model is presented for the calculation of skyshine exposure rates arising from the atmospheric reflection of gamma radiation produced by a vertically collimated or a shielded point source. This model is shown to be in good agreement with benchmark experimental data from a ⁶⁰Co source for distances out to 700 m.
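For readers unfamiliar with the technique, the point kernel idea reduces to inverse-square attenuation of primaries times a buildup correction. Below is a minimal sketch of such an estimate in Python; the gamma constant, attenuation coefficient, and buildup value are illustrative assumptions, not values from the paper.

```python
import math

def exposure_rate(source_ci, gamma_const, dist_m, mu_cm, shield_cm, buildup=1.0):
    """Bare point-kernel estimate: inverse-square geometry, exponential
    attenuation in the shield, and a multiplicative buildup factor."""
    unshielded = gamma_const * source_ci / dist_m**2      # R/h at dist_m
    return buildup * unshielded * math.exp(-mu_cm * shield_cm)

# Toy numbers: 10 Ci Co-60 source (Gamma ~ 1.32 R*m^2/(h*Ci)) behind 5 cm of
# concrete (mu ~ 0.13 cm^-1), scored 10 m away.
print(exposure_rate(10.0, 1.32, 10.0, 0.13, 5.0, buildup=1.8))
```

Skyshine itself adds an air-scatter term on top of this direct-beam kernel, which is what the model in the abstract supplies.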
SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S
Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts was run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower end of the range because it provided the minimal acceptable accuracy. Results: As expected, an increase in particle count linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.
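The parallel split described in the Methods can be illustrated with a toy batch scheme; `run_batch` below is only a stand-in for a real DOSXYZnrc batch job, and the history count mirrors the abstract's lower bound.

```python
import random
from multiprocessing import Pool

def run_batch(args):
    """Stand-in for one MC batch: seed a private RNG and score a dummy
    dose tally over the assigned particle histories."""
    seed, histories = args
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(histories)) / histories

def parallel_dose(total_histories=3_000_000, n_jobs=10):
    """Split the histories across n_jobs processes and average the tallies."""
    per_job = total_histories // n_jobs
    with Pool(n_jobs) as pool:
        tallies = pool.map(run_batch, [(s, per_job) for s in range(n_jobs)])
    return sum(tallies) / n_jobs

if __name__ == "__main__":
    print(parallel_dose())
```

Because the histories are statistically independent, wall-clock time falls nearly linearly with the number of processes, which is the behavior the Results section reports.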
Improvements to the kernel function method of steady, subsonic lifting surface theory
NASA Technical Reports Server (NTRS)
Medan, R. T.
1974-01-01
The application of a kernel function lifting surface method to three-dimensional, thin-wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.
SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriya, S; Sato, M; Tachibana, H
Purpose: Calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on the graphics processing unit (GPU). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and computed with multiple threads. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU was compared to that on the CPU, along with the accuracy of the PDD. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively; the GPU was thus 4800 times faster than the CPU. The PDD curve for the GPU matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, a kernel with coarser spacing would be more practical if the calculation is faster while keeping accuracy similar to a current treatment planning system.
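The TERMA-kernel superposition at the heart of this method is an ordinary 3D convolution, so a CPU reference version can be sketched with FFTs; the TERMA and kernel arrays below are invented toys, not the ideal KERMA kernel of the abstract.

```python
import numpy as np

def convolve_dose(terma, kernel):
    """Dose = TERMA convolved with an energy-deposition kernel; doing it
    in Fourier space makes the cost independent of the kernel support."""
    shape = [t + k - 1 for t, k in zip(terma.shape, kernel.shape)]
    full = np.fft.irfftn(np.fft.rfftn(terma, shape) * np.fft.rfftn(kernel, shape),
                         shape)
    # crop the full convolution back to the TERMA grid (kernel assumed centred)
    slices = tuple(slice(k // 2, k // 2 + t) for t, k in zip(terma.shape, kernel.shape))
    return full[slices]

terma = np.zeros((64, 64, 64)); terma[32, 32, 5:40] = 1.0    # toy pencil beam
r = np.indices((9, 9, 9)) - 4
kernel = np.exp(-np.sqrt((r**2).sum(axis=0)))                # toy kernel
dose = convolve_dose(terma, kernel)
```

The GPU implementation in the abstract parallelizes the equivalent per-voxel loops across threads rather than using FFTs, but the underlying superposition arithmetic is the same.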
Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2017-10-01
FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an RF field, the hot plasma dielectric response is limited to within a few particle Larmor radii of the magnetic field line passing through the test point. This localization of the hot plasma dielectric response results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present the initial results of modeling of RF waves using the FullWave code, including calculation of the nonlocal conductivity kernel in 2D tokamak geometry; the interpolation of the conductivity kernel from test points to the adaptive cloud of computational points; and the results of self-consistent simulations of 2D RF fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.
NASA Astrophysics Data System (ADS)
Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei
2014-10-01
Full wave 3-D modeling of RF fields in hot magnetized nonuniform plasma requires calculation of the nonlocal conductivity kernel describing the dielectric response of such plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point, which significantly simplifies numerical solution of the full wave 3-D problem. Preliminary results of a feasibility analysis of numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along the unperturbed particle orbits. Particle orbits in the nonuniform equilibrium magnetic field are calculated numerically by one of the Runge-Kutta methods. The RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals over the particle's initial velocity and time are then calculated numerically. Different optimization approaches to the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.
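A minimal sketch of the orbit-tracing step mentioned above, classical RK4 integration of a charged particle in a static nonuniform magnetic field, is given below in normalized units; the field model and all parameters are invented for illustration.

```python
import numpy as np

def lorentz_rhs(state, qm, B_func):
    """Equations of motion in a static magnetic field:
    dx/dt = v, dv/dt = (q/m) v x B(x)."""
    x, v = state[:3], state[3:]
    return np.concatenate([v, qm * np.cross(v, B_func(x))])

def rk4_orbit(x0, v0, qm, B_func, dt, steps):
    """Trace an unperturbed orbit with classical fourth-order Runge-Kutta."""
    s = np.concatenate([np.asarray(x0, float), np.asarray(v0, float)])
    orbit = [s.copy()]
    for _ in range(steps):
        k1 = lorentz_rhs(s, qm, B_func)
        k2 = lorentz_rhs(s + 0.5 * dt * k1, qm, B_func)
        k3 = lorentz_rhs(s + 0.5 * dt * k2, qm, B_func)
        k4 = lorentz_rhs(s + dt * k3, qm, B_func)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        orbit.append(s.copy())
    return np.array(orbit)

# Toy nonuniform field B = B0 (1 + x/L) z_hat; expect drifting gyro-orbits.
B = lambda x: np.array([0.0, 0.0, 1.0 + x[0] / 10.0])
orbit = rk4_orbit([0, 0, 0], [0.0, 1.0, 0.1], qm=1.0, B_func=B, dt=0.01, steps=2000)
```

In the kernel calculation these trajectories supply the integration path for the time integral of the linearized Vlasov solution.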
Flexibly imposing periodicity in kernel independent FMM: A multipole-to-local operator approach
NASA Astrophysics Data System (ADS)
Yan, Wen; Shelley, Michael
2018-02-01
An important but missing component in the application of the kernel independent fast multipole method (KIFMM) is the capability for flexibly and efficiently imposing singly, doubly, and triply periodic boundary conditions. In most popular packages such periodicities are imposed with the hierarchical repetition of periodic boxes, which may give an incorrect answer due to the conditional convergence of some kernel sums. Here we present an efficient method to properly impose periodic boundary conditions using a near-far splitting scheme. The near-field contribution is directly calculated with the KIFMM method, while the far-field contribution is calculated with a multipole-to-local (M2L) operator which is independent of the source and target point distribution. The M2L operator is constructed with the far-field portion of the kernel function to generate the far-field contribution with the downward equivalent source points in KIFMM. This method guarantees that the sum of the near-field and far-field contributions converges pointwise to results satisfying periodicity and compatibility conditions. The computational cost of the far-field calculation has the same O(N) complexity as the FMM and is designed to be small by reusing the data computed by KIFMM for the near-field. The far-field calculation requires no additional control parameters and observes the same theoretical error bound as KIFMM. We present accuracy and timing test results for the Laplace kernel in singly periodic domains and the Stokes velocity kernel in doubly and triply periodic domains.
NASA Astrophysics Data System (ADS)
Kargin, Vladislav
2018-06-01
We introduce a family of three-dimensional random point fields using the concept of the quaternion determinant. The kernel of each field is an n-dimensional orthogonal projection on a linear space of quaternionic polynomials. We find explicit formulas for the basis of the orthogonal quaternion polynomials and for the kernel of the projection. As the number of particles n → ∞, we calculate the scaling limits of the point field in the bulk and at the center of coordinates. We compare our construction with the previously introduced Fermi-sphere point field process.
Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair
2018-02-01
Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres, including contributions of both short- and long-lived contaminant radionuclides, while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted ⁹⁰Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time postremoval from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions: on the order of 0.4-2.8% at a radial distance of 5.43 mm as the microsphere diameter increases from 10 to 50 μm, during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. Absorbed dose kernels for ⁹⁰Y, ⁹⁰ᵐY, ⁸⁵ᵐSr, ⁸⁵Sr, ⁸⁷ᵐSr, ⁸⁹Sr, ⁷⁰Ga, ⁷²Ga, and ³¹Si are presented and used to determine an overall kernel for the microspheres based on weighted activities. The shapes of the absorbed dose kernels are dominated at short times postactivation by the contributions of ⁷⁰Ga and ⁷²Ga. Following decay of the short-lived contaminants, the absorbed dose kernel is effectively that of ⁹⁰Y. After approximately 1000 h postactivation, the contributions of ⁸⁵Sr and ⁸⁹Sr become increasingly dominant, though the absorbed dose rate around the beads drops by roughly four orders of magnitude. The introduction of high atomic number elements for the purpose of increasing radiopacity necessarily leads to the production of radionuclides other than ⁹⁰Y in the microspheres. Most of the radionuclides in this study are short-lived and are likely not of any significant concern for this therapeutic agent. The presence of small quantities of longer lived radionuclides will change the shape of the absorbed dose kernel around a microsphere at long time points postadministration, when activity levels are significantly reduced. © 2017 American Association of Physicists in Medicine.
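The final step described above, combining per-radionuclide kernels into one microsphere kernel by activity weighting, is a simple weighted sum; the sketch below uses placeholder radial kernels and weights rather than the simulated EGSnrc data.

```python
import numpy as np

def combined_kernel(kernels, activities):
    """Overall absorbed-dose kernel as the activity-weighted average of
    per-radionuclide kernels sampled on a common radial grid."""
    w = np.asarray(activities, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.asarray(kernels), axes=1)

# Placeholder kernel shapes for two nuclides on a radial grid in mm.
r = np.linspace(0.1, 10.0, 100)
k_y90 = np.exp(-r / 4.0) / r**2
k_ga72 = np.exp(-r / 2.0) / r**2
print(combined_kernel([k_y90, k_ga72], activities=[0.9, 0.1])[:3])
```

Re-evaluating the weights with each nuclide's decay factor at a given time postactivation reproduces the time dependence the abstract describes, where ⁷⁰Ga/⁷²Ga dominate early and ⁸⁵Sr/⁸⁹Sr late.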
Total Ambient Dose Equivalent Buildup Factor Determination for Nbs04 Concrete.
Duckic, Paulina; Hayes, Robert B
2018-06-01
Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation through a shielding material. The accuracy of the point kernel method is strongly affected by how well the analyzed parameters correspond to the experimental configuration; this work attempts to simplify establishing that correspondence. The point kernel method has not found widespread practical use for neutron shielding calculations due to the complex neutron transport behavior through shielding materials (i.e. the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as the non-linear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma ray transmission factors. The neutron and secondary gamma ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties. In this work, a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content was conducted; it showed that varying the water content in concrete has a significant impact on both. Finally, support vector regression, a machine learning technique, was employed to build a model on the calculated data for calculation of the buildup factors. The developed model can predict most of the data with 20% relative error.
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
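A one-dimensional sketch of the two-stage seed placement (a spectral shift to the nearest grid point plus a 4-tap third-order Lagrange fractional shift) is given below; it assumes a periodic grid and a precomputed kernel FFT, and all names and values are illustrative rather than the authors' implementation.

```python
import numpy as np

def lagrange3_weights(f):
    """Third-order Lagrange interpolation weights for a fractional shift
    f in [0, 1), acting on samples at integer offsets -1, 0, 1, 2."""
    return np.array([
        -f * (f - 1) * (f - 2) / 6,
        (f + 1) * (f - 1) * (f - 2) / 2,
        -(f + 1) * f * (f - 2) / 2,
        (f + 1) * f * (f - 1) / 6,
    ])

def place_seed(grid, kernel_ft, pos, spacing):
    """Add one seed kernel at a continuous position: the integer part is a
    circular shift applied as a Fourier phase (convolution with a unit
    impulse); the fractional part uses the Lagrange filter taps."""
    n = grid.size
    g = pos / spacing
    i, f = int(np.floor(g)), g - np.floor(g)
    for tap, offset in zip(lagrange3_weights(f), (-1, 0, 1, 2)):
        phase = np.exp(-2j * np.pi * np.fft.fftfreq(n) * (i + offset))
        grid += tap * np.fft.ifft(kernel_ft * phase).real
    return grid

n = 256
d = np.minimum(np.arange(n), n - np.arange(n))   # circular distance to index 0
kernel_ft = np.fft.fft(np.exp(-d / 5.0))         # toy seed kernel spectrum
dose = place_seed(np.zeros(n), kernel_ft, pos=10.37, spacing=0.5)
```

The fractional filter is what frees the seed position from the calculation grid, which is the point of the interpolation method.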
Development of FullWave : Hot Plasma RF Simulation Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei
2017-10-01
A full wave simulation tool modeling RF fields in hot inhomogeneous magnetized plasmas is being developed. The wave equations with linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space without limiting approximations by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of the finite differences for approximation of derivatives on an adaptive cloud of computational points; model and results of nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of hot plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.
Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko
2017-10-01
We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide treatment (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and in compact bone, were in good agreement with those in the literature obtained with other MC codes. PHITS provided reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.
Calculation of electron Dose Point Kernel in water with GEANT4 for medical application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.
2009-06-03
The rapid insertion of new technologies in medical physics in recent years, especially in nuclear medicine, has been followed by great development of faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that contains the tools to simulate particle transport through matter. In this work, GEANT4 was used to calculate the dose-point-kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between calculated DPKs and reported values is satisfactory.
Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V
2011-02-07
The GATE Monte Carlo simulation platform based on the GEANT4 toolkit has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.
Suitability of point kernel dose calculation techniques in brachytherapy treatment planning
Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.
2010-01-01
A brachytherapy treatment planning system (TPS) is necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effects of the tissue, applicator and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point taking care of only the contributions of individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. There is thus some degree of uncertainty in dose rate estimations under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, to suit clinical conditions. BrachyTPS is an interactive point kernel code package developed to perform independent dose rate calculations by taking into account the effect of these heterogeneities, using the two-region build-up factors proposed by Kalos. The primary aim of this study is to validate the developed point kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator and (iii) the Fletcher Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around the commercially available shielded vaginal applicator set (Nucletron). The percentage deviations of BrachyTPS-computed dose rate values from the MC results are within ±5.5% for the BRIT LDR applicator, vary from 2.6 to 5.1% for the Fletcher Green type LDR applicator, and are up to -4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with previously published results. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement with MC results in the unshielded region (less than two per cent deviation) than in the shielded region, where deviations up to five per cent are observed. The present study implies that accurate and fast validation of complicated treatment planning calculations is possible with the point kernel code package. PMID:20589118
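Stripped to its core, the independent check such a package performs is a point-kernel sum over sources with a buildup correction; the sketch below uses a toy linear buildup law standing in for the two-region Kalos factors, and every number is illustrative.

```python
import numpy as np

def dose_rate(point, seeds, strengths, mu, buildup):
    """Point-kernel dose rate at `point`: inverse-square, exponentially
    attenuated primaries from each source, times a buildup factor B(mu*r)."""
    p = np.asarray(point, float)
    total = 0.0
    for s, A in zip(seeds, strengths):
        r = np.linalg.norm(p - np.asarray(s, float))
        total += A * buildup(mu * r) * np.exp(-mu * r) / (4 * np.pi * r**2)
    return total

# Two toy sources on the applicator axis, dose point 2 cm away.
print(dose_rate([0, 0, 2.0], [[0, 0, 0], [0, 0, 0.5]], [1.0, 1.0],
                mu=0.11, buildup=lambda x: 1.0 + 0.9 * x))
```

Heterogeneity handling then amounts to replacing mu and the buildup law segment by segment along each source-to-point ray.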
NARMER-1: a photon point-kernel code with build-up factors
NASA Astrophysics Data System (ADS)
Visonneau, Thierry; Pangault, Laurence; Malouch, Fadhel; Malvagi, Fausto; Dolci, Florence
2017-09-01
This paper presents an overview of NARMER-1, the new generation photon point-kernel code developed by the Reactor Studies and Applied Mathematics Unit (SERMA) at the CEA Saclay Center. After a short introduction giving some historical background and the current development context of the code, the paper presents the principles implemented in the calculation and the physical quantities computed, and surveys the generic features: programming language, computer platforms, geometry package, sources description, etc. Moreover, specific and recent features are also detailed: exclusion sphere, tetrahedral meshes, parallel operations. Then some points about verification and validation are presented. Finally we present some tools that can help the user with operations such as visualization and pre-processing.
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains similar features to a theoretical damping kernel but not for a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
Development of full wave code for modeling RF fields in hot non-uniform plasmas
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2016-10-01
FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.
NASA Astrophysics Data System (ADS)
Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu
2018-03-01
During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which result in frequent changes to the structure and produce many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Then point kernels are generated by sampling within the approximate model, and once the material and radiometric attributes are input, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The effectiveness and accuracy of the proposed method were verified by simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code and experimental measurements.
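The Geometric-Progression buildup fit mentioned above has a standard closed form (the ANSI/ANS-6.4.3 parameterization); a sketch follows, with coefficient values that are placeholders rather than entries from fitted G-P tables.

```python
import math

def gp_buildup(mfp, b, c, a, xk, d):
    """Geometric-Progression buildup factor at a penetration depth given
    in mean free paths (mfp), from the five fitted coefficients."""
    t2 = math.tanh(-2.0)
    K = c * mfp**a + d * (math.tanh(mfp / xk - 2.0) - t2) / (1.0 - t2)
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * mfp
    return 1.0 + (b - 1.0) * (K**mfp - 1.0) / (K - 1.0)

# Placeholder coefficients; real values depend on material and photon energy.
print(gp_buildup(5.0, b=2.0, c=1.3, a=-0.1, xk=14.0, d=0.05))
```

Each point-kernel contribution is then multiplied by this factor before summation, exactly as in the shielded point-source kernel.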
Ford Motor Company NDE facility shielding design.
Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H
2005-01-01
Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation doses for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations and locations where multiple sources (e.g. tube head leakage and various scatter fields) impacted doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP - A general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.
Integrating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integral of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Dejun, E-mail: dejun.lin@gmail.com
2015-09-21
Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green's function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4-16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A software library based on this algorithm has been implemented in C++11 and has been released.
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
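The Cimmino step used for the source-strength inversion can be sketched as simultaneous projection onto dose-bound half-spaces with a non-negativity clip; the kernel matrix below is random stand-in data, and the equal row weighting is one simple variant rather than the authors' exact implementation.

```python
import numpy as np

def cimmino(A, lower, upper, iters=2000):
    """Find source weights w >= 0 with lower <= A @ w <= upper, where
    A[i, j] is the fluence kernel from source j to detector point i."""
    m, n = A.shape
    w = np.zeros(n)
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(iters):
        d = A @ w
        # signed residual toward the violated bound (zero when inside)
        viol = np.clip(lower - d, 0, None) - np.clip(d - upper, 0, None)
        w += (A.T @ (viol / row_norm2)) / m   # averaged row projections
        w = np.clip(w, 0, None)               # strengths stay non-negative
    return w

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (50, 8))            # toy fluence kernel matrix
w = cimmino(A, lower=0.9, upper=1.1)
print((A @ w).min(), (A @ w).max())
```

Averaging over all rows makes each sweep tolerant of inconsistent (infeasible) prescriptions, which is one reason Cimmino-type schemes are used for dose feasibility problems.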
Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris
2015-12-22
Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity measure and kernel approaches based on fixed bandwidths, cross-validated bandwidths including isotropic and anisotropic kernel functions, and adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on the moderate-to-vigorous physical activity (MVPA) of 400 2- to 9.9-year-old children who participated in the IDEFICS study. Models were stratified by sex and age group, i.e. pre-school children (2 to <6 years) and school children (6-9.9 years), and were adjusted for age, body mass index (BMI), education and safety concerns of parents, season and valid weartime of accelerometers. The association between intensity measures and MVPA differed strongly by network-distance, with stronger effects found for larger network-distances. The simple intensity measure revealed smaller effect estimates and smaller goodness-of-fit compared to kernel approaches. The smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found strong variation in the association between the built environment and PA of children depending on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial scales and kernel intensity methods might reduce methodological limitations in assessing opportunities for PA in the built environment.
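As a concrete illustration of the simplest kernel variant compared in the study, a fixed-bandwidth isotropic kernel intensity around one residence might look like the sketch below; it substitutes Euclidean distance for the street-network distance used in the paper, and the coordinates are invented.

```python
import numpy as np

def kernel_intensity(home, points, bandwidth, radius):
    """Kernel-weighted intensity of urban point characteristics (e.g.
    transit stations) within `radius` of a residence, using an isotropic
    Gaussian kernel with a fixed bandwidth."""
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(home, float), axis=1)
    d = d[d <= radius]
    return np.sum(np.exp(-0.5 * (d / bandwidth) ** 2)) / (2 * np.pi * bandwidth**2)

rng = np.random.default_rng(1)
transit = rng.uniform(0, 2000, (300, 2))        # toy station coordinates (m)
print(kernel_intensity([1000, 1000], transit, bandwidth=250.0, radius=1000.0))
```

The simple intensity measure roughly corresponds to replacing the Gaussian weight with a plain count over the neighborhood, which is consistent with the noisier behavior across scales reported above.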
A point kernel algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan
2017-11-01
Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy where the treatment field is spatially fractionated into arrays of a few tens of micrometre wide planar beams of unusually high peak doses, separated by low dose regions several hundred micrometres wide. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far dose calculations in MRT, a prerequisite for future clinical applications, have been based on Monte Carlo simulations. However, they are computationally expensive, since scoring volumes have to be small. In this article a kernel based dose calculation algorithm is presented that splits the calculation into photon and electron mediated energy transport, and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are calculated analytically depending on the energy spectrum and material composition. In various homogeneous materials peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.
Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code
NASA Astrophysics Data System (ADS)
Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo
2016-10-01
FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. It is based on a meshless formulation in configuration space with adapted cloud of computational points (CCP) capability and using the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution to the full wave equations in the cold-plasma model in tokamak geometry for ECRH and ICRH range of frequencies, and 6) the solution to the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on adaptive CCP to solve the wave equations and on implementing the non-local hot plasma dielectric response to the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.
NASA Astrophysics Data System (ADS)
Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier
2016-01-01
This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work a formalism was developed for ideal flat fluences exiting the linac's head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis where any spatially varying fluence can be used, such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated with the kernel derivation with this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The 3D kernel for a FFF beam was obtained by deconvolution using the Hankel transform. A correction on the low dose part of the kernel was performed to reproduce the experimental output factors accurately. Uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically in different treatment localizations were irradiated at four measurement depths (total of fifty-four film measurements). Comparison through the gamma-index to their corresponding calculated absolute dose distributions showed a number of passing points (3%, 3 mm) mostly above 99%. This new procedure is more reliable and robust than the previous one. Its ability to perform accurate independent dose calculations was demonstrated.
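The deconvolution step itself can be sketched in a few lines: with dose = kernel ⊛ fluence at each depth, the kernel follows by Fourier division with some regularisation. The version below uses a plain 2-D FFT with a Tikhonov-style term in place of the paper's Hankel-transform treatment, and both the input arrays and the regularisation constant are assumptions.

```python
import numpy as np

def deconvolve_kernel(dose2d, fluence2d, eps=1e-3):
    """Recover a pencil-beam kernel from a measured dose plane and the
    (spatially varying) fluence map by regularised Fourier deconvolution."""
    D = np.fft.fft2(dose2d)
    F = np.fft.fft2(np.fft.ifftshift(fluence2d))   # fluence centred in array
    K = D * np.conj(F) / (np.abs(F) ** 2 + eps * np.abs(F).max() ** 2)
    return np.fft.fftshift(np.fft.ifft2(K).real)
```

Handling the actual FFF fluence inside this division, rather than assuming a flat one, is precisely the improvement the abstract claims.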
Oil point and mechanical behaviour of oil palm kernels in linear compression
NASA Astrophysics Data System (ADS)
Kabutey, Abraham; Herak, David; Choteborsky, Rostislav; Mizera, Čestmír; Sigalingging, Riswanti; Akangbe, Olaosebikan Layi
2017-07-01
The study described the oil point and mechanical properties of roasted and unroasted bulk oil palm kernels under compression loading, on which the available literature is very limited. A universal compression testing machine and a vessel of 60 mm diameter with a plunger were used, applying a maximum force of 100 kN and speeds ranging from 5 to 25 mm min⁻¹. The initial pressing height of the bulk kernels was measured at 40 mm. The oil point was determined by a litmus test for each deformation level of 5, 10, 15, 20, and 25 mm at the minimum speed of 5 mm min⁻¹. The measured parameters were the deformation, deformation energy, oil yield, oil point strain and oil point pressure. Clearly, the roasted bulk kernels required less deformation energy than the unroasted kernels for recovering the kernel oil. However, neither was permanently deformed. The average oil point strain was determined at 0.57. The study is an essential contribution to pursuing innovative methods for processing palm kernel oil in rural areas of developing countries.
Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun
2018-05-17
This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.
1971-01-01
Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear propelled vehicle. The analytical techniques employed include cross-section data preparation, one- and two-dimensional discrete ordinates transport, point kernel, and single-scatter methods.
A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.
Bartzsch, Stefan; Oelfke, Uwe
2013-11-01
The advent of widespread kV cone-beam computed tomography in image guided radiation therapy and special therapeutic applications of keV photons, e.g., in microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations by far more challenging than the ones established for corresponding MeV beams. This is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy that is almost comparable to that of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The analytically derived dose kernels in water showed excellent agreement with the Monte Carlo method: calculated values deviate less than 5% from Monte Carlo derived dose values, for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution based algorithms dose calculation times can be reduced to a few minutes.
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects from the "fat tails" of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risks in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the "noise kernel", an ellipsoidal cloud of points in the space of asset returns. The size of the ellipsoid is controlled with a threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of the fluctuation risk, while the data points falling outside the kernel are used for the calculation of the drawdown risk. As a result the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risk involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising the variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.
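A sketch of the kernel/tail split follows: return vectors inside a Mahalanobis ellipsoid feed the fluctuation-risk covariance, and the points outside it feed the drawdown-risk estimate. The fat-tailed returns here are synthetic, and the plain covariance of the tail points is only one possible drawdown measure.

```python
import numpy as np

def split_risks(returns, threshold=2.0):
    """Split return vectors into the Gaussian 'noise kernel' (Mahalanobis
    distance <= threshold) and the tail set used for drawdown risk."""
    X = np.asarray(returns, float)
    mu = X.mean(axis=0)
    inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', X - mu, inv, X - mu)  # squared Mahalanobis
    inside = d2 <= threshold**2
    fluct_cov = np.cov(X[inside], rowvar=False)   # fluctuation (Gaussian) risk
    tail_cov = np.cov(X[~inside], rowvar=False)   # drawdown (tail) risk
    return fluct_cov, tail_cov

rng = np.random.default_rng(2)
rets = rng.standard_t(df=3, size=(1000, 3)) * 0.01   # fat-tailed toy returns
fluct, tail = split_risks(rets)
```

Raising the threshold grows the ellipsoid, so more of the return history is classified as normal fluctuation, exactly as described above.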
NASA Astrophysics Data System (ADS)
Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc
2018-05-01
We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.
Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.
2013-01-01
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
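To make the density-scaling simplification discussed above concrete, a one-ray sketch is shown below: the water kernel is looked up at the radiological (density-weighted) distance and weighted by the local density. This is a schematic of the traditional approach the paper improves upon, with invented arrays standing in for real kernel data.

```python
import numpy as np

def density_scaled_ray(kernel_water, r_water, density, dr):
    """Traditional C/S heterogeneity handling along one ray: evaluate the
    water kernel at the radiological distance and weight by local density,
    instead of using a material-specific kernel."""
    rad_depth = np.cumsum(density) * dr                  # radiological distance
    return density * np.interp(rad_depth, r_water, kernel_water)

r_water = np.linspace(0.05, 10.0, 200)                   # cm
kernel_water = np.exp(-0.2 * r_water) / r_water**2       # toy water EDK profile
density = np.concatenate([np.ones(40), 4.5 * np.ones(10), np.ones(50)])  # Ti slab
print(density_scaled_ray(kernel_water, r_water, density, dr=0.1)[:5])
```

The titanium results above quantify how far this approximation drifts from a true material-specific kernel, which redistributes energy laterally rather than forward.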
Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, L.L.; Hendricks, J.S.
1983-01-01
The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
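The distance-to-collision part of the exponential transform is easy to illustrate with importance sampling. Below is a minimal sketch assuming a purely absorbing 1D slab, so the scattering-kernel biasing that is this paper's actual subject does not arise; the transform parameter and slab dimensions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, thick = 1.0, 10.0     # cross section (1/cm), slab thickness (cm)
p = 0.7                      # transform parameter (0 recovers analog), assumed
n = 100_000

# Analog: the transmission probability is exp(-sigma*thick) ~ 4.5e-5,
# so almost no analog histories score.
s = rng.exponential(1.0 / sigma, n)
analog = (s > thick).mean()

# Exponential transform: bias the distance-to-collision kernel by
# stretching the mean free path toward the detector, sigma_b = sigma*(1-p),
# and correct each score by the ratio of true to biased pdfs.
sigma_b = sigma * (1.0 - p)
s_b = rng.exponential(1.0 / sigma_b, n)
weights = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * s_b)
biased = np.where(s_b > thick, weights, 0.0).mean()

print(f"exact   {np.exp(-sigma*thick):.3e}")
print(f"analog  {analog:.3e}")
print(f"biased  {biased:.3e}  (far lower variance per history)")
```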
Graviton 1-loop partition function for 3-dimensional massive gravity
NASA Astrophysics Data System (ADS)
Gaberdiel, Matthias R.; Grumiller, Daniel; Vassilevich, Dmitri
2010-11-01
The graviton 1-loop partition function in Euclidean topologically massive gravity (TMG) is calculated using heat kernel techniques. The partition function does not factorize holomorphically, and at the chiral point it has the structure expected from a logarithmic conformal field theory. This gives strong evidence for the proposal that the dual conformal field theory to TMG at the chiral point is indeed logarithmic. We also generalize our results to new massive gravity.
NASA Astrophysics Data System (ADS)
Voytishek, Anton V.; Shipilov, Nikolay M.
2017-11-01
In this paper, we systematize numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind. Three types of such algorithms are distinguished: projection, mesh, and projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. A disadvantage of the mesh algorithms is identified: they require evaluating the kernels of the integral equations at fixed points. In practice, these kernels have integrable singularities, and evaluating them at such points is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
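A minimal sketch of the random-walk (collision) estimator underlying such randomized algorithms, using an invented smooth kernel K(x,y) = xy and free term f(x) = x whose exact solution is known; real applications would face the singular kernels discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fredholm equation of the second kind on [0,1]:
#   phi(x) = f(x) + \int_0^1 K(x,y) phi(y) dy,
# with the toy choices K(x,y) = x*y and f(x) = x, whose exact
# solution is phi(x) = 1.5*x (assumptions for this illustration).
f = lambda x: x
K = lambda x, y: x * y

def phi_mc(x0, n_walks=100_000, q=0.3):
    """Collision estimator over the Neumann series: a random walk with
    termination probability q, sampling y uniformly (pdf = 1 on [0,1])."""
    total = 0.0
    for _ in range(n_walks):
        x, w = x0, 1.0
        total += w * f(x)
        while rng.random() > q:          # continue the walk
            y = rng.random()             # y ~ U(0,1)
            w *= K(x, y) / (1.0 - q)     # kernel weight / survival prob
            x = y
            total += w * f(x)
    return total / n_walks

print(phi_mc(0.5), "vs exact", 1.5 * 0.5)
```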
Technical Note: Dose gradients and prescription isodose in orthovoltage stereotactic radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, Jessica M., E-mail: fagerstrom@wisc.edu; Bender, Edward T.; Culberson, Wesley S.
Purpose: The purpose of this work is to examine the trade-off between prescription isodose and dose gradients in orthovoltage stereotactic radiosurgery. Methods: Point energy deposition kernels (EDKs) describing photon and electron transport were calculated using Monte Carlo methods. EDKs were generated from 10 to 250 keV, in 10 keV increments. The EDKs were converted to pencil beam kernels and used to calculate dose profiles through isocenter from a 4π isotropic delivery from all angles of circularly collimated beams. Monoenergetic beams and an orthovoltage polyenergetic spectrum were analyzed. The dose gradient index (DGI) is the ratio of the 50% prescription isodose volume to the 100% prescription isodose volume and represents a metric by which dose gradients in stereotactic radiosurgery (SRS) may be evaluated. Results: Using the 4π dose profiles calculated using pencil beam kernels, the relationship between DGI and prescription isodose was examined for circular cones ranging from 4 to 18 mm in diameter and monoenergetic photon beams with energies ranging from 20 to 250 keV. Values were found to exist for prescription isodose that optimize DGI. Conclusions: The relationship between DGI and prescription isodose was found to be dependent on both field size and energy. Examining this trade-off is an important consideration for designing optimal SRS systems.
NASA Astrophysics Data System (ADS)
Bally, B.; Duguet, T.
2018-02-01
Background: State-of-the-art multi-reference energy density functional calculations require the computation of norm overlaps between different Bogoliubov quasiparticle many-body states. It is only recently that the efficient and unambiguous calculation of such norm kernels has become available in the form of Pfaffians [L. M. Robledo, Phys. Rev. C 79, 021302 (2009), 10.1103/PhysRevC.79.021302]. Recently developed particle-number-restored Bogoliubov coupled-cluster (PNR-BCC) and particle-number-restored Bogoliubov many-body perturbation (PNR-BMBPT) ab initio theories [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] make use of generalized norm kernels incorporating explicit many-body correlations. In PNR-BCC and PNR-BMBPT, the Bogoliubov states involved in the norm kernels differ specifically via a global gauge rotation. Purpose: The goal of this work is threefold. We wish (i) to propose and implement an alternative to the Pfaffian method to compute unambiguously the norm overlap between arbitrary Bogoliubov quasiparticle states, (ii) to extend the first point to explicitly correlated norm kernels, and (iii) to scrutinize the analytical content of the correlated norm kernels employed in PNR-BMBPT. Point (i) constitutes the purpose of the present paper while points (ii) and (iii) are addressed in a forthcoming paper. Methods: We generalize the method used in another work [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] in such a way that it is applicable to kernels involving arbitrary pairs of Bogoliubov states. The formalism is presented in detail for the case of the uncorrelated overlap between arbitrary Bogoliubov states. The power of the method is numerically illustrated and benchmarked against known results on the basis of toy models of increasing complexity. Results: The norm overlap between arbitrary Bogoliubov product states is obtained as a closed-form expression allowing its computation without any phase ambiguity. The formula is physically intuitive, accurate, and versatile. It applies equally to norm overlaps between Bogoliubov states of even or odd number parity. Numerical applications illustrate these features and provide a transparent representation of the content of the norm overlaps. Conclusions: The complex norm overlap between arbitrary Bogoliubov states is computed, without any phase ambiguity, via elementary linear algebra operations. The method can be used in any configuration mixing of orthogonal and non-orthogonal product states. Furthermore, the closed-form expression extends naturally to correlated overlaps at play in PNR-BCC and PNR-BMBPT. As such, the straight overlap between Bogoliubov states is the zero-order reduction of more involved norm kernels to be studied in a forthcoming paper.
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest the use of kernel k-means sampling, which we show minimizes the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
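A small self-contained sketch of the idea, not the authors' implementation: run Lloyd-style kernel k-means directly on the Gram matrix, take the point nearest each cluster mean in feature space as a landmark, and build the standard Nyström approximation from those landmarks. The data, kernel width, and cluster count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
gamma = 0.5

def gaussian_kernel(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = gaussian_kernel(X, X)

def kernel_kmeans(K, k=20, iters=20):
    """Lloyd's iterations directly in kernel space:
    ||phi_i - m_c||^2 = K_ii - 2*mean_j K_ij + mean_jl K_jl over cluster c."""
    n = K.shape[0]
    labels = rng.integers(k, size=n)
    for _ in range(iters):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                dist[:, c] = np.inf
                continue
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, idx].mean(1)
                          + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(1)
    return labels, dist   # consistent once the assignment has converged

labels, dist = kernel_kmeans(K)
# Landmarks: the point of each cluster closest (in feature space) to its mean.
landmarks = np.array([np.flatnonzero(labels == c)[
    dist[labels == c, c].argmin()] for c in np.unique(labels)])

# Standard Nystrom approximation K ~ C W^+ C^T built on those landmarks.
C = K[:, landmarks]
W = K[np.ix_(landmarks, landmarks)]
K_approx = C @ np.linalg.pinv(W) @ C.T
err = np.linalg.norm(K - K_approx, "fro") / np.linalg.norm(K, "fro")
print(f"relative Frobenius error: {err:.4f}")
```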
Out-of-Sample Extensions for Non-Parametric Kernel Methods.
Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang
2017-02-01
Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
LoCoH: Non-parameteric kernel methods for constructing home ranges and utilization distributions
Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.
2007-01-01
Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: ‘‘fixed sphere-of-influence,’’ or r -LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an ‘‘adaptive sphere-of-influence,’’ or a -LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a ), and compare them to the original ‘‘fixed-number-of-points,’’ or k -LoCoH (all kernels constructed from k -1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a -LoCoH is generally superior to k - and r -LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
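A minimal r-LoCoH sketch, not the authors' software: it assumes the shapely library and invented GPS fixes, builds one local convex hull per root point from all fixes within radius r, then accumulates hulls from smallest to largest into a 95% isopleth.

```python
import numpy as np
from shapely.geometry import MultiPoint, Point
from shapely.ops import unary_union

rng = np.random.default_rng(3)
# Toy GPS fixes: two patches of use separated by an unused gap ("hole").
pts = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal((6, 0), 1, (150, 2))])
r = 1.2   # sphere-of-influence radius (same units as the fixes), assumed

# r-LoCoH: one local convex hull per root point, built from all fixes
# within distance r of that root.
hulls = []
for root in pts:
    nbrs = pts[np.linalg.norm(pts - root, axis=1) <= r]
    if len(nbrs) >= 3:                       # need >= 3 points for a polygon
        hulls.append(MultiPoint(nbrs).convex_hull)

# Isopleth construction: accumulate hulls from smallest to largest until
# the union covers the desired fraction of fixes (here a 95% isopleth).
hulls.sort(key=lambda h: h.area)
union = None
for h in hulls:
    union = h if union is None else unary_union([union, h])
    covered = sum(union.covers(Point(p)) for p in pts)
    if covered / len(pts) >= 0.95:
        break

print(f"95% isopleth area: {union.area:.2f} (from {len(hulls)} local hulls)")
```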
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Followill, D; Howell, R
2015-06-15
Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.
Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA
NASA Astrophysics Data System (ADS)
Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude
2017-05-01
When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections, extracted from the CPA100 MCTS code, has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs that were simulated with "Geant4-DNA-CPA100": the first set using Geant4's default settings, and the second using the original CPA100 code's default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference between double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths and the accuracy of predicting the double Gaussian model deteriorated at the low-dose bump that appeared at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of calculations for the absolute dose at the center of the SOBP varied with irradiation conditions and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend what influence the double and triple Gaussian kernel models had on the accuracy of dose calculations.
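The difference between the two kernel models is easy to reproduce on synthetic data: fit a sum of two or three zero-centered Gaussians to a lateral profile that carries a low-dose halo. The profile parameters below are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic lateral dose profile of a pencil beam at one depth: a narrow
# core plus broad, low-amplitude halo components (values are illustrative).
r = np.linspace(-50, 50, 401)                      # mm
profile = (np.exp(-r**2 / (2 * 3.0**2))
           + 0.05 * np.exp(-r**2 / (2 * 12.0**2))
           + 0.005 * np.exp(-r**2 / (2 * 30.0**2)))

def n_gauss(r, *p):
    """Sum of len(p)//2 zero-centered Gaussians: (amplitude, sigma) pairs."""
    return sum(a * np.exp(-r**2 / (2 * s**2))
               for a, s in zip(p[0::2], p[1::2]))

p2, _ = curve_fit(n_gauss, r, profile, p0=[1, 3, 0.05, 15])
p3, _ = curve_fit(n_gauss, r, profile, p0=[1, 3, 0.05, 15, 0.01, 40])

for name, p in [("double", p2), ("triple", p3)]:
    resid = profile - n_gauss(r, *p)
    print(f"{name} Gaussian: max residual {np.abs(resid).max():.2e}")
# The double Gaussian underfits the low-dose envelope; that residual
# accumulates once hundreds or thousands of spots are summed in a field.
```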
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang
2018-04-01
The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel show a long memory time, especially in cases of slow bath, weak system-bath coupling, and low temperature. Effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.
A new discriminative kernel from probabilistic models.
Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert
2002-10-01
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel derived; from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5
NASA Astrophysics Data System (ADS)
Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.
2018-02-01
Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.
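The way such kernels are applied is a level-by-level multiply-and-sum. The sketch below uses random stand-ins for the temperature kernel and the warming pattern (the real fields come from the dataset linked above) just to show the arithmetic of the decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
nlat, nlev = 90, 30

# Toy stand-ins for a temperature radiative kernel (W m^-2 K^-1 per level)
# and a modeled warming pattern; real kernels come from the dataset above.
kernel_T = -np.abs(rng.normal(0.1, 0.03, (nlat, nlev)))   # LW cooling to space
dT = 2.0 + rng.normal(0, 0.3, (nlat, nlev))               # K of warming

# Kernel decomposition: the radiative response attributable to the
# temperature change is the level-by-level product, summed vertically.
dR_T = (kernel_T * dT).sum(axis=1)          # W m^-2 per latitude band

# Area-weighted global mean over latitude.
lats = np.linspace(-89, 89, nlat)
w = np.cos(np.deg2rad(lats))
print(f"global-mean temperature term: {np.average(dR_T, weights=w):+.2f} W m^-2")
```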
DOE Office of Scientific and Technical Information (OSTI.GOV)
Botta, F; Di Dia, A; Pedroli, G
The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data; the dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often chosen. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%-97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, which can be attributed to the different simulation algorithms. When considering the beta spectra, the discrepancies decrease notably: within 0.9·X90, fluka and penelope differ by less than 1% in water and less than 2% in bone for any of the isotopes considered here. Complete data for the fluka DPKs are given as Supplementary Material as a tool to perform dosimetry by analytical point kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can make FMT reconstruction more efficient and robust, and we have developed a kernel method that introduces such guidance into FMT easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process: we convert the FMT reconstruction problem into a kernel coefficient reconstruction problem, and the desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
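A toy version of the kernelized forward model, with invented dimensions and random stand-ins for the sensitivity matrix and anatomical features (the paper's finite-element and micro-CT specifics are omitted): build a k-nearest-neighbor Gaussian kernel matrix from the anatomy, fold it into the system matrix, solve for the kernel coefficients, and recover the concentration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_meas, n_feat = 400, 120, 3

# Stand-ins: anatomical feature vectors per finite-element node
# (e.g., local micro-CT intensities) and an FMT sensitivity matrix A.
feats = rng.normal(size=(n_nodes, n_feat))
A = rng.normal(size=(n_meas, n_nodes)) / n_nodes

# Kernel matrix from the anatomy: Gaussian kernel, sparsified to the
# k nearest neighbors of each node so structure guides the reconstruction.
sigma, k = 1.0, 8
d2 = ((feats[:, None] - feats[None, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * sigma**2))
keep = np.argsort(-K, axis=1)[:, :k]
mask = np.zeros_like(K, dtype=bool)
np.put_along_axis(mask, keep, True, axis=1)
K = np.where(mask | mask.T, K, 0.0)

# Kernelized forward model: x = K @ alpha, so b = (A @ K) @ alpha.
x_true = rng.random(n_nodes)
b = A @ x_true
AK = A @ K
alpha = np.linalg.lstsq(AK, b, rcond=None)[0]   # stand-in for the iterative solver
x_rec = K @ alpha
print("forward-model residual:", np.linalg.norm(A @ x_rec - b))
```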
NASA Astrophysics Data System (ADS)
Ghadiriyan Arani, M.; Pahlavani, P.; Effati, M.; Noori Alamooti, F.
2017-09-01
Road traffic crashes, especially on highways, are a social problem affecting many lives. This paper focuses on the highways of Atlanta, the capital and most populous city of the U.S. state of Georgia and the center of the ninth-largest metropolitan area in the United States. Geographically weighted regression (GWR) and general centrality criteria are the analytical tools used in this article. In the first step, to estimate crash intensity, the dual graph of the street and highway network is extracted so that general centrality criteria can be applied. From this graph, the criteria computed are: degree, PageRank, random walk, eccentricity, closeness, betweenness, clustering coefficient, eigenvector, and straightness. The crash intensity of each highway is obtained by dividing the number of crashes on that highway by the total number of crashes. The criteria and crash intensities were then normalized, and the correlations between them were calculated to select criteria that are not dependent on each other. The proposed hybrid approach suits this regression problem because these effective measures yield a more desirable output. The R2 value for geographically weighted regression was 0.539 using the Gaussian kernel and 0.684 using a tricube kernel. The results showed that the tricube kernel is better for modeling crash intensity.
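The GWR step can be sketched directly: a weighted least-squares fit at every location, with either a Gaussian or a tricube kernel setting the weights. Everything below (coordinates, covariates, bandwidth) is synthetic, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
xy = rng.uniform(0, 10, (n, 2))                  # highway-segment coordinates
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])     # centrality criteria
beta_true = np.array([0.5, 1.0, -0.7]) * (1 + xy[:, :1] / 10)  # spatially varying
y = (X * beta_true).sum(1) + rng.normal(0, 0.1, n)             # crash intensity

def gwr(xy, X, y, h, kernel="gaussian"):
    """Local weighted least squares at every location (bandwidth h)."""
    preds = np.empty(len(y))
    for i in range(len(y)):
        d = np.linalg.norm(xy - xy[i], axis=1)
        if kernel == "gaussian":
            w = np.exp(-(d / h) ** 2 / 2)
        else:                                    # tricube, compact support
            w = np.where(d < h, (1 - (d / h) ** 3) ** 3, 0.0)
        Xw = X * w[:, None]
        beta = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]
        preds[i] = X[i] @ beta
    return preds

for kern in ("gaussian", "tricube"):
    p = gwr(xy, X, y, h=2.0, kernel=kern)
    r2 = 1 - ((y - p) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    print(f"{kern}: in-sample R^2 = {r2:.3f}")
```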
NASA Technical Reports Server (NTRS)
Bland, S. R.
1982-01-01
Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time-dependent terms are omitted from the governing equations. Kernel functions are derived for two-dimensional subsonic flow that provide accurate solutions of the linearized potential equation with the same time-dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low-frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low-frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.
General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.
Marmarelis, V Z; Berger, T W
2005-07-01
This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M
Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an amorphous-silicon electronic portal imaging device (EPID). Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels, and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm{sup 3}) was done to verify the algorithm implementation. A gamma index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was performed. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10{sup −4} cm{sup 2} and the effective linear attenuation coefficient is µAC = 0.06084 cm{sup −1}. Within the 50% isodose surface, 95% of the evaluated points had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that elaborated for pre-treatment dose verification; therefore, a simpler method must be investigated. The accuracy of this method should be improved by modifying the algorithm in order to compare lower isodose curves.
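The gamma evaluation used here is a standard construct; a brute-force 2D version with toy dose planes and the same 3%/3 mm global criteria might look like this.

```python
import numpy as np

def gamma_index(ref, eval_, spacing, dd=0.03, dta=3.0):
    """Global 2D gamma (dose-difference dd, distance-to-agreement dta in mm),
    brute force over a small search window around each reference point."""
    ny, nx = ref.shape
    win = int(np.ceil(2 * dta / spacing))           # search half-width in pixels
    norm = dd * ref.max()                           # global normalization
    gamma = np.full(ref.shape, np.inf)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - win), min(ny, j + win + 1)
            i0, i1 = max(0, i - win), min(nx, i + win + 1)
            jj, ii = np.mgrid[j0:j1, i0:i1]
            r2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing**2
            d2 = (eval_[j0:j1, i0:i1] - ref[j, i]) ** 2
            gamma[j, i] = np.sqrt((r2 / dta**2 + d2 / norm**2).min())
    return gamma

# Toy dose planes (a smooth peak, slightly shifted) on a 1 mm grid.
y, x = np.mgrid[0:60, 0:60]
ref = np.exp(-((x - 30) ** 2 + (y - 30) ** 2) / 200.0)
ev = np.exp(-((x - 31) ** 2 + (y - 30) ** 2) / 200.0)
g = gamma_index(ref, ev, spacing=1.0)
print(f"pass rate (gamma <= 1): {100 * (g <= 1).mean():.1f}%")
```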
SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.
Huang, J; Childress, N; Kry, S
2012-06-01
To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition (
2013-03-01
[Table fragment: time (ms) / GFLOPS / % of GPU peak. Cascade Gaussian Filtering: 13, 45.19, 6.3. Difference of Gaussian: 0.512, 152…] values for the GPU-targeted actor implementations in terms of Giga Floating Point Operations Per Second (GFLOPS). Our GFLOPS calculation for an actor… kernels. The results for GFLOPS are provided in the table above. The actors were implemented on an NVIDIA GTX260 GPU, which provides 715 GFLOPS as peak
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
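The computational virtue of circulant structure is that matrix-vector products and eigenvalues come from FFTs. Below is a one-level illustration under simplifying assumptions (the paper uses multilevel circulant matrices; here a stationary kernel on a uniform 1D grid is assumed, so the exact kernel matrix is Toeplitz and the circulant version wraps it around the grid).

```python
import numpy as np

n = 512
x = np.arange(n) / n
k = lambda t: np.exp(-40.0 * t**2)           # stationary (Gaussian) kernel

# Exact kernel matrix (Toeplitz, since the kernel depends on x_i - x_j).
K = k(x[:, None] - x[None, :])

# One-level circulant approximation: wrap distances around the grid so the
# first column defines the whole matrix; its eigenvalues are FFT(first col).
c = k(np.minimum(x, 1 - x))                  # circulant first column
lam = np.fft.fft(c)                          # eigenvalues of C

v = np.random.default_rng(0).normal(size=n)
Kv = K @ v                                   # O(n^2) exact product
Cv = np.fft.ifft(lam * np.fft.fft(v)).real   # O(n log n) circulant product
print("relative matvec error:",
      np.linalg.norm(Kv - Cv) / np.linalg.norm(Kv))
```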
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Han, B; Xing, L
2016-06-15
Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and off-axis kernels representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10{sup 6} voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory-based deliveries. This research was partially supported by Varian.
USDA-ARS?s Scientific Manuscript database
The ionome, or elemental profile, of a maize kernel represents at least two distinct ideas. First, the collection of elements within the kernel are food, feed and feedstocks for people, animals and industrial processes. Second, the ionome of the kernel represents a developmental end point that can s...
NASA Technical Reports Server (NTRS)
Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noeel, Stefan
2011-01-01
The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The kernel linearity is tested. Errors in the kernel due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to different climate parameter changes in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both model and data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean compared to observations are about 0.001, and the sampling error is likely a major component.
NASA Astrophysics Data System (ADS)
Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen
2016-09-01
We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
Generalization of the subsonic kernel function in the s-plane, with applications to flutter analysis
NASA Technical Reports Server (NTRS)
Cunningham, H. J.; Desmarais, R. N.
1984-01-01
A generalized subsonic unsteady aerodynamic kernel function, valid for both growing and decaying oscillatory motions, is developed and applied in a modified flutter analysis computer program to solve the boundaries of constant damping ratio as well as the flutter boundary. Rates of change of damping ratios with respect to dynamic pressure near flutter are substantially lower from the generalized-kernel-function calculations than from the conventional velocity-damping (V-g) calculation. A rational function approximation for aerodynamic forces used in control theory for s-plane analysis gave rather good agreement with kernel-function results, except for strongly damped motion at combinations of high (subsonic) Mach number and reduced frequency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick, Christopher E., E-mail: chripa@fysik.dtu.dk; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk
2015-09-14
We present calculations of the correlation energies of crystalline solids and isolated systems within the adiabatic-connection fluctuation-dissipation formulation of density-functional theory. We perform a quantitative comparison of a set of model exchange-correlation kernels originally derived for the homogeneous electron gas (HEG), including the recently introduced renormalized adiabatic local-density approximation (rALDA) and also kernels which (a) satisfy known exact limits of the HEG, (b) carry a frequency dependence, or (c) display a 1/k{sup 2} divergence for small wavevectors. After generalizing the kernels to inhomogeneous systems through a reciprocal-space averaging procedure, we calculate the lattice constants and bulk moduli of a test set of 10 solids consisting of tetrahedrally bonded semiconductors (C, Si, SiC), ionic compounds (MgO, LiCl, LiF), and metals (Al, Na, Cu, Pd). We also consider the atomization energy of the H{sub 2} molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA’s tendency to overestimate the magnitude of the correlation energy whilst maintaining a high-accuracy description of structural properties.
78 FR 66649 - Spirotetramat; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-06
... regulation establishes tolerances for residues of spirotetramat in or on corn, sweet, kernel plus cob with... tolerance for residues of the insecticide spirotetramat in or on corn, sweet kernel plus cob with husks..., calculated as the stoichiometric equivalent of spirotetramat, in or on corn, sweet, kernel plus cob with...
[Spatial analysis of road traffic accidents with fatalities in Spain, 2008-2011].
Gómez-Barroso, Diana; López-Cuadrado, Teresa; Llácer, Alicia; Palmera Suárez, Rocío; Fernández-Cuenca, Rafael
2015-09-01
To estimate the areas of greatest density of road traffic accidents with fatalities within 24 hours per km(2)/year in Spain from 2008 to 2011, using a geographic information system. Accidents were geocoded using the road and kilometer points where they occurred. The average nearest-neighbor distance was calculated to detect possible clusters and to obtain the bandwidth for kernel density estimation. A total of 4775 accidents were analyzed, of which 73.3% occurred on conventional roads. The estimated average distance between accidents was 1,242 meters, and the average expected distance was 10,738 meters. The nearest neighbor index was 0.11, indicating that accidents were spatially clustered. A kernel density map was obtained with a resolution of 1 km(2), which identified the areas of highest density. This methodology allowed a better approximation to locating accident risks by taking kilometer points into account. The map shows areas with a greater density of accidents, which could aid decision-making by the relevant authorities. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
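The bandwidth-from-nearest-neighbor workflow can be sketched with synthetic crash coordinates. The scalar bandwidth passed to gaussian_kde below is a crude stand-in for the study's GIS implementation, not its actual estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# Toy geocoded crash locations (km): two high-density corridors.
pts = np.vstack([rng.normal((10, 10), (4.0, 0.3), (300, 2)),
                 rng.normal((10, 10), (0.3, 4.0), (200, 2))])

# Average nearest-neighbour distance, used here to pick the KDE bandwidth.
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
np.fill_diagonal(d, np.inf)
ann = d.min(axis=1).mean()
print(f"average nearest-neighbour distance: {ann*1000:.0f} m")

# Kernel density on a 1 km^2 grid; bandwidth factor scaled from ANN distance.
kde = gaussian_kde(pts.T, bw_method=ann / pts.std())
gx, gy = np.mgrid[0:20:21j, 0:20:21j]
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
hot = np.unravel_index(density.argmax(), density.shape)
print(f"highest-density cell: ({gx[hot]:.0f} km, {gy[hot]:.0f} km)")
```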
GRAYSKY-A new gamma-ray skyshine code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witts, D.J.; Twardowski, T.; Watmough, M.H.
1993-01-01
This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors.
Chaikh, Abdulhamid; Balosso, Jacques
2016-12-01
To apply bootstrap statistical analysis and dosimetric criteria to assess the change in prescribed dose (PD) for lung cancer needed to maintain the same clinical results when using new generations of dose calculation algorithms. Nine lung cancer cases were studied. For each patient, three treatment plans were generated using exactly the same beam arrangements. In plan 1, the dose was calculated using the pencil beam convolution (PBC) algorithm with heterogeneity correction by the modified Batho method (PBC-MB). In plan 2, the dose was calculated using the anisotropic analytical algorithm (AAA) with the same PD as plan 1. In plan 3, the dose was calculated using AAA with the monitor units (MUs) obtained from PBC-MB as input. The dosimetric criteria included MUs, the delivered dose at the isocentre (Diso), and the calculated dose to 95% of the target volume (D95). The bootstrap method was used to assess the significance of the dose differences and to accurately estimate the 95% confidence interval (95% CI). Wilcoxon and Spearman's rank tests were used to calculate P values and the correlation coefficient (ρ). A statistically significant dose difference was found with the point kernel model. A good correlation was observed between both algorithm types, with ρ>0.9. When using AAA instead of PBC-MB, an adjustment of the PD at the isocentre is suggested. For the given set of patients, we assessed the need to readjust the PD for lung cancer using dosimetric indices and the bootstrap statistical method. Thus, if the goal is to maintain the same clinical results, the PD for lung tumors has to be adjusted when moving to AAA. According to our simulation, we suggest readjusting the PD by 5%, together with an optimization of beam arrangements to better protect the organs at risk (OARs).
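The bootstrap step is simple to reproduce. The per-patient differences below are invented placeholders, not the study's data; the point is the percentile confidence-interval mechanics.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical per-patient dose differences (%) at the isocentre between
# AAA and PBC-MB for nine lung cases (illustrative numbers only).
diff = np.array([4.2, 5.6, 3.9, 6.1, 4.8, 5.3, 4.5, 5.9, 5.1])

# Nonparametric bootstrap of the mean difference and its 95% CI.
boot = np.array([rng.choice(diff, size=diff.size, replace=True).mean()
                 for _ in range(10_000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean difference {diff.mean():.2f}%  (95% CI {lo:.2f} to {hi:.2f})")
# If the CI excludes zero, the dose difference between algorithms is
# statistically significant and a prescription adjustment can be justified.
```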
Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.
van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim
2018-05-21
Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.
Reconstruction of noisy and blurred images using blur kernel
NASA Astrophysics Data System (ADS)
Ellappan, Vijayan; Chopra, Vishal
2017-11-01
Blur is common in digital images. It can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images that uses sparse representation to identify the blur kernel. By analyzing image coordinates at coarse and fine scales, we extract kernel-based image coordinates and, from that observation, obtain the motion angle of the shaken or blurred image. We then calculate the length of the motion kernel using the Radon transform and Fourier analysis of the image, and apply the Lucy-Richardson algorithm, a non-blind deconvolution (NBID) algorithm, to obtain a cleaner, less noisy output image. All these operations are performed in MATLAB.
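A minimal non-blind Lucy-Richardson sketch, with a toy scene and a hand-built linear-motion PSF standing in for the angle and length that the Radon/Fourier analysis would estimate from a real blurred image.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iters=30):
    """Non-blind Richardson-Lucy deconvolution (all arrays non-negative)."""
    est = np.full_like(image, image.mean())
    psf_m = psf[::-1, ::-1]                      # mirrored PSF
    for _ in range(iters):
        blurred = fftconvolve(est, psf, mode="same") + 1e-12
        est *= fftconvolve(image / blurred, psf_m, mode="same")
    return est

# Toy linear-motion PSF: length 9 pixels at 30 degrees (the quantities the
# paper's Radon/Fourier analysis would estimate from the blurred image).
L, theta = 9, np.deg2rad(30)
psf = np.zeros((15, 15))
for ti in np.linspace(-(L - 1) / 2, (L - 1) / 2, 5 * L):
    psf[int(round(7 + ti * np.sin(theta))),
        int(round(7 + ti * np.cos(theta)))] = 1.0
psf /= psf.sum()

sharp = np.zeros((64, 64)); sharp[24:40, 28:36] = 1.0   # simple test scene
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print("RMS error before/after:",
      np.sqrt(((blurred - sharp) ** 2).mean()).round(4),
      np.sqrt(((restored - sharp) ** 2).mean()).round(4))
```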
Application of the matrix exponential kernel
NASA Technical Reports Server (NTRS)
Rohach, A. F.
1972-01-01
A point matrix kernel for radiation transport, developed by the transmission matrix method, has been used to develop buildup factors and energy spectra through slab layers of different materials for a point isotropic source. Combinations of lead-water slabs were chosen for examples because of the extreme differences in shielding properties of these two materials.
Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain
ERIC Educational Resources Information Center
Hannagan, Thomas; Grainger, Jonathan
2012-01-01
It has been recently argued that some machine learning techniques known as kernel methods could be relevant for capturing cognitive and neural mechanisms (Jäkel, Schölkopf, & Wichmann, 2009). We point out that "string kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…
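A string kernel is easy to make concrete: the k-spectrum kernel counts shared k-mers. A minimal sketch with illustrative words, not the paper's stimuli.

```python
from collections import Counter
import math

def spectrum_kernel(s, t, k=3):
    """k-spectrum string kernel: inner product of k-mer count vectors."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[g] * ct[g] for g in cs.keys() & ct.keys())

def normalized(s, t, k=3):
    return spectrum_kernel(s, t, k) / math.sqrt(
        spectrum_kernel(s, s, k) * spectrum_kernel(t, t, k))

# Transposed-letter "words" stay similar under the spectrum kernel, the
# kind of property relevant to visual word recognition.
print(normalized("kernel", "kenrel", k=2))
print(normalized("kernel", "planet", k=2))
```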
NASA Astrophysics Data System (ADS)
Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran
2015-12-01
The generalized quantum master equation provides a powerful tool to describe the dynamics of quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima-Zwanzig-Mori time-convolution (TC) formulation and the other on the Tokuyama-Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called "memory kernel" or "generator" beyond second- or fourth-order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtaining the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform; thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverses in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
Hentschinski, M; Kusina, A; Kutak, K; Serino, M
2018-01-01
We calculate the transverse momentum dependent gluon-to-gluon splitting function within k_T-factorization, generalizing the framework employed in the calculation of the quark splitting functions in Hautmann et al. (Nucl Phys B 865:54-66, arXiv:1205.1759, 2012), Gituliar et al. (JHEP 01:181, arXiv:1511.08439, 2016) and Hentschinski et al. (Phys Rev D 94(11):114013, arXiv:1607.01507, 2016), and demonstrate at the same time the consistency of the extended formalism with previous results. While existing versions of k_T-factorized evolution equations already contain a gluon-to-gluon splitting function, i.e. the leading-order Balitsky-Fadin-Kuraev-Lipatov (BFKL) kernel or the Ciafaloni-Catani-Fiorani-Marchesini (CCFM) kernel, the obtained splitting function has the important property that it reduces to the leading-order BFKL kernel in the high-energy limit, to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) gluon-to-gluon splitting function in the collinear limit, and to the CCFM kernel in the soft limit. At the same time, we demonstrate that this splitting kernel can be obtained from a direct calculation of the QCD Feynman diagrams, based on a combined implementation of the Curci-Furmanski-Petronzio formalism for the calculation of the collinear splitting functions and the framework of high-energy factorization.
NASA Astrophysics Data System (ADS)
Hsieh, M.; Zhao, L.; Ma, K.
2010-12-01
The finite-frequency approach enables seismic tomography to fully utilize the spatial and temporal distributions of the seismic wavefield to improve resolution. In achieving this goal, one of the most important tasks is to compute efficiently and accurately the (Fréchet) sensitivity kernels of finite-frequency seismic observables, such as traveltime and amplitude, with respect to perturbations of the model parameters. In the scattering-integral approach, the Fréchet kernels are expressed in terms of the strain Green tensors (SGTs), and a pre-established SGT database is necessary to achieve practical efficiency for a three-dimensional reference model in which the SGTs must be calculated numerically. Methods for computing Fréchet kernels for seismic velocities have long been established. In this study, we develop algorithms based on the finite-difference method for calculating Fréchet kernels for the quality factor Qμ and for seismic boundary topography. Kernels for the quality factor can be obtained in a way similar to those for seismic velocities with the help of the Hilbert transform. The effects of seismic velocities and the quality factor on either traveltime or amplitude are coupled. Kernels for boundary topography involve the spatial gradient of the SGTs, and they also exhibit interesting finite-frequency characteristics. Examples of quality-factor and boundary-topography kernels are shown for a realistic model of the Taiwan region with three-dimensional velocity variations as well as surface and Moho discontinuity topography.
Quality changes in macadamia kernel between harvest and farm-gate.
Walton, David A; Wallace, Helen M
2011-02-01
Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.
Forced Ignition Study Based On Wavelet Method
NASA Astrophysics Data System (ADS)
Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.
2011-05-01
The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets, is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.
Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam
2018-05-21
Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitations, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as the optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both Green's function and TDDFT formalisms. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to the systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as the MBPT formalism, use unoccupied states (which are defined with respect to the Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach to larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions to treat electron-electron correlation in both ground- and excited-state wave functions. Using this ansatz, it is shown, by both diagrammatic and algebraic techniques, that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of the linked-cluster theorem in real-space representation. The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems, and the results were found to be in good agreement with the EOM-CCSD and GW+BSE methods. The numerical results highlight the effectiveness of the developed method for overcoming the computational barrier to accurately determining the electron-hole interaction kernel in applications to large finite systems such as quantum dots and nanorods.
Alternative Derivations for the Poisson Integral Formula
ERIC Educational Resources Information Center
Chen, J. T.; Wu, C. S.
2006-01-01
The Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in series form through the direct BEM, free of the concept of an image point, by using the null-field integral equation in conjunction with degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of the Poisson…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray-tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time, the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve dose calculation accuracy relative to the CA method, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm{sup 2} field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
NASA Astrophysics Data System (ADS)
Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.
2018-02-01
The purpose of this study was to determine the performance, productivity and feasibility of operating a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy, including by-products, to input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started with collecting the data needed for the energy input, such as palm kernel prices, energy demand and depreciation of the factory. The energy output and its by-products comprise the whole production price, such as the palm kernel oil price and the prices of the remaining products such as shells and pulp. The energy equivalence of palm kernel oil was calculated to analyze the value of the EPR based on processing capacity per year. The investigation was carried out at the Kernel Oil Processing Plant PT-X at a Sumatera Utara plantation. The value of the EPR was 1.54 (EPR > 1), which indicates that processing palm kernel into palm kernel oil is feasible in terms of energy productivity.
Three-dimensional waveform sensitivity kernels
NASA Astrophysics Data System (ADS)
Marquering, Henk; Nolet, Guust; Dahlen, F. A.
1998-03-01
The sensitivity of intermediate-period (~10-100s) seismic waveforms to the lateral heterogeneity of the Earth is computed using an efficient technique based upon surface-wave mode coupling. This formulation yields a general, fully fledged 3-D relationship between data and model without imposing smoothness constraints on the lateral heterogeneity. The calculations are based upon the Born approximation, which yields a linear relation between data and model. The linear relation ensures fast forward calculations and makes the formulation suitable for inversion schemes; however, higher-order effects such as wave-front healing are neglected. By including up to 20 surface-wave modes, we obtain Fréchet, or sensitivity, kernels for waveforms in the time frame that starts at the S arrival and which includes direct and surface-reflected body waves. These 3-D sensitivity kernels provide new insights into seismic-wave propagation, and suggest that there may be stringent limitations on the validity of ray-theoretical interpretations. Even recently developed 2-D formulations, which ignore structure out of the source-receiver plane, differ substantially from our 3-D treatment. We infer that smoothness constraints on heterogeneity, required to justify the use of ray techniques, are unlikely to hold in realistic earth models. This puts the use of ray-theoretical techniques into question for the interpretation of intermediate-period seismic data. The computed 3-D sensitivity kernels display a number of phenomena that are counter-intuitive from a ray-geometrical point of view: (1) body waves exhibit significant sensitivity to structure up to 500km away from the source-receiver minor arc; (2) significant near-surface sensitivity above the two turning points of the SS wave is observed; (3) the later part of the SS wave packet is most sensitive to structure away from the source-receiver path; (4) the sensitivity of the higher-frequency part of the fundamental surface-wave mode is wider than for its faster, lower-frequency part; (5) delayed body waves may considerably influence fundamental Rayleigh and Love waveforms. The strong sensitivity of waveforms to crustal structure due to fundamental-mode-to-body-wave scattering precludes the use of phase-velocity filters to model body-wave arrivals. Results from the 3-D formulation suggest that the use of 2-D and 1-D techniques for the interpretation of intermediate-period waveforms should seriously be reconsidered.
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall, 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
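As a schematic illustration of the point-kernel approach shared by several of these codes, the dose rate from an isotropic point source falls off as D(r) = S · B(μr) · e^(−μr) / (4πr²); the sketch below uses an illustrative linear Taylor-form buildup factor, not the actual kernels of GGG-GP or QADMOD-GP:

```python
import math

def point_kernel_dose(S, mu, r, buildup=lambda mu_r: 1.0 + mu_r):
    """Attenuated point-source flux times a buildup factor.

    S  source strength (photons/s), mu linear attenuation coefficient (1/m),
    r  source-to-detector distance (m); the linear buildup factor is a
    placeholder assumption."""
    mu_r = mu * r
    return S * buildup(mu_r) * math.exp(-mu_r) / (4.0 * math.pi * r**2)

# e.g. relative falloff between 100 m and 700 m for mu = 0.01 /m
print(point_kernel_dose(1.0, 0.01, 700) / point_kernel_dose(1.0, 0.01, 100))
```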
Kernel-Based Sensor Fusion With Application to Audio-Visual Voice Activity Detection
NASA Astrophysics Data System (ADS)
Dov, David; Talmon, Ronen; Cohen, Israel
2016-12-01
In this paper, we address the problem of multiple view data fusion in the presence of noise and interferences. Recent studies have approached this problem using kernel methods, by relying particularly on a product of kernels constructed separately for each view. From a graph theory point of view, we analyze this fusion approach in a discrete setting. More specifically, based on a statistical model for the connectivity between data points, we propose an algorithm for the selection of the kernel bandwidth, a parameter, which, as we show, has important implications on the robustness of this fusion approach to interferences. Then, we consider the fusion of audio-visual speech signals measured by a single microphone and by a video camera pointed to the face of the speaker. Specifically, we address the task of voice activity detection, i.e., the detection of speech and non-speech segments, in the presence of structured interferences such as keyboard taps and office noise. We propose an algorithm for voice activity detection based on the audio-visual signal. Simulation results show that the proposed algorithm outperforms competing fusion and voice activity detection approaches. In addition, we demonstrate that a proper selection of the kernel bandwidth indeed leads to improved performance.
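A minimal sketch of the product-of-kernels fusion discussed above, with Gaussian kernels and an explicit bandwidth parameter (the random data and bandwidth values are illustrative assumptions, not the authors' experimental setup):

```python
import numpy as np

def gaussian_kernel(X, bandwidth):
    """Affinity matrix K[i, j] = exp(-||x_i - x_j||^2 / bandwidth)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / bandwidth)

rng = np.random.default_rng(0)
audio = rng.normal(size=(200, 10))   # view 1: e.g. audio features
video = rng.normal(size=(200, 25))   # view 2: e.g. video features

# Fuse the views by multiplying their kernels elementwise; the bandwidth
# controls how robust the fused affinity is to view-specific interferences.
K = gaussian_kernel(audio, bandwidth=10.0) * gaussian_kernel(video, bandwidth=25.0)
```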
NASA Astrophysics Data System (ADS)
Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.
2018-04-01
We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time-harmonic wave electric field. This code addresses the need for a configuration-space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.
Preliminary scattering kernels for ethane and triphenylmethane at cryogenic temperatures
NASA Astrophysics Data System (ADS)
Cantargi, F.; Granada, J. R.; Damián, J. I. Márquez
2017-09-01
Two potential cold-moderator materials were studied: ethane and triphenylmethane. The first, ethane (C2H6), is an organic compound that is very interesting from the neutronic point of view, in some respects better than liquid methane for producing subthermal neutrons, not only because it remains in the liquid phase over a wider temperature range (Tf = 90.4 K, Tb = 184.6 K), but also because of its high protonic density together with a frequency spectrum featuring a low-energy rotational band. The second, triphenylmethane, is a hydrocarbon with formula C19H16 that has already been proposed as a good candidate for a cold moderator. Following one of the main research topics of the Neutron Physics Department of Centro Atómico Bariloche, we present here two ways to estimate the frequency spectrum needed to feed the NJOY nuclear data processing system in order to generate the scattering law of each material of interest. For ethane, molecular dynamics computer simulations were performed, while for triphenylmethane existing experimental and calculated data were used to produce a new scattering kernel. With these models, cross-section libraries were generated and applied to neutron spectra calculations.
Aerodynamics Via Acoustics: Application of Acoustic Formulas for Aerodynamic Calculations
NASA Technical Reports Server (NTRS)
Farassat, F.; Myers, M. K.
1986-01-01
Prediction of aerodynamic loads on bodies in arbitrary motion is considered from an acoustic point of view, i.e., in a frame of reference fixed in the undisturbed medium. An inhomogeneous wave equation which governs the disturbance pressure is constructed and solved formally using generalized function theory. When the observer is located on the moving body surface there results a singular linear integral equation for surface pressure. Two different methods for obtaining such equations are discussed. Both steady and unsteady aerodynamic calculations are considered. Two examples are presented, the more important being an application to propeller aerodynamics. Of particular interest for numerical applications is the analytical behavior of the kernel functions in the various integral equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Takayanagi, T; Fujii, Y
2014-06-15
Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot-scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the agreement between measurements and calculation results in absolute dose for the two types of beam kernel. Methods: A dose kernel is one of the important input data required by the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We adopted double and triple Gaussian models as the lateral distribution in order to account for large-angle scattering due to nuclear reactions, by fitting simulated in-water lateral dose profiles of a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and stored in a separate look-up table for each beam energy. The process of beam modeling follows the method of MDACC [X.R. Zhu 2013]. Results: Comparison of the absolute doses calculated by the double Gaussian model with those measured at the center of the SOBP shows that the difference increases up to 3.5% in the high-energy region, because large-angle scattering due to nuclear reactions is not sufficiently accounted for at intermediate depths in the double Gaussian model. When the triple Gaussian dose kernel is employed, the measured absolute dose at the center of the SOBP agrees with calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated beam modeling of dose distributions employing double and triple Gaussian dose kernels. A treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot-scanning technique at the Nagoya Proton Therapy Center.
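A sketch of how such a lateral kernel might be fitted at one depth, as a sum of three Gaussians (the amplitudes, widths, and mock profile below are illustrative assumptions, not the center's measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_gaussian(r, a1, s1, a2, s2, a3, s3):
    """Lateral dose profile as a weighted sum of three Gaussians in radius r."""
    return (a1 * np.exp(-r**2 / (2 * s1**2))
            + a2 * np.exp(-r**2 / (2 * s2**2))
            + a3 * np.exp(-r**2 / (2 * s3**2)))

r = np.linspace(0.0, 50.0, 200)                                  # mm
profile = triple_gaussian(r, 1.0, 3.0, 0.05, 10.0, 0.005, 25.0)  # mock data
p0 = (1.0, 2.0, 0.1, 8.0, 0.01, 20.0)                            # initial guesses
params, _ = curve_fit(triple_gaussian, r, profile, p0=p0)
```

The wide third Gaussian is what captures the large-angle nuclear-scatter halo that a double Gaussian underestimates.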
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased the overall kernel error to <10^-11 and the invasion time error to <5%.
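A 1-D sketch contrasting the two discretization methods named above: cell-center sampling evaluates the Gaussian density at each cell midpoint, while cell integration integrates it over each unit cell via the error function (grid size and σ values are illustrative assumptions):

```python
import numpy as np
from scipy.special import erf
from scipy.stats import norm

def cell_center_kernel(n_cells, sigma):
    """Sample the Gaussian density at cell centers, then normalize."""
    x = np.arange(n_cells) - n_cells // 2
    k = norm.pdf(x, scale=sigma)
    return k / k.sum()

def cell_integrated_kernel(n_cells, sigma):
    """Integrate the Gaussian density over each unit cell [x-0.5, x+0.5]."""
    x = np.arange(n_cells) - n_cells // 2
    k = 0.5 * (erf((x + 0.5) / (np.sqrt(2) * sigma))
               - erf((x - 0.5) / (np.sqrt(2) * sigma)))
    return k / k.sum()

# for small sigma the two discretizations diverge noticeably
print(cell_center_kernel(5, 0.2).round(4))
print(cell_integrated_kernel(5, 0.2).round(4))
```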
NASA Astrophysics Data System (ADS)
Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.
2017-12-01
Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and highly accurate 3D GLQ integration based on the equivalence of the kernel matrix, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing the repeated elements of each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimized strategies. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement, by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
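A minimal sketch of 3-D Gauss-Legendre quadrature over a single tesseroid, with the integrand left generic (the node counts, tesseroid bounds, and the 1/ℓ Newton kernel below are illustrative assumptions, without the paper's kernel-equivalence or adaptive-discretization optimizations):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def glq_tesseroid(kernel, r_lim, th_lim, lam_lim, n=(4, 4, 4)):
    """3-D Gauss-Legendre quadrature of kernel(r, th, lam) over a tesseroid
    (colatitude th, longitude lam); volume element r^2 sin(th) included."""
    nodes, weights = [], []
    for (lo, hi), m in zip((r_lim, th_lim, lam_lim), n):
        x, w = leggauss(m)                      # nodes/weights on [-1, 1]
        nodes.append(0.5 * (hi - lo) * x + 0.5 * (hi + lo))
        weights.append(0.5 * (hi - lo) * w)
    total = 0.0
    for r, wr in zip(nodes[0], weights[0]):
        for th, wt in zip(nodes[1], weights[1]):
            for lam, wl in zip(nodes[2], weights[2]):
                total += wr * wt * wl * kernel(r, th, lam) * r**2 * np.sin(th)
    return total

# e.g. Newton kernel 1/l to an observation point outside the tesseroid
# (density and gravitational constant omitted for brevity)
obs = np.array([7000e3, 0.0, 0.0])
def newton(r, th, lam):
    p = r * np.array([np.sin(th) * np.cos(lam),
                      np.sin(th) * np.sin(lam), np.cos(th)])
    return 1.0 / np.linalg.norm(obs - p)

val = glq_tesseroid(newton, (6371e3, 6471e3), (1.0, 1.2), (0.1, 0.3))
```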
MPACT Subgroup Self-Shielding Efficiency Improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stimpson, Shane; Liu, Yuxuan; Collins, Benjamin S.
Recent developments to improve the efficiency of the MOC solvers in MPACT have yielded effective kernels that loop over several energy groups at once, rather than looping over one group at a time. These kernels have produced roughly a 2x speedup of the MOC sweeping time during eigenvalue calculation. However, the self-shielding subgroup calculation, which typically requires substantial solve time, had not been reevaluated to take advantage of these new kernels. The improvements covered in this report start by integrating the multigroup kernel concepts into the subgroup calculation, which is then used as the basis for further extensions. The next improvement covered is what is currently termed “Lumped Parameter MOC”. Because the subgroup calculation is a purely fixed-source problem and multiple sweeps are performed only to update the boundary angular fluxes, the sweep procedure can be condensed to allow for the instantaneous propagation of the flux across a spatial domain, without the need to sweep along all segments of a ray. Once the boundary angular fluxes are considered converged, an additional sweep that tallies the scalar flux is completed. The last improvement investigated is a possible reduction of the number of azimuthal angles per octant in the shielding sweep. Typically 16 azimuthal angles per octant are used for self-shielding and eigenvalue calculations, but it is possible that the self-shielding sweeps are less sensitive to the number of angles than the full eigenvalue calculation.
Noise kernels of stochastic gravity in conformally-flat spacetimes
NASA Astrophysics Data System (ADS)
Cho, H. T.; Hu, B. L.
2015-03-01
The central object in the theory of semiclassical stochastic gravity is the noise kernel, which is the symmetric two-point correlation function of the stress-energy tensor. Using the corresponding Wightman functions in Minkowski, Einstein and open Einstein spaces, we construct the noise kernels of a conformally coupled scalar field in these spacetimes. From them we show that the noise kernels in conformally flat spacetimes, including the Friedmann-Robertson-Walker universes, can be obtained in closed analytic form by using a combination of conformal and coordinate transformations.
Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P
2017-01-01
Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
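A sketch of the sliding-window Fourier idea behind the first algorithm: within each window of a 1-D intensity profile along the cob axis, the dominant spatial frequency gives the local kernel spacing (the window length and synthetic test signal are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def local_kernel_spacing(profile, window=64, step=8):
    """Estimate kernel spacing (pixels) in sliding windows via the FFT peak."""
    spacings = []
    for start in range(0, len(profile) - window, step):
        seg = profile[start:start + window]
        seg = (seg - seg.mean()) * np.hanning(window)   # detrend and taper
        spec = np.abs(np.fft.rfft(seg))
        freq = np.fft.rfftfreq(window)
        k = spec[1:].argmax() + 1                       # skip the DC bin
        spacings.append(1.0 / freq[k])                  # period in pixels
    return np.array(spacings)

# synthetic cob profile: kernels ~12 px apart plus noise
x = np.arange(600)
profile = (np.sin(2 * np.pi * x / 12.0)
           + 0.2 * np.random.default_rng(1).normal(size=x.size))
print(local_kernel_spacing(profile).mean())  # approximately 12
```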
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
SU-E-T-439: An Improved Formula of Scatter-To-Primary Ratio for Photon Dose Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, T
2014-06-01
Purpose: The scatter-to-primary ratio (SPR) is an important dosimetric quantity that describes the contribution of scattered photons in an external photon beam. The purpose of this study is to develop an improved analytical formula that describes SPR as a function of circular field size (r) and depth (d) using Monte Carlo (MC) simulation. Methods: MC simulation was performed for the Mohan photon spectra (Co-60, 4, 6, 10, 15, 23 MV) using the EGSNRC code. Point-spread scatter dose kernels in water were generated. The SPR was also calculated using MC simulation as a function of field size for circular fields of radius r and depth d. The doses from forward-scattered and backscattered photons were calculated by convolving the point-spread scatter dose kernel, accounting for scatter photons that contribute to dose before (z' < d) and after (z' > d) reaching the depth of interest d, where z' is the location of the scatter photons. The depth dependence of the ratio of the forward-scatter and backscatter doses was determined as a function of depth and field size. Results: We improved the existing 3-parameter (a, w, d0) empirical formula for SPR by introducing a depth dependence for the parameter d0, which becomes 0 at larger depths. The depth dependence of d0 can be calculated directly as the ratio of backscatter to forward-scatter doses for otherwise the same field and depth. With the improved empirical formula, we can fit the SPR for all megavoltage photon beams to within 2%. The existing 3-parameter formula cannot fit the SPR data for Co-60 to better than 3.1%. Conclusion: An improved empirical formula is developed that fits the SPR for all megavoltage photon energies to within 2%.
Travel-time sensitivity kernels in long-range propagation.
Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A
2009-11-01
Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments, and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and the corresponding Fresnel volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs), one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel times, are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.
Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment
NASA Astrophysics Data System (ADS)
Nigg, D. W.; Wheeler, F. J.
1981-01-01
A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.
Wu, Xin; Koslowski, Axel; Thiel, Walter
2012-07-10
In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been used widely. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
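A sketch of the two kernel-approximation routes named above, using scikit-learn's implementations of the Nyström method and random Fourier features (the data, kernel width, and component count are illustrative assumptions, not the paper's experimental settings):

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))

K_exact = rbf_kernel(X, gamma=0.1)                       # full kernel matrix

# Nystrom: low-rank factorization built from landmark points
Zn = Nystroem(gamma=0.1, n_components=100, random_state=0).fit_transform(X)
# Random Fourier features: explicit randomized feature map
Zr = RBFSampler(gamma=0.1, n_components=100, random_state=0).fit_transform(X)

# Either map turns kernel learning into a linear problem in Z, so a linear
# solver such as truncated Newton can be applied to Z directly.
print(np.abs(Zn @ Zn.T - K_exact).mean(), np.abs(Zr @ Zr.T - K_exact).mean())
```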
Richardson-Lucy deblurring for the star scene under a thinning motion path
NASA Astrophysics Data System (ADS)
Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining
2015-05-01
This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurate estimation of the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene using a thinned-motion blur model that describes the camera's path. Building the blur kernel from this thinned motion path is more effective at modeling the motion blur introduced by the camera's ego motion than conventional blind, kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star-point trajectory, and hence the blur kernel, from the motion-blurred star image. We then detail how the motion blur model can be incorporated into the Richardson-Lucy (RL) deblurring algorithm, which reveals its overall effectiveness. In addition, compared with conventional blur-kernel estimation, experimental results show that the proposed method of using a thinning algorithm to obtain the motion blur kernel has lower complexity, higher efficiency and better accuracy, which contributes to better restoration of motion-blurred star images.
Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity
NASA Astrophysics Data System (ADS)
Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.
2018-05-01
We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Liu, B; Liang, B
Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields made possible by the multi-leaf collimator system recently introduced with the CyberKnife M6 system. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this work is to develop a GPU-based fast convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in the beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. The EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed based on the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam-hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The differences between measured and calculated TMR are less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; the results showed better dose calculation accuracy than the Ray-tracing algorithm, benchmarked against the Monte Carlo method, for heterogeneous cases. The dose calculation takes several seconds per beam, depending on collimator size and the dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system; it was proven to be efficient and accurate for clinical purposes and can be easily implemented in a TPS.
Approaches to reducing photon dose calculation errors near metal implants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Jessie Y.; Followill, David S.; Howell, Reb
Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.
Exact combinatorial approach to finite coagulating systems
NASA Astrophysics Data System (ADS)
Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr
2018-02-01
This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
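A sketch of a direct stochastic simulation of such a finite coagulating system with the constant kernel and monodisperse initial conditions, against which the combinatorial expressions can be checked numerically (the system size, number of merging events, and run count are illustrative assumptions):

```python
import random
from collections import Counter

def simulate_constant_kernel(n_monomers, n_steps, seed=0):
    """Binary aggregation with a constant kernel: merge a uniformly random
    pair of clusters at each discrete time step; return size histograms."""
    rng = random.Random(seed)
    clusters = [1] * n_monomers            # monodisperse initial condition
    history = [Counter(clusters)]
    for _ in range(n_steps):
        if len(clusters) < 2:
            break
        i, j = rng.sample(range(len(clusters)), 2)
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
        history.append(Counter(clusters))
    return history

# average number of size-2 clusters after 10 merging events, over many runs
runs = [simulate_constant_kernel(50, 10, seed=s)[-1][2] for s in range(2000)]
print(sum(runs) / len(runs))
```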
NASA Astrophysics Data System (ADS)
Yu, Y.; Shen, Y.; Chen, Y. J.
2015-12-01
By using ray theory in conjunction with the Born approximation, Dahlen et al. [2000] computed 3-D sensitivity kernels for finite-frequency seismic traveltimes. A series of studies have been conducted based on this theory to model the mantle velocity structure [e.g., Hung et al., 2004; Montelli et al., 2004; Ren and Shen, 2008; Yang et al., 2009; Liang et al., 2011; Tang et al., 2014]. One of the simplifications in the calculation of the kernels is the paraxial assumption, which may not be strictly valid near the receiver, the region of interest in regional teleseismic tomography. In this study, we improve the accuracy of the traveltime sensitivity kernels of the first P arrival by eliminating the paraxial approximation. For calculation efficiency, the traveltime table built by the Fast Marching Method (FMM) is used to calculate both the wave vector and the geometrical spreading at every grid point in the whole volume. The improved kernels maintain the sign but have different amplitudes at different locations. We also find that when the directivity of the scattered wave is taken into consideration, the differential sensitivity kernel of traveltimes measured at the vertical and radial components of the same receiver concentrates beneath the receiver, which can be used to invert for structure inside the Earth. Compared with conventional teleseismic tomography, which uses the differential traveltimes between two stations in an array, this method is not affected by instrument response or timing errors, and it reduces the uncertainty caused by the finite dimension of the model in regional tomography. In addition, the cross-dependence of P traveltimes on S-wave velocity anomalies is significant and sensitive to the structure beneath the receiver. Thus, with the component-differential finite-frequency sensitivity kernel, anomalies in both P-wave and S-wave velocity and in the Vp/Vs ratio can be obtained at the same time.
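For context, a first-arrival traveltime table of the kind described can be computed with the fast marching method; the sketch below uses the third-party scikit-fmm package on a toy 2-D velocity model (the grid, source position, and velocity gradient are illustrative assumptions, not this study's setup):

```python
import numpy as np
import skfmm  # scikit-fmm: fast marching eikonal solver

# toy 2-D model: wave speed increases linearly with depth
ny, nx, dx = 200, 300, 1.0
speed = 1.0 + 0.01 * np.arange(ny)[:, None] * np.ones((ny, nx))

# phi changes sign at the source location (its zero contour is the source)
phi = np.ones((ny, nx))
phi[0, nx // 2] = -1.0

travel_time = skfmm.travel_time(phi, speed, dx=dx)  # first-arrival times
```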
Different kernel functions due to rainfall response from borehole strainmeter in Taiwan
NASA Astrophysics Data System (ADS)
Yen Chen, Chih; Hu, Jyr Ching; LIu, Chi Ching
2014-05-01
In order to understand the processes that induce earthquakes, a project monitoring fault activity using 3-component Gladwin Tensor Strainmeters (GTSM) has been under way since 2003 in Taiwan, one of the most seismically active regions in the world. The observed strain contains several different effects, including barometric, tidal, groundwater, precipitation, tectonic, seismic and other irregular noise. After removing the tidal and air pressure responses from the strain, we still find anomalies highly correlated with rainfall on short time scales of days. The strain response induced by rainfall can be separated into two parts, as is observed in groundwater: a slow response and a quick response. The quick response reflects the strain responding to the load of falling water on the ground surface. A kernel function gives the continuing response induced by a unit of precipitation in the time domain. We isolate the quick response from data with the tidal and barometric responses removed, and then calculate the kernel function by deconvolution. An average kernel function was then calculated to reduce the noise level. Five of the sites installed by CGS Taiwan were selected to calculate kernel functions for the individual sites. The results show that the rainfall response may differ with the environmental setting. For stations on gentle terrain, the kernel functions of all sites show a similar trend: they rise quickly to a maximum within 1 to 2 hr and then decay gently to near zero over a period of 2-3 days. For sites beside rivers, a second peak appears in the kernel function when water collected in the catchment flows past the site, related to the hydrograph of the creeks. Moreover, at sites prone to landsliding, landslides will occur when more rainfall is stored, as at DARB in ChiaYi; there, the shape of the kernel function is controlled by landslides and debris flows.
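A sketch of one standard way to estimate such a kernel by least-squares deconvolution: build the convolution (Toeplitz) matrix of the rainfall series and solve for the kernel (the synthetic series, kernel length, and noise level are illustrative assumptions, not the GTSM data):

```python
import numpy as np

def estimate_kernel(rain, strain, n_lags):
    """Solve strain ~= conv(rain, kernel) for the kernel via least squares."""
    rows = len(strain)
    A = np.zeros((rows, n_lags))
    for lag in range(n_lags):                 # Toeplitz convolution matrix
        A[lag:, lag] = rain[:rows - lag]
    kernel, *_ = np.linalg.lstsq(A, strain, rcond=None)
    return kernel

rng = np.random.default_rng(0)
rain = rng.exponential(1.0, size=500) * (rng.random(500) < 0.1)  # sparse rain
true_k = np.exp(-np.arange(48) / 12.0)
true_k /= true_k.sum()                         # decaying unit response
strain = np.convolve(rain, true_k)[:500] + 0.01 * rng.normal(size=500)

est = estimate_kernel(rain, strain, 48)
print(np.abs(est - true_k).max())              # small residual error
```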
Exploring microwave resonant multi-point ignition using high-speed schlieren imaging
NASA Astrophysics Data System (ADS)
Liu, Cheng; Zhang, Guixin; Xie, Hong; Deng, Lei; Wang, Zhi
2018-03-01
Microwave plasma offers a potential method to achieve rapid combustion in a high-speed combustor. In this paper, microwave resonant multi-point ignition and its control method have been studied via high-speed schlieren imaging. The experiment was conducted with the microwave resonant ignition system and the schlieren optical system. A microwave pulse at 2.45 GHz with 2 ms width and 3 kW peak power was employed as the ignition energy source to produce initial flame kernels in the combustion chamber. A reflective schlieren method was designed to illustrate the flame development process with a high-speed camera. The bottom of the combustion chamber was made of a quartz glass coated with indium tin oxide, which ensures sufficient microwave reflection and light penetration. Ignition experiments were conducted at 2 bars with stoichiometric methane-air mixtures. Schlieren images show that flame kernels were generated at more than one location simultaneously and that the flame propagated at different speeds from different flame kernels. Ignition kernels are classified into three types according to their appearance. Pressure curves and combustion duration also show that multi-point ignition plays a significant role in accelerating combustion.
NASA Astrophysics Data System (ADS)
Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen
2014-05-01
We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scales and gradually increase the complexity in an attempt to explain the small scales, our method is bottom-up: it initially samples the small scales and then reduces the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical clustering (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Model (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. Kernels that become ill defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging any pair of kernels lowers the likelihood (calculated over the pdf of the kernels) but in turn reduces the model's complexity. The information loss or gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Analogous to an inter-distance matrix, whose element d_ij gives the distance between points i and j, we can construct an MDL gain/loss matrix whose element m_ij gives the information gain or loss resulting from merging kernels i and j. Based on this matrix, mergings that yield an MDL gain are performed in descending order until no gainful merging remains. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system, and we hope to use the acquired insights for earthquake forecasting.
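A sketch of the atomization step under stated assumptions: scikit-learn's Ward linkage and EM implementations stand in for the authors' pipeline, and only the 4-point support threshold comes from the abstract.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.mixture import GaussianMixture

    def atomize(points, n_init_clusters):
        # Initialize kernels from one level of the agglomerative (Ward) tree.
        labels = AgglomerativeClustering(n_clusters=n_init_clusters,
                                         linkage="ward").fit_predict(points)
        means = np.array([points[labels == k].mean(axis=0)
                          for k in range(n_init_clusters)])
        # Let EM converge from that initialization.
        gmm = GaussianMixture(n_components=n_init_clusters,
                              means_init=means).fit(points)
        support = np.bincount(gmm.predict(points), minlength=n_init_clusters)
        return gmm, support >= 4  # mask of well-defined kernels (>= 4 points)

Incrementing n_init_clusters until kernels start being discarded gives the maximum number of Gaussians the structure can hold; the MDL-scored pairwise merging then operates on the surviving protoclusters.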
Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen
2014-01-01
Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Many machine learning methods for PPI extraction have achieved promising results, but performance remains unsatisfactory. One reason is that semantic resources have largely been ignored. In this paper, we propose a multiple-kernel learning approach to extract PPIs, combining a feature-based kernel, a tree kernel and a semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) with a dynamic extension strategy to retrieve richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, showing that by integrating semantic information our method outperforms most state-of-the-art systems.
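A minimal sketch of the kernel-combination idea, assuming the three Gram matrices (feature-based, tree, semantic) have already been computed; the weights and names are illustrative, not the paper's tuned values:

    import numpy as np
    from sklearn.svm import SVC

    def combine_kernels(kernels, weights):
        # Convex combination of precomputed Gram matrices.
        K = sum(w * Km for w, Km in zip(weights, kernels))
        return K / sum(weights)

    # K_feat, K_tree, K_sem: hypothetical (n, n) Gram matrices on training pairs.
    # K_train = combine_kernels([K_feat, K_tree, K_sem], [0.4, 0.4, 0.2])
    # clf = SVC(kernel="precomputed").fit(K_train, y_train)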
Novel characterization method of impedance cardiography signals using time-frequency distributions.
Escrivá Muñoz, Jesús; Pan, Y; Ge, S; Jensen, E W; Vallverdú, M
2018-03-16
The purpose of this work is to describe a methodology for selecting the most adequate time-frequency distribution (TFD) kernel for the characterization of impedance cardiography (ICG) signals. The predominant ICG beat was extracted from a patient and synthesized using time-frequency variant Fourier approximations. These synthesized signals were used to optimize several TFD kernels by maximizing a performance measure. The optimized kernels were then tested for noise resistance on a clinical database. The resulting optimized TFD kernels are presented together with their performance, calculated using newly proposed methods. The procedure showcases a new way to select an appropriate kernel for ICG signals and compares the performance of different time-frequency kernels from the literature on ICG signals. We conclude that, for ICG signals, the performance (P) of the spectrogram with either Hanning or Hamming windows (P = 0.780) and of the extended modified beta distribution (P = 0.765) is similar, and higher than that of the other analyzed kernels. Graphical abstract: flowchart for the optimization of time-frequency distribution kernels for impedance cardiography signals.
Developments of Finite-Frequency Seismic Theory and Applications to Regional Tomographic Imaging
2009-01-31
In this project, we use the "banana-doughnut" sensitivity kernels of teleseismic body waves, in place of body-wave ray paths, to image the crust and mantle beneath eastern Eurasia; the kernels are calculated in 1D (Dahlen et al., 2000; Hung et al., 2000; Zhao et al., 2000). We have collected and processed...
Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction
NASA Astrophysics Data System (ADS)
Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc
2018-02-01
Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function for the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and used it to predict the time to clinical onset for subjects carrying a genetic mutation.
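A sketch of the kernel-scoring step, assuming scikit-learn's GP implementation; the paper's energy function also folds in the explained-variance score, which is omitted here:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, ExpSineSquared

    def bic_score(kernel, X, y):
        # Fit the GP, then penalize the log marginal likelihood by model size.
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        lml = gpr.log_marginal_likelihood(gpr.kernel_.theta)
        k = len(gpr.kernel_.theta)  # number of kernel hyperparameters
        return -2.0 * lml + k * np.log(len(y))  # smaller is better

    # One greedy compositional step: extend the current kernel by + or * with
    # each base kernel and keep the composition with the lowest score, e.g.
    # best = min([RBF(), RationalQuadratic(), ExpSineSquared()],
    #            key=lambda k: bic_score(k, X, y))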
NASA Astrophysics Data System (ADS)
Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.
2017-12-01
Accurate and efficient forward modeling methods are important for high-resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables a large time step in simulating wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth-order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behavior of the Earth with the generalized Maxwell body model. With the larger CFL condition number, the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. We then apply the scattering-integral method to calculate travel-time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three 'backward' simulations, one per component, and save the corresponding strain tensors. The sensitivity kernel at each point in the medium is the convolution of the two sets of strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and accurate strain tensors for calculating sensitivity kernels at regional and global scales.
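A schematic of the kernel assembly at a single grid point, assuming the stored forward and backward strain tensors are (nt, 3, 3) arrays; the physical weighting by the elastic or anelastic moduli is omitted, and all names are illustrative:

    import numpy as np

    def kernel_at_point(e_fwd, e_bwd, dt):
        # Zero-lag convolution: sum_ij integral e_fwd_ij(t) e_bwd_ij(T - t) dt.
        e_bwd_rev = e_bwd[::-1]  # time-reverse the receiver-side strain field
        return np.einsum("tij,tij->", e_fwd, e_bwd_rev) * dt

In practice this contraction is repeated for every grid point and every source-receiver pair, which is why saving the strain tensors from one forward and three backward runs dominates the storage cost.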
Improved response functions for gamma-ray skyshine analyses
NASA Astrophysics Data System (ADS)
Shultis, J. K.; Faw, R. E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three-parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
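A sketch of the fitting step with SciPy's least-squares fitter; the generic form a·x^b·exp(-c·x) below is an assumed stand-in, not necessarily the paper's actual three-parameter formula:

    import numpy as np
    from scipy.optimize import curve_fit

    def lbrf_form(x, a, b, c):
        # Assumed three-parameter shape: power law times exponential attenuation.
        return a * x ** b * np.exp(-c * x)

    # x: source-to-detector distances (m); r: point-kernel LBRF values for one
    # (energy, beam angle) pair -- both hypothetical inputs.
    # params, _ = curve_fit(lbrf_form, x, r, p0=(r[0], -1.0, 1e-3), maxfev=10000)
    # Repeating the fit over the energy/angle grid yields the tabulated parameters.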
On the Floating Point Performance of the i860 Microprocessor
NASA Technical Reports Server (NTRS)
Lee, King; Kutler, Paul (Technical Monitor)
1997-01-01
The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels showed performance is less than what memory bandwidth limitations would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko
2017-07-10
This study proposes a method to reduce the calculation time and memory usage required for computing cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution drastically reduces the calculation time and memory usage at no additional cost, compared with the numerical method that Fourier transforms the kernel function by fast Fourier transform. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
Monte Carlo calculations of energy deposition distributions of electrons below 20 keV in protein.
Tan, Zhenyu; Liu, Wei
2014-05-01
The distributions of energy depositions of electrons in semi-infinite bulk protein and the radial dose distributions of point-isotropic mono-energetic electron sources [i.e., the so-called dose point kernel (DPK)] in protein have been systematically calculated in the energy range below 20 keV, based on Monte Carlo methods. The ranges of electrons have been evaluated by extrapolating two calculated distributions, respectively, and the evaluated ranges of electrons are compared with the electron mean path length in protein which has been calculated by using electron inelastic cross sections described in this work in the continuous-slowing-down approximation. It has been found that for a given energy, the electron mean path length is smaller than the electron range evaluated from DPK, but it is large compared to the electron range obtained from the energy deposition distributions of electrons in semi-infinite bulk protein. The energy dependences of the extrapolated electron ranges based on the two investigated distributions are given, respectively, in a power-law form. In addition, the DPK in protein has also been compared with that in liquid water. An evident difference between the two DPKs is observed. The calculations presented in this work may be useful in studies of radiation effects on proteins.
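A minimal sketch of how an extrapolated range can be read off a computed DPK profile (tangent at the steepest falloff, extended to zero dose); the arrays are hypothetical inputs:

    import numpy as np

    def extrapolated_range(r, dose):
        # Tangent-line construction at the steepest descent of the falloff.
        slope = np.gradient(dose, r)
        i = np.argmin(slope)  # most negative slope
        return r[i] - dose[i] / slope[i]  # intercept with dose = 0

The power-law energy dependence reported above follows from repeating this at each source energy and fitting R(E) = a·E^b.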
A flexible new method for 3D measurement based on multi-view image sequences
NASA Astrophysics Data System (ADS)
Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu
2016-11-01
Three-dimensional measurement is fundamental to reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm; the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. A new three-principle filter for the calculation of the essential matrix is then designed, and the essential matrix is computed with an improved a contrario RANSAC method. A single-view point cloud is constructed accurately from two views; the overlapping features are then used to eliminate the accumulated errors introduced as views are added, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for 3D tooth measurement.
el-Adawy, T A; Rahma, E H; el-Badawey, A A; Gomaa, M A; Lásztity, R; Sarkadi, L
1994-01-01
Detoxification of apricot kernels by soaking in distilled water and ammonium hydroxide for 30 h at 47 degrees C decreased the total protein, non-protein nitrogen, total ash, glucose, sucrose, minerals, non-essential amino acids, polar amino acids, acidic amino acids, aromatic amino acids, antinutritional factors, hydrocyanic acid, tannins and phytic acid. On the other hand, removal of the toxic and bitter compounds from apricot kernels increased the relative content of crude fibre, starch, and total essential amino acids. Higher in-vitro protein digestibility and biological value were also observed. Overall, the detoxified apricot kernels were nutritionally well balanced, and the utilization and incorporation of detoxified apricot kernel flours in food products is completely safe from the toxicity point of view.
Computer program for supersonic Kernel-function flutter analysis of thin lifting surfaces
NASA Technical Reports Server (NTRS)
Cunningham, H. J.
1974-01-01
This report describes a computer program (program D2180) that has been prepared to implement the analysis described in (N71-10866) for calculating the aerodynamic forces on a class of harmonically oscillating planar lifting surfaces in supersonic potential flow. The planforms treated are the delta and modified-delta (arrowhead) planforms with subsonic leading and supersonic trailing edges, and (essentially) pointed tips. The resulting aerodynamic forces are applied in a Galerkin modal flutter analysis. The required input data are the flow and planform parameters including deflection-mode data, modal frequencies, and generalized masses.
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.
1971-01-01
The work performed in the following areas is summarized: (1) a realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package, which includes one- and two-dimensional discrete ordinates transport, point kernel, and single-scatter techniques, as well as cross-section preparation and data-processing codes; (2) techniques were developed to improve the automated data transfer in the coupled computation method of the code package and to improve the utilization of the package on the Univac 1108 computer system; and (3) the MSFC master data libraries were updated.
Consistent P1 Analysis of Aqueous Uranium-235 Critical Assemblies
NASA Technical Reports Server (NTRS)
Fieno, Daniel
1961-01-01
The lethargy-dependent equations of the consistent P1 approximation to the Boltzmann transport equation for slowing down neutrons have been used as the basis of an IBM 704 computer program. Some of the effects included are (1) linearly anisotropic center-of-mass elastic scattering, (2) heavy-element inelastic scattering based on the evaporation model of the nucleus, and (3) optional variation of the buckling with lethargy. The microscopic cross-section data developed for this program covered 473 lethargy points from lethargy u = 0 (10 MeV) to u = 19.8 (0.025 eV). The value of the fission neutron age in water calculated here is 26.5 square centimeters; this value is to be compared with the recent experimental value of 27.86 square centimeters. The Fourier transform of the slowing-down kernel for water to indium resonance energy calculated here compared well with the Fourier transform of the kernel for water as measured by Hill, Roberts, and Fitch. The method of calculation has been applied to uranyl fluoride - water solution critical assemblies. Theoretical results established for both unreflected and fully reflected critical assemblies have been compared with available experimental data. The theoretical buckling curve, derived as a function of the hydrogen to uranium-235 atom concentration for an energy-independent extrapolation distance, was successful in predicting the critical heights of various unreflected cylindrical assemblies. The critical dimensions of fully water-reflected cylindrical assemblies were reasonably well predicted using the theoretical buckling curve and reflector savings for equivalent spherical assemblies.
Heavy and Heavy-Light Mesons in the Covariant Spectator Theory
NASA Astrophysics Data System (ADS)
Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.
2018-05-01
The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interactions, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum was calculated, and we discuss what conclusions can be drawn from them, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even for excited states.
Characterization of a maximum-likelihood nonparametric density estimator of kernel type
NASA Technical Reports Server (NTRS)
Geman, S.; Mcclure, D. E.
1982-01-01
Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f(0), let sigma > 0, and consider estimators f of f(0) defined by (1).
NASA Astrophysics Data System (ADS)
Hawes, D. H.; Langley, R. S.
2018-01-01
Random excitation of mechanical systems occurs in a wide variety of structures and, in some applications, the power dissipated by such a system is of interest. In this paper, using the Wiener series, a general methodology is developed for calculating the power dissipated by a general nonlinear multi-degree-of-freedom oscillatory system excited by random Gaussian base motion of any spectrum. The Wiener series method is most commonly applied to systems with white-noise inputs but can be extended to encompass a general non-white input. From the extended series a simple expression for the dissipated power can be derived in terms of the first term, or kernel, of the series and the spectrum of the input. The first kernel can be calculated either via numerical simulation or from experimental data, and a useful property of the kernel is derived: the integral over its frequency-domain representation is proportional to the oscillating mass. The resulting equations offer a simple conceptual analysis of the power flow in nonlinear randomly excited systems and hence assist the design of any system where power dissipation is a consideration. The results are validated both numerically and experimentally using a base-excited cantilever beam with a nonlinear restoring force produced by magnets.
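A sketch of estimating the frequency-domain first kernel from measured records via the standard H1 spectral estimator, assuming base motion and response sampled at rate fs; the paper's power-dissipation expression then weights this kernel by the input spectrum:

    import numpy as np
    from scipy.signal import csd, welch

    def first_kernel(base_motion, response, fs, nperseg=4096):
        # H1 estimate: cross-spectrum over input auto-spectrum.
        f, Sxy = csd(base_motion, response, fs=fs, nperseg=nperseg)
        _, Sxx = welch(base_motion, fs=fs, nperseg=nperseg)
        return f, Sxy / Sxx

The mass property quoted above offers a consistency check: the integral of the kernel's frequency-domain representation should be proportional to the oscillating mass.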
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel function in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are carried out on various synthetic and phantom datasets for numerical assessment of the methods. The workflow is also applied to real data, including brain manifolds and multispectral images, to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images.
gkmSVM: an R package for gapped-kmer SVM
Ghandi, Mahmoud; Mohammad-Noori, Morteza; Ghareghani, Narges; Lee, Dongwon; Garraway, Levi; Beer, Michael A.
2016-01-01
Summary: We present a new R package for training gapped-kmer SVM classifiers for DNA and protein sequences. We describe an improved algorithm for kernel matrix calculation that speeds run time by about 2- to 5-fold over our original gkmSVM algorithm. This package supports several sequence kernels, including gkmSVM, kmer-SVM, the mismatch kernel and the wildcard kernel. Availability and Implementation: the gkmSVM package is freely available through the Comprehensive R Archive Network (CRAN) for Linux, Mac OS and Windows platforms. The C++ implementation is available at www.beerlab.org/gkmsvm Contact: mghandi@gmail.com or mbeer@jhu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153639
New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.
Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto
2017-06-21
We define three new linear response indices with promising applications for bond reactivity, using the mathematical framework of τ-CRT (finite-temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can be recovered in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.
Quantum kernel applications in medicinal chemistry.
Huang, Lulu; Massa, Lou
2012-07-01
Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
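A minimal sketch of the KEM double-kernel summation, assuming single-kernel energies E_a and pair energies E_ab have been computed with a quantum chemistry code, and following the standard double-kernel expression E ≈ Σ_{a<b} E_ab − (n−2) Σ_a E_a; all names and units are illustrative:

    import itertools

    def kem_energy(single, pair):
        # single: {a: E_a}; pair: {(a, b): E_ab} with a < b, energies in hartree.
        n = len(single)
        e_pairs = sum(pair[p] for p in itertools.combinations(sorted(single), 2))
        return e_pairs - (n - 2) * sum(single.values())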
Kernel-PCA data integration with enhanced interpretability
2014-01-01
Background: Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. One of the most powerful approaches for integrating heterogeneous data types is kernel-based methods. Kernel-based data integration consists of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, from which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function, and this new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory were processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
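A sketch of the kernel-construction idea under stated assumptions: a within-class scatter estimate is folded into the RBF width and the SVM is trained on the precomputed Gram matrix. The exact way the scatter enters the kernel follows the paper; this particular scaling is only an illustrative guess.

    import numpy as np
    from sklearn.svm import SVC

    def scatter_scaled_rbf(X, Z, labels):
        # Within-class scatter: mean squared distance to each class centroid.
        s = np.mean([((X[labels == c] - X[labels == c].mean(0)) ** 2).mean()
                     for c in np.unique(labels)])
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * s))  # the scatter sets the kernel width

    # K = scatter_scaled_rbf(X_train, X_train, y_train)
    # clf = SVC(kernel="precomputed", probability=True).fit(K, y_train)
    # probability=True exposes the posterior probabilities used for the final output.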
Effective dose rate coefficients for exposure to contaminated soil
Veinot, Kenneth G.; Eckerman, Keith F.; Bellamy, Michael B.; ...
2017-05-10
The Oak Ridge National Laboratory Center for Radiation Protection Knowledge has undertaken calculations related to various environmental exposure scenarios. A previous paper reported the results for submersion in radioactive air and immersion in water using age-specific mathematical phantoms. This paper presents age-specific effective dose rate coefficients derived using stylized mathematical phantoms for exposure to contaminated soils. Dose rate coefficients for photons, electrons, and positrons of discrete energies were calculated and folded with the emissions of the 1252 radionuclides addressed in ICRP Publication 107 to determine equivalent and effective dose rate coefficients. The MCNP6 radiation transport code was used for organ dose rate calculations for photons, and the contribution of electrons to the skin dose rate was derived using point-kernels. Bremsstrahlung and annihilation photons of positron emission were evaluated as discrete photons. The coefficients calculated in this work compare favorably to those reported in US Federal Guidance Report 12 as well as by other authors who employed voxel phantoms for similar exposure scenarios.
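A minimal sketch of the folding step, assuming a table of monoenergetic effective dose rate coefficients on an energy grid; the numbers in the comment are placeholders, not ICRP Publication 107 data:

    import numpy as np

    def fold(grid_energies, grid_coeffs, emission_energies, emission_yields):
        # Interpolate the monoenergetic coefficients at each photon line and
        # weight by the yield per decay.
        c = np.interp(emission_energies, grid_energies, grid_coeffs)
        return float(np.sum(emission_yields * c))

    # e.g. for a nuclide with one dominant photon line (energy in MeV, yield per
    # decay): fold(grid_e, grid_c, np.array([0.662]), np.array([0.85]))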
Features and flaws of a contact interaction treatment of the kaon
NASA Astrophysics Data System (ADS)
Chen, Chen; Chang, Lei; Roberts, Craig D.; Schmidt, Sebastian M.; Wan, Shaolong; Wilson, David J.
2013-04-01
Elastic and semileptonic transition form factors for the kaon and pion are calculated using the leading order in a global-symmetry-preserving truncation of the Dyson-Schwinger equations and a momentum-independent form for the associated kernels in the gap and Bethe-Salpeter equations. The computed form factors are compared both with those obtained using the same truncation but an interaction that preserves the one-loop renormalization-group behavior of QCD and with data. The comparisons show that in connection with observables revealed by probes with |Q²| ≲ M², where M ≈ 0.4 GeV is an infrared value of the dressed-quark mass, results obtained using a symmetry-preserving regularization of the contact interaction are not realistically distinguishable from those produced by more sophisticated kernels, and available data on kaon form factors do not extend into the domain whereupon one could distinguish among the interactions. The situation differs if one includes the domain Q² > M². Thereupon, a fully consistent treatment of the contact interaction produces form factors that are typically harder than those obtained with QCD renormalization-group-improved kernels. Among other things also described are a Ward identity for the inhomogeneous scalar vertex, similarity between the charge distribution of a dressed u quark in the K+ and that of the dressed u quark in the π+, and reflections upon the point whereat one might begin to see perturbative behavior in the pion form factor. Interpolations of the form factors are provided, which should assist in working to chart the interaction between light quarks by explicating the impact on hadron properties of differing assumptions about the behavior of the Bethe-Salpeter kernel.
Flood, Jessica S; Porphyre, Thibaud; Tildesley, Michael J; Woolhouse, Mark E J
2013-10-08
When modelling infectious diseases, accurately capturing the pattern of dissemination through space is key to providing optimal recommendations for control. Mathematical models of disease spread in livestock, such as for foot-and-mouth disease (FMD), have done this by incorporating a transmission kernel which describes the decay in transmission rate with increasing Euclidean distance from an infected premises (IP). However, this assumes a homogenous landscape, and is based on the distance between point locations of farms. Indeed, underlying the spatial pattern of spread are the contact networks involved in transmission. Accordingly, area-weighted tessellation around farm point locations has been used to approximate field-contiguity and simulate the effect of contiguous premises (CP) culling for FMD. Here, geographic data were used to determine contiguity based on distance between premises' fields and presence of landscape features for two sample areas in Scotland. Sensitivity, positive predictive value, and the True Skill Statistic (TSS) were calculated to determine how point distance measures and area-weighted tessellation compared to the 'gold standard' of the map-based measures in identifying CPs. In addition, the mean degree and density of the different contact networks were calculated. Utilising point distances <1 km and <5 km as a measure for contiguity resulted in poor discrimination between map-based CPs/non-CPs (TSS 0.279-0.344 and 0.385-0.400, respectively). Point distance <1 km missed a high proportion of map-based CPs; <5 km point distance picked up a high proportion of map-based non-CPs as CPs. Area-weighted tessellation performed best, with reasonable discrimination between map-based CPs/non-CPs (TSS 0.617-0.737) and comparable mean degree and density. Landscape features altered network properties considerably when taken into account. The farming landscape is not homogeneous. Basing contiguity on geographic locations of field boundaries and including landscape features known to affect transmission into FMD models are likely to improve individual farm-level accuracy of spatial predictions in the event of future outbreaks. If a substantial proportion of FMD transmission events are by contiguous spread, and CPs should be assigned an elevated relative transmission rate, the shape of the kernel could be significantly altered since ability to discriminate between map-based CPs and non-CPs is different over different Euclidean distances.
SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Tian, Z; Song, T
Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: profile parameters determining the dose kernel shape, and 2D scaling factors accounting for longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the beamlets, calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm² and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient beamlet dose calculation accuracy for IMRT optimization.
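A sketch of the scaling-factor step posed as a linear least-squares problem, assuming the per-beamlet doses at the measurement points have been precomputed into a matrix; all names are illustrative:

    import numpy as np
    from scipy.optimize import lsq_linear

    def fit_scaling_factors(A, d_ref):
        # A: (n_points, n_factors) dose contributions of each scaled component;
        # d_ref: reference broad-beam doses at the same points.
        res = lsq_linear(A, d_ref, bounds=(0.0, np.inf))  # nonnegative factors
        return res.x

    # Stacking all four field sizes (6x6 ... 20x20 cm^2) into A and d_ref
    # constrains the longitudinal and off-axis corrections simultaneously.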
NASA Astrophysics Data System (ADS)
Xu, Ye; van Beek, Edwin J.; McLennan, Geoffrey; Guo, Junfeng; Sonka, Milan; Hoffman, Eric
2006-03-01
In this study we utilize our texture characterization software (3-D AMFM) to characterize interstitial lung diseases (including emphysema) on MDCT-generated volumetric data using 3-dimensional texture features. We sought to test whether the scanner and reconstruction filter (kernel) type affect the classification of lung diseases using the 3-D AMFM. We collected MDCT images in three subject groups: emphysema (n=9), interstitial pulmonary fibrosis (IPF) (n=10), and normal non-smokers (n=9). In each group, images were acquired on either a Siemens Sensation 16 or 64-slice scanner (B50f or B30 reconstruction kernel) or a Philips 4-slice scanner (B reconstruction kernel). A total of 1516 volumes of interest (VOIs; 21×21 pixels in plane) were marked by two chest imaging experts using the Iowa Pulmonary Analysis Software Suite (PASS). We calculated 24 volumetric features. Bayesian methods were used for classification. Images from different scanners/kernels were combined in all possible combinations to test how robust the tissue classification was to differences in image characteristics. We used 10-fold cross-validation for testing. Sensitivity, specificity and accuracy were calculated. One-way analysis of variance (ANOVA) was used to compare the classification results between the various combinations of scanner and reconstruction kernel types. This study yielded a sensitivity of 94%, 91%, 97%, and 93% for the emphysema, ground-glass, honeycombing, and normal non-smoker patterns, respectively, using a mixture of all three subject groups. The specificity for these characterizations was 97%, 99%, 99%, and 98%, respectively. The ANOVA F test showed no significant difference (p < 0.05) between different combinations of data with respect to scanner and convolution kernel type. Since different MDCT and reconstruction kernel types did not show significant differences in the classification results, this study suggests that the 3-D AMFM can be applied generally across scanner and reconstruction kernel types.
Implementation of kernels on the Maestro processor
NASA Astrophysics Data System (ADS)
Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.
Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 when using 49 tiles.
NASA Astrophysics Data System (ADS)
Giap, Huan Bosco
Accurate calculation of the absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty of obtaining an accurate patient-specific 3-D activity map in vivo and of calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry, which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with an ^{131}I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology with true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of -16.3% to 4.4%. Volume quantitation errors ranged from -4.0% to 5.9% for volumes greater than 88 ml. The percentage differences between the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for the liver, 13.7% for the spleen, and 0.9% for the tumor. Good agreement (percent differences less than 8%) was found between the absorbed dose due to penetrating radiation calculated from this methodology and TLD measurements. More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to the tumor without exceeding the toxicity limits of normal tissues.
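A minimal sketch of the convolution step, assuming a SPECT-derived activity map in Bq per voxel and a radially symmetric ^{131}I dose point kernel supplied as a vectorized callable; both inputs and the softening at r = 0 are illustrative assumptions:

    import numpy as np
    from scipy.signal import fftconvolve

    def absorbed_dose_rate(activity, dpk_radial, voxel_mm, half_width=15):
        # Build the 3-D kernel on the voxel grid from the radial DPK.
        ax = np.arange(-half_width, half_width + 1) * voxel_mm
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        r = np.sqrt(X ** 2 + Y ** 2 + Z ** 2)
        kernel = dpk_radial(np.maximum(r, voxel_mm / 2))  # soften the r = 0 voxel
        # FFT-based 3-D convolution of activity with the dose point kernel.
        return fftconvolve(activity, kernel, mode="same")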
A self-calibrated angularly continuous 2D GRAPPA kernel for propeller trajectories
Skare, Stefan; Newbould, Rexford D; Nordell, Anders; Holdsworth, Samantha J; Bammer, Roland
2008-01-01
The k-space readout of propeller-type sequences may be accelerated by the use of parallel imaging (PI). For PROPELLER, the main benefits are reduced blurring due to T2 decay and SAR reduction, while for EPI-based propeller acquisitions such as Turbo-PROP and SAP-EPI, the faster k-space traversal alleviates geometric distortions. In this work, the feasibility of calculating a 2D GRAPPA kernel on only the undersampled propeller blades themselves is explored, using the matching orthogonal undersampled blade. It is shown that the GRAPPA kernel varies slowly across blades, therefore an angularly continuous 2D GRAPPA kernel is proposed, in which the angular variation of the weights is parameterized. This new angularly continuous kernel formulation greatly increases the numerical stability of the GRAPPA weight estimation, allowing the generation of fully sampled diagnostic quality images using only the undersampled propeller data. PMID:19025911
Selecting good regions to deblur via relative total variation
NASA Astrophysics Data System (ADS)
Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong
2018-03-01
Image deblurring estimates the blur kernel and restores the latent image. It is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information improves the accuracy of the estimated kernel. Good regions to deblur are usually chosen by experts or found by trial and error. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it lies in a structure region, after which we sample the image in an overlapping way. Finally, the sampled region that contains the most structure pixels is selected as the best region to deblur. Both qualitative and quantitative experiments show that our proposed method helps estimate the kernel accurately.
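A sketch of an RTV-style structure map under stated assumptions: on structure edges the windowed gradients agree in direction, so the absolute windowed mean approaches the windowed mean of absolute values, while texture and noise cancel; the window size and epsilon are illustrative:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def rtv_structure_map(img, win=9, eps=1e-3):
        gx, gy = np.gradient(img.astype(float))
        # Windowed total variation (D) and windowed inherent variation (L).
        D = uniform_filter(np.abs(gx), win) + uniform_filter(np.abs(gy), win)
        L = np.abs(uniform_filter(gx, win)) + np.abs(uniform_filter(gy, win))
        return L / (D + eps)  # near 1 on structure, near 0 on texture/flat areas

Thresholding this map and counting structure pixels inside each overlapping candidate window then picks the best region for kernel estimation.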
Norlida, H M; Md Ali, A R; Muhadhir, I
1996-01-01
Palm oil (PO; iodine value = 52), palm stearin (POs1, i.v. = 32; POs2, i.v. = 40) and palm kernel oil (PKO; i.v. = 17) were blended in ternary systems. The blends were then studied for physical properties such as melting point (m.p.), solid fat content (SFC), and cooling curve. Results showed that palm stearin increased the blends' melting point while palm kernel oil reduced it. To produce table margarine with a melting point below 40 degrees C, POs1 should be added at a level of < or = 16% and POs2 at a level of < or = 20%. At 10 degrees C, a eutectic interaction occurs between PO and PKO, reaching its maximum at about a 60:40 blending ratio. Within the eutectic region, to keep the SFC at 10 degrees C at < or = 50%, POs1 may be added at a level of < or = 7% and POs2 at a level of < or = 12%. The addition of palm stearin increased the blends' solidification Tmin and Tmax values, while PKO reduced them. Blends containing a high amount of palm stearin showed melting points and cooling curves quite similar to those of pastry margarine.
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To cope effectively with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K
2017-10-17
Quantitative computed tomography has been proposed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways-fall-on-the-hip loading configuration. Significant differences were observed between the 2 reconstruction kernels for all outcome measures except integral bone volume. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) than for images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) than in images reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom.
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties spatially according to the depicted organ structure, especially in the lung. We therefore compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with the hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures, rating the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and the abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions; hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.
Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe
2018-02-19
Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel (or its normalized form) with one of the pairwise kernels by plugging it in as the base kernel. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We then evaluated our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, previously the best existing method. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles, and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
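As a concrete illustration of how a base kernel is plugged into a pairwise kernel, the sketch below combines a normalized Min kernel with the Tensor Product Pairwise Kernel, K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c). This is a minimal reading of the construction; the toy feature vectors are hypothetical stand-ins for the paper's PPI, domain, and profile features.

```python
import numpy as np

def min_kernel(x, y):
    """Min kernel: sum of element-wise minima of two feature vectors."""
    return np.sum(np.minimum(x, y))

def normalized_min_kernel(x, y):
    """Cosine-style normalization of the Min kernel."""
    return min_kernel(x, y) / np.sqrt(min_kernel(x, x) * min_kernel(y, y))

def tppk(k, a, b, c, d):
    """Tensor Product Pairwise Kernel: compares the pair (a, b) with the
    pair (c, d) using base kernel k, symmetrized over pair orderings."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

# Hypothetical feature vectors (e.g., domain or phylogenetic profile counts)
p1, p2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])
q1, q2 = np.array([1.0, 1.0, 0.0]), np.array([2.0, 0.0, 1.0])
print(tppk(normalized_min_kernel, p1, p2, q1, q2))
```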
Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan
2018-02-01
Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre-wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an indispensable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
Mathematical inference in one point microrheology
NASA Astrophysics Data System (ADS)
Hohenegger, Christel; McKinley, Scott
2016-11-01
Pioneered by the work of Mason and Weitz, one point passive microrheology has been successfully applied to obtaining estimates of the loss and storage modulus of viscoelastic fluids when the mean-square displacement obeys a local power law. Using numerical simulations of a fluctuating viscoelastic fluid model, we study the problem of recovering the mechanical parameters of the fluid's memory kernel from statistics such as mean-square displacements and increment autocorrelation functions. Seeking a better understanding of the influence of the assumptions made in the inversion process, we mathematically quantify the uncertainty in traditional one point microrheology for simulated data and demonstrate that a large family of memory kernels yields the same statistical signature. We consider simulated data obtained both from a full viscoelastic fluid simulation of the unsteady Stokes equations with fluctuations and from a Generalized Langevin Equation for the particle's motion described by the same memory kernel. From the theory of inverse problems, we propose an alternative method that can be used to recover information about the loss and storage modulus and discuss its limitations and uncertainties. NSF-DMS 1412998.
NASA Astrophysics Data System (ADS)
Lindemer, T. B.; Voit, S. L.; Silva, C. M.; Besmann, T. M.; Hunt, R. D.
2014-05-01
The US Department of Energy is developing a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with uranium nitride (UN) kernels with diameters near 825 μm. This effort explores factors involved in the conversion of uranium oxide-carbon microspheres into UN kernels. An analysis of previous studies with sufficient experimental details is provided. Thermodynamic calculations were made to predict pressures of carbon monoxide and other relevant gases for several reactions that can be involved in the conversion of uranium oxides and carbides into UN. Uranium oxide-carbon microspheres were heated in a microbalance with an attached mass spectrometer to determine details of calcining and carbothermic conversion in argon, nitrogen, and vacuum. A model was derived from experiments on the vacuum conversion to uranium oxide-carbide kernels. UN-containing kernels were fabricated using this vacuum conversion as part of the overall process. Carbonitride kernels of ∼89% of theoretical density were produced along with several observations concerning the different stages of the process.
Weighted Feature Gaussian Kernel SVM for Emotion Recognition
Jia, Qingxuan
2016-01-01
Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function performs well in terms of recognition accuracy. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
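A minimal sketch of a weighted-feature Gaussian kernel, assuming per-subregion recognition rates are normalized into weights; the rates shown are hypothetical, and the paper's exact weighting scheme may differ.

```python
import numpy as np

def weighted_gaussian_kernel(x, y, w, sigma=1.0):
    """Gaussian kernel with per-feature (here, per-subregion) weights w."""
    d2 = np.sum(w * (x - y) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical subregion recognition rates used as weights
rates = np.array([0.62, 0.80, 0.55, 0.71])
w = rates / rates.sum()               # normalize so the weights sum to 1

rng = np.random.default_rng(0)
x, y = rng.random(4), rng.random(4)   # stand-ins for subregion features
print(weighted_gaussian_kernel(x, y, w))
```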
A fast and objective multidimensional kernel density estimation method: fastKDE
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...
2016-03-07
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher-fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples takes only 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
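For readers without fastKDE at hand, the sketch below shows a similar workflow (bivariate KDE plus a conditional PDF) using SciPy's gaussian_kde, which picks its bandwidth with Scott's rule rather than the Bernacchia-Pigolotti self-consistent estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Bivariate sample with strong covariance
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.8 * x + 0.6 * rng.normal(size=100_000)

kde = gaussian_kde(np.vstack([x, y]))        # bandwidth via Scott's rule

# Conditional PDF p(y | x = 1), normalized numerically on a grid
ys = np.linspace(-4.0, 4.0, 81)
joint = kde(np.vstack([np.ones_like(ys), ys]))
cond = joint / (joint.sum() * (ys[1] - ys[0]))
```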
A shortest-path graph kernel for estimating gene product semantic similarity.
Alvarez, Marco A; Qi, Xiaojun; Yan, Changhui
2011-07-29
Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO) often rely on external resources that are not part of the ontology. Consequently, changes in these external resources, such as biased term distributions caused by shifts in hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e. independent of external knowledge. We present a shortest-path graph kernel (spgk) method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, consisting of all the GO terms annotating it. A shortest-path graph kernel is then used to compute the similarity between two such graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and EC similarity correlation coefficient are used to measure performance, but insignificant when the Pfam similarity correlation coefficient is used. Spgk uses a graph kernel method in polynomial time to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods, with comparable performance.
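A toy version of the idea, reduced to a delta kernel on shortest-path lengths (the actual spgk also compares the labeled endpoints of paths), might look like this:

```python
import networkx as nx
from collections import Counter

def sp_graph_kernel(g1, g2):
    """Toy shortest-path graph kernel: counts matching shortest-path
    lengths between all node pairs of the two graphs (delta kernel)."""
    def sp_hist(g):
        lengths = dict(nx.all_pairs_shortest_path_length(g))
        return Counter(d for row in lengths.values()
                         for d in row.values() if d > 0)
    h1, h2 = sp_hist(g1), sp_hist(g2)
    return sum(h1[d] * h2[d] for d in h1.keys() & h2.keys())

# Induced subgraphs of the GO DAG would replace these toy graphs
g1 = nx.path_graph(4)
g2 = nx.cycle_graph(5)
print(sp_graph_kernel(g1, g2))
```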
Nawaz, Malik A; Gaiani, Claire; Fukai, Shu; Bhandari, Bhesh
2016-12-01
The objectives of this study were to evaluate the ability of X-ray photoelectron spectroscopy (XPS) to differentiate rice macromolecules and to calculate the surface composition of rice kernels and flours. The surface composition of uncooked kernels and flours of the two selected rice varieties, Thadokkham-11 (TDK11) and Doongara (DG), demonstrated an over-expression of lipids and proteins and an under-expression of starch compared to the bulk composition. The results of the study showed that XPS was able to differentiate rice polysaccharides (mainly starch), proteins, and lipids in uncooked rice kernels and flours. Nevertheless, it was unable to distinguish components in cooked rice samples, possibly due to complex interactions between gelatinized starch, denatured proteins, and lipids. High-resolution imaging methods (Scanning Electron Microscopy and Confocal Laser Scanning Microscopy) were employed to obtain complementary information about the properties and location of starch, proteins, and lipids in rice kernels and flours.
On the interpretation of kernels - Computer simulation of responses to impulse pairs
NASA Technical Reports Server (NTRS)
Hung, G.; Stark, L.; Eykhoff, P.
1983-01-01
A method is presented for using a unit impulse response and responses to impulse pairs of variable separation to calculate the second-degree kernels of a quadratic system. A quadratic system may be built from simple linear terms of known dynamics and a multiplier. Computer simulation results on quadratic systems with building elements of various time constants indicate that the larger time constant term before multiplication dominates the envelope of the off-diagonal kernel curves as these move perpendicular to and away from the main diagonal. The smaller time constant term before multiplication combines with the effect of the time constant after multiplication to dominate the kernel curves in the direction of the second-degree impulse response, i.e., parallel to the main diagonal. Such insights may be helpful in recognizing essential aspects of (second-degree) kernels; they may be used in simplifying the model structure and, perhaps, add to the physical/physiological understanding of the underlying processes.
A solution to coupled Dyson-Schwinger equations for gluons and ghosts in Landau gauge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Smekal, L.; Alkofer, R.; Hauck, A.
1998-07-20
A truncation scheme for the Dyson-Schwinger equations of QCD in Landau gauge is presented which implements the Slavnov-Taylor identities for the 3-point vertex functions. Neglecting contributions from 4-point correlations such as the 4-gluon vertex function and irreducible scattering kernels, a closed system of equations for the propagators is obtained. For the pure gauge theory without quarks this system of equations for the propagators of gluons and ghosts is solved in an approximation which allows for an analytic discussion of its solutions in the infrared: The gluon propagator is shown to vanish for small spacelike momenta whereas the ghost propagator is found to be infrared enhanced. The running coupling of the non-perturbative subtraction scheme approaches an infrared stable fixed point at a critical value of the coupling α_c ≈ 9.5. The gluon propagator is shown to have no Lehmann representation. The results for the propagators obtained here compare favorably with recent lattice calculations.
Scanning Apollo Flight Films and Reconstructing CSM Trajectories
NASA Astrophysics Data System (ADS)
Speyerer, E.; Robinson, M. S.; Grunsfeld, J. M.; Locke, S. D.; White, M.
2006-12-01
Over thirty years ago, the astronauts of the Apollo program made the journey from the Earth to the Moon and back. To record their historic voyages and collect scientific observations, many thousands of photographs were acquired with handheld and automated cameras. After returning to Earth, these films were developed and stored at the film archive at Johnson Space Center (JSC), where they still reside. Due to the historical significance of the original flight films, typically only duplicate (2nd or 3rd generation) film products are studied and used to make prints. To allow full access to the original flight films for both researchers and the general public, JSC and Arizona State University are scanning the films and creating an online digital archive. A Leica photogrammetric scanner is being used to ensure geometric and radiometric fidelity. Scanning resolution will preserve the grain of the film. Color frames are being scanned and archived as 48-bit pixels to ensure capture of the full dynamic range of the film (16 bit for B&W). The raw scans will consist of 70 terabytes of data (estimated counts: 10,000 B&W Hasselblad, 10,000 color Hasselblad, 10,000 metric, 4,500 panoramic, and 620 35-mm frames). All the scanned films will be made available for download through a searchable database. Special tools are being developed to locate images based on various search parameters. To geolocate metric and panoramic frames acquired during Apollos 15-17, prototype SPICE kernels are being generated from existing photographic support data by entering state vectors and timestamps from multiple points throughout each orbit into the NAIF toolkit to create a type 9 Spacecraft and Planet Ephemeris Kernel (SPK), a nadir-pointing C-matrix Kernel (CK), and a Spacecraft Clock Kernel (SCLK). These SPICE kernels, in addition to the Instrument Kernel (IK) and Frames Kernel (FK) that are also under development, will be archived along with the scanned images. From the generated kernels, several IDL programs have been designed to display orbital tracks, produce footprint plots, and create image projections. Using the output from these SPICE-based programs enables accurate geolocating of SIM bay photography as well as providing potential data for lunar gravitational studies.
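Once such kernels are archived, consumers could load them with NAIF's toolkit. A hedged sketch using the spiceypy wrapper follows; the meta-kernel file name and the CSM NAIF ID of -915 are assumptions for illustration only.

```python
import spiceypy as spice

# Load a meta-kernel listing the generated SPK, CK, SCLK, IK and FK files
# ("apollo15.tm" is a hypothetical file name).
spice.furnsh("apollo15.tm")

# Ephemeris time for a moment during the Apollo 15 orbital phase
et = spice.str2et("1971-07-31T12:00:00")

# Spacecraft state relative to the Moon in the J2000 frame; the target ID
# would be whatever the generated SPK assigns (here assumed to be -915).
state, lt = spice.spkezr("-915", et, "J2000", "NONE", "MOON")
print(state[:3])   # position (km); state[3:] is velocity (km/s)

spice.kclear()
```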
A new discrete dipole kernel for quantitative susceptibility mapping.
Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian
2018-09-01
Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel can be incorporated straightforwardly into existing QSM routines.
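For reference, the continuous dipole kernel that the discrete formulation is contrasted against is D(k) = 1/3 - k_z^2/|k|^2. A minimal NumPy sketch of the forward model follows; a discrete variant would replace the k^2 terms with eigenvalues of discrete difference operators, which is not shown here.

```python
import numpy as np

def continuous_dipole_kernel(shape, voxel=(1.0, 1.0, 1.0)):
    """Continuous-Fourier dipole kernel D = 1/3 - kz^2 / |k|^2 on the
    k-space grid implied by the FFT of a volume of the given shape."""
    ks = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel)]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[0, 0, 0] = 0.0              # remove the undefined DC term
    return d

# Forward model: field perturbation from a toy susceptibility map chi
chi = np.zeros((64, 64, 64))
chi[28:36, 28:36, 28:36] = 1.0    # small susceptibility inclusion
field = np.real(np.fft.ifftn(
    continuous_dipole_kernel(chi.shape) * np.fft.fftn(chi)))
```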
Fruit position within the canopy affects kernel lipid composition of hazelnuts.
Pannico, Antonio; Cirillo, Chiara; Giaccone, Matteo; Scognamiglio, Pasquale; Romano, Raffaele; Caporaso, Nicola; Sacchi, Raffaele; Basile, Boris
2017-11-01
The aim of this research was to study the variability in kernel composition within the canopy of hazelnut trees. Kernel fresh and dry weight increased linearly with fruit height above the ground. Fat content decreased, while protein and ash content increased, from the bottom to the top layers of the canopy. The level of unsaturation of fatty acids decreased from the bottom to the top of the canopy. Thus, the kernels located in the bottom layers of the canopy appear to be more interesting from a nutritional point of view, but their lipids may be more exposed to oxidation. The content of different phytosterols increased progressively from bottom to top canopy layers. Most of these effects correlated with the pattern of light distribution inside the canopy. The results of this study indicate that fruit position within the canopy is an important factor in determining hazelnut kernel growth and composition.
Brownian motion of a nano-colloidal particle: the role of the solvent.
Torres-Carbajal, Alexis; Herrera-Velarde, Salvador; Castañeda-Priego, Ramón
2015-07-15
Brownian motion is a feature of colloidal particles immersed in a liquid-like environment. Usually, it can be described by means of the generalised Langevin equation (GLE) within the framework of the Mori theory. In principle, all quantities that appear in the GLE can be calculated from the molecular information of the whole system, i.e., colloids and solvent molecules. In this work, by means of extensive Molecular Dynamics simulations, we study the effects of the microscopic details and the thermodynamic state of the solvent on the movement of a single nano-colloid. In particular, we consider a two-dimensional model system in which the mass and size of the colloid are two and one orders of magnitude, respectively, larger than those associated with the solvent molecules. The latter interact via a Lennard-Jones-type potential to tune the nature of the solvent, i.e., it can be either repulsive or attractive. We choose the linear momentum of the Brownian particle as the observable of interest in order to fully describe the Brownian motion within the Mori framework. We particularly focus on the colloid diffusion at different solvent densities and two temperature regimes: high and low (near the critical point) temperatures. To reach our goal, we have rewritten the GLE as a Volterra integral equation of the second kind in order to compute the memory kernel in real space. With this kernel, we evaluate the momentum-fluctuating force correlation function, which is of particular relevance since it allows us to establish when the stationarity condition has been reached. Our findings show that even at high temperatures, the details of the attractive interaction potential among solvent molecules induce important changes in the colloid dynamics. Additionally, near the critical point the dynamical scenario becomes more complex; all the correlation functions decay slowly in an extended time window, yet the memory kernel seems to be a function of the solvent density only. Thus, the explicit inclusion of the solvent in the description of Brownian motion allows us to better understand the behaviour of the memory kernel at thermodynamic states near the critical region without any further approximation. This information is useful for elaborating more realistic descriptions of Brownian motion that take into account the particular details of the host medium.
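A minimal sketch of the kernel-extraction step, assuming the GLE is recast so that a correlation function C(t) satisfies C'(t) = -∫₀ᵗ M(t-s) C(s) ds. The trapezoidal Volterra solver below recovers M(t) from a sampled C(t); this is a generic route, not the paper's exact momentum-based procedure.

```python
import numpy as np

def memory_kernel_from_corr(c, dt):
    """Recover the memory kernel M(t) from a correlation function C(t)
    by solving C'(t) = -int_0^t M(t-s) C(s) ds, a Volterra equation of
    the second kind, with a trapezoidal discretization."""
    n = len(c)
    dc = np.gradient(c, dt)                           # C'(t)
    m = np.zeros(n)
    m[0] = -(c[2] - 2.0*c[1] + c[0]) / dt**2 / c[0]   # M(0) = -C''(0)/C(0)
    for i in range(1, n):
        # interior trapezoid terms sum_{j=1}^{i-1} M(t_i - t_j) C(t_j),
        # plus the known endpoint term (1/2) M(0) C(t_i)
        conv = np.dot(m[i-1:0:-1], c[1:i]) + 0.5 * m[0] * c[i]
        m[i] = (-dc[i] / dt - conv) / (0.5 * c[0])
    return m

# Demonstration on a toy Gaussian-shaped correlation function
t = np.arange(0.0, 10.0, 0.01)
c = np.exp(-0.5 * t**2)
m = memory_kernel_from_corr(c, 0.01)
```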
Three-Dimensional Sensitivity Kernels of Z/H Amplitude Ratios of Surface and Body Waves
NASA Astrophysics Data System (ADS)
Bao, X.; Shen, Y.
2017-12-01
The ellipticity of Rayleigh wave particle motion, or Z/H amplitude ratio, has received increasing attention in inversion for shallow Earth structures. Previous studies of the Z/H ratio assumed one-dimensional (1D) velocity structures beneath the receiver, ignoring the effects of three-dimensional (3D) heterogeneities on wave amplitudes. This simplification may introduce bias in the resulting models. Here we present 3D sensitivity kernels of the Z/H ratio to Vs, Vp, and density perturbations, based on finite-difference modeling of wave propagation in 3D structures and the scattering-integral method. Our full-wave approach overcomes two main issues in previous studies of Rayleigh wave ellipticity: (1) the finite-frequency effects of wave propagation in 3D Earth structures, and (2) isolation of the fundamental mode Rayleigh waves from Rayleigh wave overtones and converted Love waves. In contrast to the 1D depth sensitivity kernels in previous studies, our 3D sensitivity kernels exhibit patterns that vary with azimuths and distances to the receiver. The laterally-summed 3D sensitivity kernels and 1D depth sensitivity kernels, based on the same homogeneous reference model, are nearly identical with small differences that are attributable to the single period of the 1D kernels and a finite period range of the 3D kernels. We further verify the 3D sensitivity kernels by comparing the predictions from the kernels with the measurements from numerical simulations of wave propagation for models with various small-scale perturbations. We also calculate and verify the amplitude kernels for P waves. This study shows that both Rayleigh and body wave Z/H ratios provide vertical and lateral constraints on the structure near the receiver. With seismic arrays, the 3D kernels afford a powerful tool to use the Z/H ratios to obtain accurate and high-resolution Earth models.
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
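A minimal sketch of the focal-plane operation itself: one small-kernel convolution per frame. The 3 × 3 kernel values here are ad hoc placeholders; in the paper they would come from the minimum mean-square-error derivation for the end-to-end imaging system.

```python
import numpy as np
from scipy.ndimage import convolve

# Ad hoc sharpening-like restoration kernel (placeholder values)
k = np.array([[ 0.0, -0.5,  0.0],
              [-0.5,  3.0, -0.5],
              [ 0.0, -0.5,  0.0]])
k /= k.sum()                       # preserve mean image brightness

image = np.random.rand(256, 256)   # stand-in for an acquired frame
restored = convolve(image, k, mode="nearest")
```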
Many Molecular Properties from One Kernel in Chemical Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
We introduce property-independent kernels for machine learning modeling of arbitrarily many molecular properties. The kernels encode molecular structures for training sets of varying size, as well as similarity measures sufficiently diffuse in chemical space to sample over all training molecules. Given corresponding molecular reference properties, they enable the instantaneous generation of ML models which can systematically be improved through the addition of more data. This idea is exemplified for single-kernel-based modeling of internal energy, enthalpy, free energy, heat capacity, polarizability, electronic spread, zero-point vibrational energy, energies of frontier orbitals, HOMO-LUMO gap, and the highest fundamental vibrational wavenumber. Models of these properties are trained and tested using 112k organic molecules of similar size. The resulting models are discussed, as well as the kernels' use for generating and using other property models.
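The single-kernel, many-property idea reduces to kernel ridge regression with a matrix-valued right-hand side: one kernel matrix and one factorization serve every property. A generic sketch follows, with random features standing in for the paper's molecular descriptors and choice of kernel.

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, sigma):
    """Pairwise Gaussian kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

# X: molecular descriptors; Y: several properties per molecule (columns)
rng = np.random.default_rng(1)
X_train, Y_train = rng.normal(size=(200, 30)), rng.normal(size=(200, 5))
X_test = rng.normal(size=(10, 30))

K = gaussian_kernel_matrix(X_train, X_train, sigma=5.0)
# One kernel matrix, one solve, arbitrarily many properties at once:
alpha = np.linalg.solve(K + 1e-8 * np.eye(len(K)), Y_train)  # (n, n_props)
Y_pred = gaussian_kernel_matrix(X_test, X_train, 5.0) @ alpha
```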
NASA Technical Reports Server (NTRS)
Nonhebel, H. M.; Bandurski, R. S. (Principal Investigator)
1986-01-01
Oxindole-3-acetic acid is the principal catabolite of indole-3-acetic acid in Zea mays seedlings. In this paper, measurements of the turnover of oxindole-3-acetic acid are presented and used to calculate the rate of indole-3-acetic acid oxidation. [3H]Oxindole-3-acetic acid was applied to the endosperm of Zea mays seedlings and allowed to equilibrate for 24 h before the start of the experiment. The subsequent decrease in its specific activity was used to calculate the turnover rate. The average half-life of oxindole-3-acetic acid in the shoots was found to be 30 h, while that in the kernels was 35 h. Using previously published values of the pool sizes of oxindole-3-acetic acid in shoots and kernels from seedlings of the same age and variety, grown under the same conditions, the rate of indole-3-acetic acid oxidation was calculated to be 1.1 pmol plant^-1 h^-1 in the shoots and 7.1 pmol plant^-1 h^-1 in the kernels.
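Under first-order turnover, the oxidation rate follows from rate = pool × ln 2 / t½. The sketch below back-calculates the pool sizes implied by the reported rates and half-lives, purely as an arithmetic illustration (the actual pool sizes come from the previously published values the abstract cites).

```python
import math

def turnover_rate(pool_pmol, half_life_h):
    """First-order turnover: rate = pool * ln(2) / t_half."""
    return pool_pmol * math.log(2) / half_life_h

# Implied pools: shoots ~47.6 pmol (1.1 pmol/h at t_half = 30 h),
# kernels ~358.5 pmol (7.1 pmol/h at t_half = 35 h)
for rate, t_half in [(1.1, 30.0), (7.1, 35.0)]:
    print(rate * t_half / math.log(2))
```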
DenInv3D: a geophysical software for three-dimensional density inversion of gravity field data
NASA Astrophysics Data System (ADS)
Tian, Yu; Ke, Xiaoping; Wang, Yong
2018-04-01
This paper presents a three-dimensional density inversion software package called DenInv3D that operates on gravity and gravity gradient data. The software performs inversion model construction, kernel function calculation, and the inversion computation itself using an improved preconditioned conjugate gradient (PCG) algorithm. In the PCG algorithm, because empirical parameters such as the Lagrange multiplier are uncertain, we use the inflection point of the L-curve as the regularisation parameter. The software can construct unequally spaced grids and perform inversions on them, which enables changing the resolution of the inversion results at different depths. Through inversion of airborne gradiometry data over the Australian Kauring test site, we discovered that anomalous blocks of different sizes are present within the study area in addition to the central anomalies. DenInv3D can be downloaded from http://159.226.162.30.
NASA Astrophysics Data System (ADS)
Zhang, Yunlu; Yan, Lei; Liou, Frank
2018-05-01
The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.
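The ORB extraction-and-matching front end of such a scheme can be sketched with OpenCV; the file names are hypothetical, and the FG-GMM outlier-rejection step is only indicated, not implemented.

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
cur = cv2.imread("deformed.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
k1, d1 = orb.detectAndCompute(ref, None)
k2, d2 = orb.detectAndCompute(cur, None)

# Hamming-distance brute-force matching with cross-check
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(d1, d2)

# Matched coordinates yield a sparse displacement field; outlier
# rejection (the paper's FG-GMM step) would be applied to these pairs.
src = np.float32([k1[m.queryIdx].pt for m in matches])
dst = np.float32([k2[m.trainIdx].pt for m in matches])
disp = dst - src
```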
DOE Office of Scientific and Technical Information (OSTI.GOV)
Botta, F.; Mairani, A.; Battistoni, G.
Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. Dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous slowing down approximation range) and within 0.8·X_90 and 0.9·X_90 for isotopes (X_90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X_90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, which can be referred to the different simulation algorithms. When considering the beta spectra, discrepancies reduce notably: within 0.9·X_90, fluka and penelope differ by less than 1% in water and less than 2% in bone with any of the isotopes considered here. Complete data of fluka DPKs are given as Supplementary Material as a tool to perform dosimetry by analytical point kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.
Time delay and distance measurement
NASA Technical Reports Server (NTRS)
Abshire, James B. (Inventor); Sun, Xiaoli (Inventor)
2011-01-01
A method for measuring time delay and distance may include providing an electromagnetic radiation carrier frequency and modulating one or more of the amplitude, phase, frequency, polarization, and pointing angle of the carrier with a return-to-zero (RZ) pseudo-random noise (PN) code. The RZ PN code may have a constant bit period and a pulse duration that is less than the bit period. A receiver may detect the electromagnetic radiation and calculate the scattering profile versus time (or range) by computing a cross-correlation function between the recorded received signal and a three-state RZ PN code kernel in the receiver. The method also may be used for pulse-delay-time (i.e., PPM) communications.
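A hedged sketch of the receiver correlation: an RZ pulse train is generated from a PN sequence, delayed, and cross-correlated with a three-state (+1/0/-1) kernel via the FFT. The exact state assignment of the patented kernel is an assumption here.

```python
import numpy as np

rng = np.random.default_rng(7)
pn = rng.integers(0, 2, 1023)               # stand-in for a PN sequence
sps = 8                                     # samples per bit period

# RZ modulation: a short pulse at the start of each '1' bit
tx = np.zeros(len(pn) * sps)
tx[np.nonzero(pn)[0] * sps] = 1.0           # pulse duration: 1 sample here

# Three-state receiver kernel: +1 in the pulse slot of a '1' bit,
# -1 in the pulse slot of a '0' bit, 0 elsewhere (assumed assignment)
kernel = np.zeros(len(pn) * sps)
kernel[np.nonzero(pn)[0] * sps] = 1.0
kernel[np.nonzero(pn == 0)[0] * sps] = -1.0

delay = 137                                 # unknown range delay in samples
rx = np.roll(tx, delay) + 0.1 * rng.normal(size=tx.size)

# Circular cross-correlation via FFT; the peak index estimates the delay
xc = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(kernel))).real
print(int(np.argmax(xc)))                   # ~137
```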
Determination of aflatoxin risk components for in-shell Brazil nuts.
Vargas, E A; dos Santos, E A; Whitaker, T B; Slate, A B
2011-09-01
A study was conducted on the risk from aflatoxins associated with the kernels and shells of Brazil nuts. Samples were collected from processing plants in Amazonia, Brazil. A total of 54 test samples (40 kg) were taken from 13 in-shell Brazil nut lots ready for market. Each in-shell sample was shelled, and the kernels and shells were sorted into five fractions: good kernels, rotten kernels, good shells with kernel residue, good shells without kernel residue, and rotten shells, and analysed for aflatoxins. The kernel:shell mass ratio (w/w) was 50.2/49.8%. The Brazil nut shell was found to be contaminated with aflatoxin. Rotten nuts were found to be a high-risk fraction for aflatoxin in in-shell Brazil nut lots. Rotten nuts contributed only 4.2% of the sample mass (kg), but 76.6% of the total aflatoxin mass (µg) in the in-shell test sample. The highest correlations were found between the aflatoxin concentration in in-shell Brazil nut samples and the aflatoxin concentration in all defective fractions (R² = 0.97). The aflatoxin mass of all defective fractions (R² = 0.90) as well as that of the rotten nuts (R² = 0.88) were also strongly correlated with the aflatoxin concentration of the in-shell test samples. Process factors of 0.17, 0.16, and 0.24 were calculated to estimate the aflatoxin concentration in the good kernels (edible) and good nuts by measuring the aflatoxin concentration in the in-shell test sample and in all kernels, respectively.
Numerical method for solving the nonlinear four-point boundary value problems
NASA Astrophysics Data System (ADS)
Lin, Yingzhen; Lin, Jinnan
2010-12-01
In this paper, a new reproducing kernel space is constructed skillfully in order to solve a class of nonlinear four-point boundary value problems. The exact solution of the linear problem can be expressed in the form of a series, and the approximate solution of the nonlinear problem is given by an iterative formula. Compared with known investigations, the advantages of our method are that the representation of the exact solution is obtained in a new reproducing kernel Hilbert space and that the accuracy of the numerical computation is higher. We also present a convergence theorem, complexity analysis, and error estimation. The performance of the new method is illustrated with several numerical examples.
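The abstract does not display its equations; a representative nonlinear four-point boundary value problem of the type treated in this literature (an assumption as to the exact form studied) is

```latex
\begin{aligned}
  &u''(x) = f\bigl(x,\, u(x),\, u'(x)\bigr), \qquad 0 < x < 1,\\
  &u(0) = \alpha\, u(\xi), \qquad u(1) = \beta\, u(\eta), \qquad 0 < \xi \le \eta < 1,
\end{aligned}
```

where the two interior points ξ and η give the problem its "four-point" character.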
Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards
2013-01-01
Kernel methods have difficulty scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large matrix, and have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
Pressure estimation from single-snapshot tomographic PIV in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Schneiders, Jan F. G.; Pröbsting, Stefan; Dwight, Richard P.; van Oudheusden, Bas W.; Scarano, Fulvio
2016-04-01
A method is proposed to determine the instantaneous pressure field from a single tomographic PIV velocity snapshot and is applied to a flat-plate turbulent boundary layer. The main concept behind the single-snapshot pressure evaluation method is to approximate the flow acceleration using the vorticity transport equation. The vorticity field calculated from the measured instantaneous velocity is advanced over a single integration time step using the vortex-in-cell (VIC) technique to update the vorticity field, after which the temporal derivative and material derivative of velocity are evaluated. The pressure in the measurement volume is subsequently evaluated by solving a Poisson equation. The procedure is validated considering data from a turbulent boundary layer experiment, obtained with time-resolved tomographic PIV at 10 kHz, where an independent surface pressure fluctuation measurement is made by a microphone. The cross-correlation coefficient of the surface pressure fluctuations calculated by the single-snapshot pressure method with respect to the microphone measurements is calculated and compared to that obtained using time-resolved pressure-from-PIV, which is regarded as benchmark. The single-snapshot procedure returns a cross-correlation comparable to the best result obtained by time-resolved PIV, which uses a nine-point time kernel. When the kernel of the time-resolved approach is reduced to three measurements, the single-snapshot method yields approximately 30 % higher correlation. Use of the method should be cautioned when the contributions to fluctuating pressure from outside the measurement volume are significant. The study illustrates the potential for simplifying the hardware configurations (e.g. high-speed PIV or dual PIV) required to determine instantaneous pressure from tomographic PIV.
Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu
2018-01-01
Using different methods for laying out a graph can lead to very different visual appearances, from which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework for designing graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
Electron beam lithographic modeling assisted by artificial intelligence technology
NASA Astrophysics Data System (ADS)
Nakayamada, Noriaki; Nishimura, Rieko; Miura, Satoru; Nomura, Haruyuki; Kamikubo, Takashi
2017-07-01
We propose a new concept of tuning a point-spread function (a "kernel" function) in the modeling of electron beam lithography using a machine learning scheme. Normally, in work on artificial intelligence, researchers focus on the output of a neural network, such as the success ratio in image recognition or improved production yield. In this work, we focus instead on the weights connecting the nodes in a convolutional neural network, which are naturally the fractions of a point-spread function, and take those weighted fractions out after learning to be utilized as a tuned kernel. Proof-of-concept of the kernel tuning has been demonstrated using the examples of proximity effect correction with a 2-layer network and charging effect correction with a 3-layer network. This type of new tuning method can give researchers more insight for building better models, though it may still be too early to deploy it to production to deliver better critical dimension (CD) and positional accuracy almost instantly.
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are not linearly separable, SVM uses kernel tricks to transform the data into a linearly separable form in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear kernel, polynomial, radial basis function (RBF), and sigmoid. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy has been upgraded from the fixed-kernel baselines of linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, and sigmoid: 78.70%. However, for bigger data sizes, this method is not practical because it takes a lot of time.
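A minimal sketch of the approach, using scikit-learn's SVC with cross-validation as the fitness function and a truncation-selection GA over (C, gamma) in log space; the population size, mutation scale, and generation count are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

def fitness(params, X, y):
    C, gamma = params
    return cross_val_score(SVC(C=C, gamma=gamma, kernel="rbf"),
                           X, y, cv=5).mean()

def tiny_ga(X, y, pop_size=20, gens=15, rng=np.random.default_rng(0)):
    """Truncation-selection GA with Gaussian mutation over log10(C),
    log10(gamma); a minimal stand-in for the paper's GA."""
    pop = rng.uniform([-2, -4], [4, 1], size=(pop_size, 2))  # log10 scale
    for _ in range(gens):
        scores = np.array([fitness(10.0**p, X, y) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]     # keep best half
        children = elite + rng.normal(scale=0.3, size=elite.shape)
        pop = np.vstack([elite, children])
    best = pop[np.argmax([fitness(10.0**p, X, y) for p in pop])]
    return 10.0**best

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
print(tiny_ga(X, y))   # optimized (C, gamma)
```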
Bao, Ande; Zhao, Xia; Phillips, William T; Woolley, F Ross; Otto, Randal A; Goins, Beth; Hevezi, James M
2005-01-01
Radioimmunotherapy of hematopoietic cancers and micrometastases has been shown to have significant therapeutic benefit. The treatment of solid tumors with radionuclide therapy has been less successful. Previous investigations of intratumoral activity distribution and studies on intratumoral drug delivery suggest that a probable reason for the disappointing results in solid tumor treatment is nonuniform intratumoral distribution coupled with restricted intratumoral drug penetrance, which inhibits antineoplastic agents from reaching the tumor's center. This paper describes a nonuniform intratumoral activity distribution characterized by limited radiolabeled tracer diffusion from the tumor surface to the tumor center. This activity was simulated using techniques that allowed the absorbed dose distributions to be estimated for different intratumoral diffusion capabilities and calculated for tumors of varying diameters. The influences of these absorbed dose distributions on solid tumor radionuclide therapy are also discussed. The absorbed dose distribution was calculated using the dose point kernel method, applying a three-dimensional (3D) convolution between a dose rate kernel function and an activity distribution function. These functions were incorporated into 3D matrices with voxels measuring 0.10 × 0.10 × 0.10 mm³. A fast Fourier transform (FFT) and multiplication in the frequency domain, followed by an inverse FFT (iFFT), were used to carry out this phase of the dose calculation. The absorbed dose distributions for tumors of 1, 3, 5, 10, and 15 mm in diameter were studied. Using the therapeutic radionuclides 131I, 186Re, 188Re, and 90Y, the total average dose, center dose, and surface dose for each of the different tumor diameters were reported. The absorbed dose in the nearby normal tissue was also evaluated. When tumor diameters exceed 15 mm, a much lower tumor center dose is delivered compared with tumors between 3 and 5 mm in diameter. Based on these findings, the use of higher beta-energy radionuclides, such as 188Re and 90Y, is more effective in delivering a higher absorbed dose to the tumor center for tumor diameters around 10 mm.
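The FFT-based convolution step can be sketched in a few lines. The kernel below is a placeholder with a plausible exp(-r)/r² shape, not a tabulated dose point kernel for any of the isotopes studied; the activity map mimics the limited diffusion from surface to center described above.

```python
import numpy as np

def dose_from_activity(activity, kernel):
    """3D dose = activity convolved with a dose-point-kernel via FFT.
    Both arrays share the same voxel grid; the kernel holds the dose
    rate around a point source, centered at (0,0,0) in wrap-around
    (FFT) order."""
    return np.real(np.fft.ifftn(np.fft.fftn(activity) * np.fft.fftn(kernel)))

# Toy spherical "tumor" with activity falling off from surface to center
n = 64
x, y, z = np.indices((n, n, n)) - n // 2
r = np.sqrt(x**2 + y**2 + z**2)
activity = np.where(r < 20, np.exp(-(20 - r) / 5.0), 0.0)

# Placeholder isotropic kernel ~ exp(-r)/r^2 in wrap-around order
f = np.fft.fftfreq(n) * n
rk = np.sqrt(f[:, None, None]**2 + f[None, :, None]**2 + f[None, None, :]**2)
rk[0, 0, 0] = 1.0                       # avoid division by zero at source
kernel = np.exp(-rk / 10.0) / rk**2
dose = dose_from_activity(activity, kernel)
```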
Multiple kernel SVR based on the MRE for remote sensing water depth fusion detection
NASA Astrophysics Data System (ADS)
Wang, Jinjin; Ma, Yi; Zhang, Jingyu
2018-03-01
Remote sensing is an important means of water depth detection in coastal shallow waters and reefs. Support vector regression (SVR) is a machine learning method widely used for data regression. In this paper, SVR is applied to multispectral remote sensing bathymetry. To address the large errors of single-kernel SVR in shallow-water depth inversion, the mean relative error (MRE) at different water depths is used as a decision fusion factor, and a multi-kernel SVR fusion method based on the MRE is put forward. Taking the North Island of the Xisha Islands in China as the experimental area, comparison experiments with the single-kernel SVR method and the traditional multi-band bathymetric method are carried out. The results show that: 1) over the range of 0 to 25 m, the mean absolute error (MAE) of the multi-kernel SVR fusion method is 1.5 m and the MRE is 13.2%; 2) compared to the four single-kernel SVR methods, the MRE of the fusion method is reduced by 1.2%-3.4%, and compared to the traditional multi-band method, the MRE is reduced by 1.9%; 3) in the 0-5 m depth section, compared to the single-kernel methods and the multi-band method, the MRE of the fusion method is reduced by 13.5% to 44.4%, and the retrieved points cluster more tightly around the y = x line.
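One plausible reading of the MRE-based decision fusion, sketched with scikit-learn SVRs on synthetic data: a preliminary RBF estimate selects a depth segment, and each kernel's prediction is weighted by the inverse of its validation MRE in that segment. The segment boundaries and the weighting rule are assumptions, not the paper's exact scheme.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for multispectral bands (X) and known depths (y)
rng = np.random.default_rng(0)
X = rng.random((400, 4))
y = 1.0 + 24.0 * X[:, 0] + rng.normal(scale=0.5, size=400)   # ~1-25 m
X_train, y_train, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

kernels = ["linear", "poly", "rbf", "sigmoid"]
models = {k: SVR(kernel=k).fit(X_train, y_train) for k in kernels}

segments = [(0.0, 5.0), (5.0, 10.0), (10.0, 25.0)]
def seg_mre(m, lo, hi):
    sel = (y_val >= lo) & (y_val < hi)
    return np.mean(np.abs(m.predict(X_val[sel]) - y_val[sel]) / y_val[sel])

def fused_predict(x):
    # Depth segment from a preliminary (RBF) estimate, then inverse-MRE
    # weighting of the four kernel predictions within that segment
    z0 = float(np.clip(models["rbf"].predict(x[None])[0], 0.0, 24.9))
    lo, hi = next(s for s in segments if s[0] <= z0 < s[1])
    w = np.array([1.0 / (seg_mre(models[k], lo, hi) + 1e-9) for k in kernels])
    p = np.array([models[k].predict(x[None])[0] for k in kernels])
    return float(w @ p / w.sum())

print(fused_predict(X_val[0]), y_val[0])
```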
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
Generation of a novel phase-space-based cylindrical dose kernel for IMRT optimization.
Zhong, Hualiang; Chetty, Indrin J
2012-05-01
Improving dose calculation accuracy is crucial in intensity-modulated radiation therapy (IMRT). We have developed a method for generating a phase-space-based dose kernel for IMRT planning of lung cancer patients. Particle transport in the linear accelerator treatment head of a 21EX, 6 MV photon beam (Varian Medical Systems, Palo Alto, CA) was simulated using the EGSnrc/BEAMnrc code system. The phase space information was recorded under the secondary jaws. Each particle in the phase space file was associated with a beamlet whose index was calculated and saved in the particle's LATCH variable. The DOSXYZnrc code was modified to accumulate the energy deposited by each particle based on its beamlet index. Furthermore, the central axis of each beamlet was calculated from the orientation of all the particles in this beamlet. A cylinder was then defined around the central axis so that only the energy deposited within the cylinder was counted. A look-up table was established for each cylinder during the tallying process. The efficiency and accuracy of the cylindrical beamlet energy deposition approach was evaluated using a treatment plan developed on a simulated lung phantom. Profile and percentage depth doses computed in a water phantom for an open, square field size were within 1.5% of measurements. Dose optimized with the cylindrical dose kernel was found to be within 0.6% of that computed with the nontruncated 3D kernel. The cylindrical truncation reduced optimization time by approximately 80%. A method for generating a phase-space-based dose kernel, using a truncated cylinder for scoring dose, in beamlet-based optimization of lung treatment planning was developed and found to be in good agreement with the standard, nontruncated scoring approach. Compared to previous techniques, our method significantly reduces computational time and memory requirements, which may be useful for Monte-Carlo-based 4D IMRT or IMAT treatment planning.
Transient and asymptotic behaviour of the binary breakage problem
NASA Astrophysics Data System (ADS)
Mantzaris, Nikos V.
2005-06-01
The general binary breakage problem with power-law breakage functions and two families of symmetric and asymmetric breakage kernels is studied in this work. A useful transformation leads to an equation that predicts self-similar solutions in its asymptotic limit and offers explicit knowledge of the mean size and particle density at each point in dimensionless time. A novel moving boundary algorithm in the transformed coordinate system is developed, allowing the accurate prediction of the full transient behaviour of the system from the initial condition up to the point where self-similarity is achieved, and beyond if necessary. The numerical algorithm is very rapid and its results are in excellent agreement with known analytical solutions. In the case of the symmetric breakage kernels only unimodal, self-similar number density functions are obtained asymptotically for all parameter values and independent of the initial conditions, while in the case of asymmetric breakage kernels, bimodality appears for high degrees of asymmetry and sharp breakage functions. For symmetric and discrete breakage kernels, self-similarity is not achieved. The solution exhibits sustained oscillations with amplitude that depends on the initial condition and the sharpness of the breakage mechanism, while the period is always fixed and equal to ln 2 with respect to dimensionless time.
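For reference, the binary-breakage population balance the study solves can be written in its canonical form (the abstract does not display it; this is the standard equation, with a power-law breakage rate as an assumed concrete choice):

```latex
\frac{\partial n(v,t)}{\partial t}
  \;=\; 2\int_{v}^{\infty} \Gamma(v')\,\beta(v \mid v')\, n(v',t)\, dv'
  \;-\; \Gamma(v)\, n(v,t),
\qquad \Gamma(v) = v^{\gamma},
```

where n(v,t) is the number density, β(v|v') the (symmetric or asymmetric) daughter-size distribution, and the factor 2 reflects that each breakage event produces two fragments.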
Tan, Stéphanie; Soulez, Gilles; Diez Martinez, Patricia; Larrivée, Sandra; Stevens, Louis-Mathieu; Goussard, Yves; Mansour, Samer; Chartrand-Lefebvre, Carl
2016-01-01
Metallic artifacts can result in an artificial thickening of the coronary stent wall, which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of the coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both the orthogonal and the circumference method, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios), as well as visually using a 5-point Likert scale. Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, with either the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p < 0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p < 0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p < 0.001). In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality scores than the standard kernel.
NASA Astrophysics Data System (ADS)
Antoni, Rodolphe; Bourgois, Laurent
2017-12-01
In this work, the calculation of dose distributions in water is evaluated in MCNP6.1 with both the regular condensed-history algorithm, the "detailed electron energy-loss straggling logic", and the newly proposed electron transport algorithm, the "single-event algorithm". Dose point kernels (DPKs) are calculated with monoenergetic electrons of 50, 100, 500, 1000, and 3000 keV for different scoring cell dimensions. A comparison between MCNP6 results and well-validated codes for electron dosimetry, i.e., EGSnrc and Penelope, is performed. When the detailed electron energy-loss straggling logic is used with default settings (down to the cut-off energy of 1 keV), we infer that the depth of the dose peak increases with decreasing thickness of the scoring cell, largely due to combined step-size and boundary-crossing artifacts. This finding is less prominent for the 500 keV, 1 MeV, and 3 MeV dose profiles. With an appropriate number of sub-steps (the ESTEP value in MCNP6), the dose-peak shift is almost completely absent for 50 keV and 100 keV electrons. However, the dose peak is more prominent compared to EGSnrc, and the absorbed dose tends to be underestimated at greater depths, meaning that boundary-crossing artifacts still occur while step-size artifacts are greatly reduced. When the single-event mode is used for the whole transport, we observe good agreement between the reference and calculated profiles for 50 and 100 keV electrons. The remaining artifacts vanish entirely, showing a possible transport treatment for energies of less than a hundred keV, in agreement with the reference for any scoring cell dimension, even though the single-event method was initially intended to support electron transport at energies below 1 keV. Conversely, results for 500 keV, 1 MeV, and 3 MeV show a dramatic discrepancy with the reference curves. These poor results, and thus the current unreliability of the method, are partly due to inappropriate elastic cross-section treatment from the ENDF/B-VI.8 library in those energy ranges. Accordingly, special care has to be taken in choosing settings for calculating electron dose distributions with MCNP6, in particular with regard to dosimetry or nuclear medicine applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindemer, Terrence; Voit, Stewart L; Silva, Chinthaka M
2014-01-01
The U.S. Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with large, dense uranium nitride (UN) kernels. This effort explores many factors involved in using gel-derived uranium oxide-carbon microspheres to make large UN kernels. Analysis of recent studies with sufficient experimental details is provided. Extensive thermodynamic calculations are used to predict carbon monoxide and other pressures for several different reactions that may be involved in the conversion of uranium oxides and carbides to UN. Experimentally, the method for making the gel-derived microspheres is described. These were used in a microbalance with an attached mass spectrometer to determine details of carbothermic conversion in argon, nitrogen, or vacuum. A quantitative model is derived from the experiments for vacuum conversion to a uranium oxide-carbide kernel.
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
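To make the connection concrete: for a Gaussian kernel, the integral of the squared density estimate that defines the quadratic (Renyi order-2) entropy, and hence the FT index, reduces exactly to a pairwise sum of Gaussians with doubled variance, so no numerical integration is required. A minimal univariate sketch of my own, assuming a user-chosen fixed bandwidth h:

```python
import numpy as np

def renyi_quadratic_entropy(x, h):
    """H2 = -log( integral of p_hat(x)^2 dx ) for a Gaussian KDE with
    bandwidth h. For Gaussian kernels the integral reduces exactly to a
    pairwise sum of Gaussians with variance 2*h^2 (the 'information
    potential' underlying the Friedman-Tukey index)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x[:, None] - x[None, :]
    var = 2.0 * h**2
    potential = np.exp(-d**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return -np.log(potential.sum() / n**2)
```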
An Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.
NASA Astrophysics Data System (ADS)
Ho, Chi Ming
1995-01-01
Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of obtaining a better understanding of how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems, and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Because different energy transfer mechanisms are involved in the formation of a laser spark than in that of an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures depend primarily on the spark characteristics, but the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate. By comparing the growth rates of the appropriate mixtures, the positive and negative effects of preferential diffusion and flame stretch on the developing flame are clearly demonstrated.
Observation of a 3D Magnetic Null Point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, P.; Falco, M.; Guglielmino, S. L.
2017-03-10
We describe high-resolution observations of a GOES B-class flare characterized by a circular ribbon at the chromospheric level, corresponding to the network at the photospheric level. We interpret the flare as a consequence of a magnetic reconnection event that occurred at a three-dimensional (3D) coronal null point located above the supergranular cell. The potential field extrapolation of the photospheric magnetic field indicates that the circular chromospheric ribbon is cospatial with the fan footpoints, while the ribbons of the inner and outer spines look like compact kernels. We found new interesting observational aspects that need to be explained by models: (1) a loop corresponding to the outer spine became brighter a few minutes before the onset of the flare; (2) the circular ribbon was formed by several adjacent compact kernels characterized by a size of 1″–2″; (3) the kernels with a stronger intensity emission were located at the outer footpoint of the darker filaments, departing radially from the center of the supergranular cell; (4) these kernels started to brighten sequentially in a clockwise direction; and (5) the site of the 3D null point and the shape of the outer spine were detected by RHESSI in the low-energy channel between 6.0 and 12.0 keV. Taking into account all these features and the length scales of the magnetic systems involved in the event, we argue that the low intensity of the flare may be ascribed to the low amount of magnetic flux and to its symmetric configuration.
NASA Astrophysics Data System (ADS)
Hart, Vern; Burrow, Damon; Li, X. Allen
2017-08-01
A systematic method is presented for determining optimal parameters in variable-kernel deformable image registration of cone beam CT and CT images, in order to improve accuracy and convergence for potential use in online adaptive radiotherapy. Assessed conditions included the noise constant (symmetric force demons), the kernel reduction rate, the kernel reduction percentage, and the kernel adjustment criteria. Four such parameters were tested in conjunction with reductions of 5, 10, 15, 20, 30, and 40%. Noise constants ranged from 1.0 to 1.9 for pelvic images in ten prostate cancer patients. A total of 516 tests were performed and assessed using the structural similarity index. Registration accuracy was plotted as a function of iteration number and a least-squares regression line was calculated, which implied an average improvement of 0.0236% per iteration. This baseline was used to determine if a given set of parameters under- or over-performed. The most accurate parameters within this range were applied to contoured images. The mean Dice similarity coefficient was calculated for bladder, prostate, and rectum with mean values of 98.26%, 97.58%, and 96.73%, respectively; corresponding to improvements of 2.3%, 9.8%, and 1.2% over previously reported values for the same organ contours. This graphical approach to registration analysis could aid in determining optimal parameters for Demons-based algorithms. It also establishes expectation values for convergence rates and could serve as an indicator of non-physical warping, which often occurred in cases >0.6% from the regression line.
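Since contour agreement is summarized above with the Dice similarity coefficient, a minimal reference implementation may be useful; this sketch is mine, not the authors' code, and assumes the organ contours are available as binary voxel masks.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient, 2|A∩B| / (|A|+|B|), between two
    binary masks (e.g., a deformed contour vs. the reference contour)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```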
Introducing etch kernels for efficient pattern sampling and etch bias prediction
NASA Astrophysics Data System (ADS)
Weisbuch, François; Lutich, Andrey; Schatz, Jirka
2018-01-01
Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and less stable. Compact etch models are based on geometrical kernels that compute the litho-etch biases measuring the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to obtaining a robust etch model. This work proposes a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method, applied to contact and line/space layers, shows an improvement in etch model prediction accuracy over a standard etch model. This work emphasizes the importance of the etch kernel definition for characterizing and predicting complex etch effects.
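As a rough sketch of the final step (kernel evaluations at contour points feeding a neural network that predicts etch bias), something like the following could be used; the file names, feature layout, and network size are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical inputs: five kernel evaluations per contour point
# ("internal", "external", "curvature", "Gaussian", "z_profile") and
# the SEM-derived litho-etch bias at the same points.
X = np.load("kernel_features.npy")   # shape (n_points, 5) -- assumed file
y = np.load("etch_bias_nm.npy")      # shape (n_points,)   -- assumed file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out contour points:", model.score(X_te, y_te))
```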
Proteome analysis of the almond kernel (Prunus dulcis).
Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu
2016-08-01
Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and in human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, most of them experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are involved in primary biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); that their main molecular functions are catalytic activity (48.0%), binding (45.4%), and structural molecule activity (11.9%); and that they are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.
Little, C L; Jemmott, W; Surman-Lee, S; Hucklesby, L; de Pinnal, E
2009-04-01
There is little published information on the prevalence of Salmonella in edible nut kernels. A study in early 2008 of edible roasted nut kernels on retail sale in England was undertaken to assess the microbiological safety of this product. A total of 727 nut kernel samples of different varieties were examined. Overall, Salmonella and Escherichia coli were detected in 0.2% and 0.4% of edible roasted nut kernels, respectively. Of the nut varieties examined, Salmonella Havana was detected in 1 (4.0%) sample of pistachio nuts, indicating a risk to health. The United Kingdom Food Standards Agency was immediately informed, and full investigations were undertaken. Further examination established the contamination to be associated with the pistachio kernels and not the partly opened shells. Salmonella was not detected in the other varieties tested (almonds, Brazils, cashews, hazelnuts, macadamia, peanuts, pecans, pine nuts, and walnuts). E. coli was found at low levels (range of 3.6 to 4/g) in walnuts (1.4%), almonds (1.2%), and Brazils (0.5%). The presence of Salmonella is unacceptable in edible nut kernels. Prevention of microbial contamination in these products lies in the application of good agricultural, manufacturing, and storage practices together with a hazard analysis and critical control points system that encompasses all stages of production, processing, and distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal
Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.
Hyperspectral imaging for detection of black tip damage in wheat kernels
NASA Astrophysics Data System (ADS)
Delwiche, Stephen R.; Yang, I.-Chang; Kim, Moon S.
2009-05-01
A feasibility study was conducted on the use of hyperspectral imaging to differentiate sound wheat kernels from those with the fungal condition called black point or black tip. Individual kernels of hard red spring wheat were loaded in indented slots on a blackened machined aluminum plate. Damage conditions, determined by official (USDA) inspection, were either sound (no damage) or damaged by the black tip condition alone. Hyperspectral imaging was separately performed under modes of reflectance from white light illumination and fluorescence from UV light (~380 nm) illumination. By cursory inspection of wavelength images, one fluorescence wavelength (531 nm) was selected for image processing and classification analysis. Results indicated that with this one wavelength alone, classification accuracy can be as high as 95% when kernels are oriented with their dorsal side toward the camera. It is suggested that improvement in classification can be made through the inclusion of multiple wavelength images.
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for the sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
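The measure itself rests on the identity that, for two components recorded from a common source wavelet, the observed u1 convolved with the synthetic v2 should equal u2 convolved with v1 when the model is correct. A minimal sketch for a pair of 1-D traces (my illustration, not the author's code):

```python
import numpy as np

def cross_convolution_misfit(u1, u2, v1, v2):
    """Residual e = u1*v2 - u2*v1 (* = convolution) for observed
    components (u1, u2) and synthetics (v1, v2). If both pairs share the
    same source wavelet and the Earth model is correct, e vanishes, so
    the measure needs no estimate of the source time function."""
    e = np.convolve(u1, v2, mode="full") - np.convolve(u2, v1, mode="full")
    return np.sum(e**2), e
```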
Wilson Dslash Kernel From Lattice QCD Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; as we will show, the technique gives excellent performance on the regular Xeon architecture as well.
Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions
NASA Astrophysics Data System (ADS)
Factor, Samuel M.; Kraus, Adam L.
2017-01-01
Direct detection of close-in companions (exoplanets or binary systems) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast. Non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, though the mask discards ~95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM to a diffraction limited image utilizing the full aperture. Instead of non-redundant closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I have developed my own faint companion detection pipeline which utilizes a Bayesian analysis of kernel-phases. I have used this pipeline to search for new companions in archival images from HST/NICMOS in order to constrain planet and binary formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required by NRM. This technique can easily be applied to archival data, as no mask is needed, and will thus make the detection of close-in companions cheap and simple, as no additional observations are needed. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time.
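At the core of kernel-phase analysis is a linear operator K whose rows span the left null space of the matrix A mapping pupil-plane phases to Fourier phases, so that the kernel phases K·φ are immune, to first order, to pupil phase errors. A minimal sketch of building K, assuming A has already been constructed from a pupil model:

```python
import numpy as np

def kernel_phase_operator(A, tol=1e-10):
    """Rows of K span the left null space of A (K @ A = 0), so kernel
    phases K @ phi are insensitive, to first order, to pupil-plane
    phase errors propagated through A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol * s.max()))
    return U[:, rank:].T          # shape: (n_kernel, n_fourier_phases)
```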
Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions
NASA Astrophysics Data System (ADS)
Factor, Samuel
2016-10-01
Direct detection of close-in companions (binary systems or exoplanets) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast. While non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, the mask discards 95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM though utilizing the full aperture. Instead of closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I propose to develop my own faint companion detection pipeline which utilizes an MCMC analysis of kernel-phases. I will search for new companions in archival images from NIC1 and ACS/HRC in order to constrain binary and planet formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required by NRM. This technique can easily be applied to archival data, as no mask is needed, and will thus make the detection of close-in companions cheap and simple, as no additional observations are needed. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time.
The Dispersal and Persistence of Invasive Marine Species
NASA Astrophysics Data System (ADS)
Glick, E. R.; Pringle, J.
2007-12-01
The spread of invasive marine species is a continuing problem throughout the world, though it is not entirely understood. Why do some species invade more easily than others? How are the range limits of these species set? Recent research (Byers & Pringle 2006, Pringle & Wares 2007) has produced retention criteria that determine whether a coastal species with a benthic adult stage and planktonic larvae can be retained within its range and invade in the direction opposite that of the mean current experienced by the larvae (i.e. upstream). These results, however, are only accurate for Gaussian dispersal kernels. For kernels whose kurtosis differs from a Gaussian's, the retention criteria become increasingly inaccurate as the mean current increases. Using recent results of Lutscher (2006), we find an improved retention criterion which is much more accurate for non-Gaussian dispersal kernels. The importance of considering non-Gaussian kernels is illustrated for a number of commonly used dispersal kernels, and the relevance of these calculations is illustrated by considering the northward limit of invasion of Hemigrapsus sanguineus, an important invader in the Gulf of Maine.
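A toy version of the retention question can be simulated directly with an integrodifference model n_{t+1}(x) = R ∫ k(x−y) n_t(y) dy, using a dispersal kernel shifted downstream by the mean current. All parameter values below are illustrative assumptions; substituting a fatter-tailed kernel of equal variance shifts the retention threshold, which is the effect at issue here.

```python
import numpy as np

# Illustrative 1-D coastline; Gaussian dispersal kernel displaced
# downstream by the mean current each generation.
L, dx = 200.0, 0.5
x = np.arange(-L, L, dx)
drift, sigma = 2.0, 5.0                       # advection, dispersal scale
k = np.exp(-(x - drift)**2 / (2.0 * sigma**2))
k /= k.sum() * dx                             # normalize to a pdf

R = 1.5                                       # per-generation growth
n = np.where(np.abs(x) < 5.0, 1.0, 0.0)       # initial patch at the origin
for _ in range(100):
    n = np.minimum(R * np.convolve(n, k, mode="same") * dx, 1.0)

# Retention: does the population hold ground upstream (x < 0) despite drift?
print("retained upstream:", n[x < -20.0].max() > 1e-3)
```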
Geographically weighted regression model on poverty indicator
NASA Astrophysics Data System (ADS)
Slamet, I.; Nugroho, N. F. T. A.; Muslich
2017-12-01
In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java. We consider a Gaussian kernel as the weighting function. The GWR uses the diagonal matrix obtained by evaluating the Gaussian kernel function as the weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city of Central Java province. Based on this research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R2 is 68.64%. The regencies/cities fall into two categories, each influenced by a different set of significant factors.
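A minimal sketch of the estimation step (mine, with assumed array shapes): each location receives its own weighted least-squares fit, the Gaussian kernel of inter-location distances supplying the diagonal weight matrix described above.

```python
import numpy as np

def gwr_fit(X, y, coords, bandwidth):
    """Gaussian-kernel GWR: a separate weighted least-squares fit at
    each location, with weights w_ij = exp(-d_ij^2 / (2 b^2)) forming
    the diagonal weight matrix."""
    n = X.shape[0]
    Xd = np.column_stack([np.ones(n), X])            # intercept + factors
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d2 = np.sum((coords - coords[i])**2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth**2))
        XtW = Xd.T * w                               # == Xd.T @ diag(w)
        betas[i] = np.linalg.solve(XtW @ Xd, XtW @ y)
    return betas                                     # local coefficients
```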
Multiple kernel learning in protein-protein interaction extraction from biomedical literature.
Yang, Zhihao; Tang, Nan; Zhang, Xiao; Lin, Hongfei; Li, Yanpeng; Yang, Zhiwei
2011-03-01
Knowledge about protein-protein interactions (PPIs) unveils the molecular mechanisms of biological processes. The volume and content of published biomedical literature on protein interactions is expanding rapidly, making it increasingly difficult for interaction database administrators, responsible for content input and maintenance, to detect and manually update protein interaction information. The objective of this work is to develop an effective approach to automatic extraction of PPI information from biomedical literature. We present a weighted multiple kernel learning-based approach for automatic PPI extraction from biomedical literature. The approach combines the following kernels: feature-based, tree, graph and part-of-speech (POS) path. In particular, we extend the shortest path-enclosed tree (SPT) and dependency path tree to capture richer contextual information. Our experimental results show that the combination of SPT and dependency path tree extensions contributes to an improvement in performance of almost 0.7 percentage units in F-score and 2 percentage units in area under the receiver operating characteristics curve (AUC). Combining two or more appropriately weighted individual kernels further improves the performance. In both individual-corpus and cross-corpus evaluations, our combined kernel achieves state-of-the-art performance with respect to comparable evaluations, with 64.41% F-score and 88.46% AUC on the AImed corpus. Because different kernels calculate the similarity between two sentences from different aspects, our combined kernel can reduce the risk of missing important features. More specifically, we use a weighted linear combination of individual kernels instead of assigning the same weight to each individual kernel, thus allowing each kernel to contribute incrementally to the performance improvement. In addition, the SPT and dependency path tree extensions can improve performance by including richer context information. Copyright © 2010 Elsevier B.V. All rights reserved.
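The weighted combination itself is straightforward; here is a sketch under the assumption that the individual Gram matrices have already been precomputed (the kernel names are placeholders mirroring the abstract, not actual variables from the paper).

```python
import numpy as np
from sklearn.svm import SVC

def combine_kernels(gram_matrices, weights):
    """Weighted linear combination K = sum_i w_i K_i of precomputed
    Gram matrices; a combination of PSD kernels with non-negative
    weights is itself a valid (PSD) kernel."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, gram_matrices))

# Usage sketch with assumed precomputed kernels:
# K = combine_kernels([K_feat, K_tree, K_graph, K_pos], [0.4, 0.3, 0.2, 0.1])
# clf = SVC(kernel="precomputed").fit(K, labels)
```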
NASA Technical Reports Server (NTRS)
Lan, C. E.; Lamar, J. E.
1977-01-01
A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.
NASA Astrophysics Data System (ADS)
Falzone, Nadia; Lee, Boon Q.; Fernández-Varea, José M.; Kartsonaki, Christiana; Stuchbery, Andrew E.; Kibédi, Tibor; Vallis, Katherine A.
2017-03-01
The aim of this study was to investigate the impact of decay data provided by the newly developed stochastic atomic relaxation model BrIccEmis on dose point kernels (DPKs: the radial dose distribution around a unit point source) and S-values (absorbed dose per unit cumulated activity) of 14 Auger electron (AE) emitting radionuclides, namely 67Ga, 80mBr, 89Zr, 90Nb, 99mTc, 111In, 117mSn, 119Sb, 123I, 124I, 125I, 135La, 195mPt and 201Tl. Radiation spectra were based on the nuclear decay data from the medical internal radiation dose (MIRD) RADTABS program and the BrIccEmis code, assuming both an isolated-atom and a condensed-phase approach. DPKs were simulated with the PENELOPE Monte Carlo (MC) code using event-by-event electron and photon transport. S-values for concentric spherical cells of various sizes were derived from these DPKs using appropriate geometric reduction factors. The numbers of Auger and Coster-Kronig (CK) electrons and x-ray photons released per nuclear decay (yields) from MIRD-RADTABS were consistently higher than those calculated using BrIccEmis. DPKs for the electron spectra from BrIccEmis were considerably different from MIRD-RADTABS in the first few hundred nanometres from a point source, where most of the Auger electrons are stopped. S-values were, however, not significantly impacted, as the differences in DPKs at sub-micrometre dimensions were quickly diminished at larger dimensions. Overestimation of the total AE energy output by MIRD-RADTABS leads to higher predicted energy deposition by AE emitting radionuclides, especially in the immediate vicinity of the decaying radionuclides. This should be taken into account when MIRD-RADTABS data are used to simulate biological damage at nanoscale dimensions.
Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET
NASA Astrophysics Data System (ADS)
Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan
2013-06-01
TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing-resolution kernel. Furthermore, although the timing offset errors are often pre-corrected, this correction becomes less accurate over time because, especially in older scanners, the timing offsets are often calibrated only during installation, as the procedure is time-consuming. In this study, we investigate and compare the effects of a local mismatch in timing resolution, when a uniform kernel is applied to systems with local variations in timing resolution, and the effects of uncorrected timing offset errors on image quality. A ring-like phantom was acquired on a Philips Gemini TF scanner and timing histograms were obtained from coincidence events to measure the timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations, and timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. CRC versus noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that the timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction. On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction in CRC of up to 15% and an increase in the measured timing-resolution kernel of up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate for uncorrected offset errors.
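For orientation, the TOF kernel in question is a Gaussian weighting along each line of response whose spatial width follows from the timing resolution via Δx = cΔt/2. A minimal sketch, with names and units as my assumptions:

```python
import numpy as np

C_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def tof_weights(s, dt_ps, fwhm_ps):
    """Gaussian TOF kernel along a LOR. s: positions (mm) along the
    line relative to its midpoint; dt_ps: measured arrival-time
    difference; a timing FWHM maps to a spatial sigma of
    c * FWHM / (2 * 2.355)."""
    s0 = 0.5 * C_MM_PER_PS * dt_ps                 # most likely position
    sigma = 0.5 * C_MM_PER_PS * fwhm_ps / 2.355
    w = np.exp(-(s - s0)**2 / (2.0 * sigma**2))
    return w / w.sum()
```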
Hruska, Zuzana; Yao, Haibo; Kincaid, Russell; Brown, Robert L; Bhatnagar, Deepak; Cleveland, Thomas E
2017-01-01
Non-invasive, easy to use and cost-effective technology offers a valuable alternative for rapid detection of carcinogenic fungal metabolites, namely aflatoxins, in commodities. One relatively recent development in this area is the use of spectral technology. Fluorescence hyperspectral imaging, in particular, offers a potentially rapid and non-invasive method for detecting the presence of aflatoxins in maize infected with the toxigenic fungus Aspergillus flavus. Earlier studies have shown that whole maize kernels contaminated with aflatoxins exhibit different spectral signatures from uncontaminated kernels, based on the external fluorescence emission of the whole kernels. Here, the effect of time on the internal fluorescence spectral emissions from cross-sections of kernels infected with toxigenic and atoxigenic A. flavus was examined in order to elucidate the interaction between the fluorescence signals emitted by some aflatoxin-contaminated maize kernels and the fungal invasion resulting in the production of aflatoxins. First, the difference in internal fluorescence emissions between cross-sections of kernels incubated in toxigenic and atoxigenic inoculum was assessed. Kernels were inoculated with each strain for 5, 7, and 9 days before cross-sectioning and imaging. There were 270 kernels (540 halves) imaged, including controls. Second, in a different set of kernels (15 kernels/group; 135 total), the germ of each kernel was separated from the endosperm to determine the major areas of aflatoxin accumulation and progression over nine growth days. Kernels were inoculated with toxigenic and atoxigenic fungal strains for 5, 7, and 9 days before the endosperm and germ were separated, followed by fluorescence hyperspectral imaging and chemical aflatoxin determination. A marked difference in fluorescence intensity was shown between the toxigenic and atoxigenic strains on day nine post-inoculation, which may be a useful indicator of the location of aflatoxin contamination. This finding suggests that the fluorescence peak shift and intensity, as well as timing, may be essential in distinguishing toxigenic and atoxigenic fungi based on spectral features. Results also reveal a possible preferential difference in the internal colonization of maize kernels between the toxigenic and atoxigenic strains of A. flavus, suggesting a potential window for differentiating the strains based on fluorescence spectra at specific time points.
Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F
2012-01-01
Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
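One plausible reading of the kernel-interpolation step (a sketch, not the authors' implementation): a Nadaraya-Watson weighted average maps feature activity recorded at irregular depths onto a regular depth grid, with the kernel width controlling the smoothing-resolution trade-off noted above.

```python
import numpy as np

def kernel_interpolate(depths, activity, depth_grid, width):
    """Gaussian-kernel (Nadaraya-Watson) interpolation of feature
    activity recorded at irregular depths onto a regular depth grid;
    'width' sets the smoothing/resolution trade-off."""
    d = depth_grid[:, None] - depths[None, :]
    w = np.exp(-d**2 / (2.0 * width**2))
    return (w @ activity) / w.sum(axis=1)
```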
3D local feature BKD to extract road information from mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang
2017-08-01
Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously degraded by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
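A hedged sketch of the BKD idea as described above (the grid layout and names are my assumptions): estimate a Gaussian kernel density of the local point neighborhood at a set of grid nodes, then binarize against the median to obtain a compact, noise-tolerant bit string.

```python
import numpy as np

def binary_kernel_descriptor(neighbors, center, grid_nodes, h):
    """Gaussian kernel density of the local neighborhood evaluated on
    a small grid of nodes, binarized against the median; intensity
    could be encoded the same way on a separate channel."""
    rel = neighbors - center                           # (m, 3) local coords
    d2 = ((grid_nodes[:, None, :] - rel[None, :, :])**2).sum(axis=-1)
    density = np.exp(-d2 / (2.0 * h**2)).sum(axis=1)   # KDE at each node
    return (density > np.median(density)).astype(np.uint8)
```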
SU-F-T-672: A Novel Kernel-Based Dose Engine for KeV Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reinhart, M; Fast, M F; Nill, S
2016-06-15
Purpose: Mimicking state-of-the-art patient radiotherapy with high precision irradiators for small animals allows advanced dose-effect studies and radiobiological investigations. One example is the implementation of pre-clinical IMRT-like irradiations, which requires the development of inverse planning for keV photon beams. As a first step, we present a novel kernel-based dose calculation engine for keV x-rays with explicit consideration of energy and material dependencies. Methods: We follow a superposition-convolution approach adapted to keV x-rays, based on previously published work on micro-beam therapy. In small animal radiotherapy, we assume local energy deposition at the photon interaction point, since the electron ranges in tissue are of the same order of magnitude as the voxel size. This allows us to use photon-only kernel sets generated by MC simulations, which are pre-calculated for six energy windows and ten base materials. We validate our stand-alone dose engine against Geant4 MC simulations for various beam configurations in water, slab phantoms with bone and lung inserts, and on a mouse CT with (0.275mm){sup 3} voxels. Results: We observe good agreement for all cases. For field sizes of 1mm{sup 2} to 1cm{sup 2} in water, the depth dose curves agree within 1% (mean), with the largest deviations in the first voxel (4%) and at depths>5cm (<2.5%). The out-of-field doses at 1cm depth agree within 8% (mean) for all but the smallest field size. In slab geometries, the mean agreement was within 3%, with maximum deviations of 8% at water-bone interfaces. The γ-index (1mm/1%) passing rate for a single-field mouse irradiation is 71%. Conclusion: The presented dose engine yields an accurate representation of keV-photon doses suitable for inverse treatment planning for IMRT. It has the potential to become a significantly faster yet sufficiently accurate alternative to full MC simulations. Further investigations will focus on energy sampling as well as calculation times. Research at ICR is also supported by Cancer Research UK under Programme C33589/A19727 and NHS funding to the NIHR Biomedical Research Centre at RMH and ICR. MFF is supported by Cancer Research UK under Programme C33589/A19908.
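A schematic of the superposition step under the stated local-deposition assumption, reduced to 1-D for clarity; the kernel lookup keyed by energy window and base material is my reading of the abstract, not the authors' code.

```python
import numpy as np

def superpose(terma, e_bins, materials, kernels, dx):
    """1-D sketch of superposition with local energy deposition:
    dose(x) = sum_s TERMA(s) * k(x - s), with the photon-only kernel
    looked up per energy window and base material at the interaction
    site. 'kernels' is a hypothetical dict mapping
    (energy_bin, material) -> odd-length kernel array."""
    n = terma.size
    dose = np.zeros(n)
    for s in range(n):
        k = kernels[(e_bins[s], materials[s])]
        half = k.size // 2
        lo, hi = max(0, s - half), min(n, s + half + 1)
        dose[lo:hi] += terma[s] * k[half - (s - lo): half + (hi - s)]
    return dose * dx
```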
NASA Astrophysics Data System (ADS)
Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah
2014-05-01
Floods are among the most devastating natural disasters and occur frequently in Terengganu, Malaysia. Recently, ensemble-based techniques have become extremely popular in flood modeling. In this paper, a weights-of-evidence (WoE) model was utilized first to assess the impact of the classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). Then, these factors were reclassified using the acquired weights and entered into a support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weak point of WoE can be overcome and the performance of the SVM enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four SVM kernel types (linear (LN), polynomial (PL), radial basis function (RBF), and sigmoid (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE and SVM method was tested using the area under the curve (AUC), which measured the prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods. The best results were obtained from the RBF kernel when compared with the other kernel types. The success rate and prediction rate for the ensemble WoE and RBF-SVM method were 96.48% and 95.67%, respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.
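The four-kernel comparison can be prototyped in a few lines; the sketch below uses synthetic stand-in data rather than the paper's WoE-weighted conditioning factors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # stand-in for 10 conditioning factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Compare the four kernel types by AUC, mirroring the paper's evaluation.
for kern in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kern, probability=True, random_state=0).fit(Xtr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{kern:8s} AUC = {auc:.3f}")
```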
Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen
2014-09-01
For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets. Thus the existing algorithms cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel-target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
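The kernel-target alignment criterion referred to above has a compact closed form; here is a sketch for binary labels (my implementation, with a helper for the Gaussian Gram matrix).

```python
import numpy as np

def kernel_target_alignment(K, y):
    """A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F) for labels
    y in {-1, +1}; larger alignment means the kernel-induced similarity
    better matches the label structure."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def gaussian_gram(X, gamma):
    """Gaussian kernel Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    return np.exp(-gamma * sq)
```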
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
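The kernel-based estimator described here amounts to replacing the pairing of simultaneous observations with a Gaussian-weighted average over all pairs; a minimal sketch of mine, with bandwidth h and an optional lag:

```python
import numpy as np

def gaussian_kernel_correlation(tx, x, ty, y, h, lag=0.0):
    """Kernel-based Pearson correlation for irregular sampling: a
    weighted mean over products of standardized observations, with
    Gaussian weights on the mismatch between the observation-time
    separation and the desired lag."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :] - lag
    w = np.exp(-dt**2 / (2.0 * h**2))
    return float((w * np.outer(xs, ys)).sum() / w.sum())
```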
Finite-frequency structural sensitivities of short-period compressional body waves
NASA Astrophysics Data System (ADS)
Fuji, Nobuaki; Chevrot, Sébastien; Zhao, Li; Geller, Robert J.; Kawai, Kenji
2012-07-01
We present an extension of the method recently introduced by Zhao & Chevrot for calculating Fréchet kernels from a precomputed database of strain Green's tensors by normal mode summation. The extension involves two aspects: (1) we compute the strain Green's tensors using the Direct Solution Method, which allows us to go up to frequencies as high as 1 Hz; and (2) we develop a spatial interpolation scheme so that the Green's tensors can be computed with a relatively coarse grid, thus improving the efficiency in the computation of the sensitivity kernels. The only requirement is that the Green's tensors be computed with a fine enough spatial sampling rate to avoid spatial aliasing. The Green's tensors can then be interpolated to any location inside the Earth, avoiding the need to store and retrieve strain Green's tensors for a fine sampling grid. The interpolation scheme not only significantly reduces the CPU time required to calculate the Green's tensor database and the disk space to store it, but also enhances the efficiency in computing the kernels by reducing the number of I/O operations needed to retrieve the Green's tensors. Our new implementation allows us to calculate sensitivity kernels for high-frequency teleseismic body waves with very modest computational resources such as a laptop. We illustrate the potential of our approach for seismic tomography by computing traveltime and amplitude sensitivity kernels for high frequency P, PKP and Pdiff phases. A comparison of our PKP kernels with those computed by asymptotic ray theory clearly shows the limits of the latter. With ray theory, it is not possible to model waves diffracted by internal discontinuities such as the core-mantle boundary, and it is also difficult to compute amplitudes for paths close to the B-caustic of the PKP phase. We also compute waveform partial derivatives for different parts of the seismic wavefield, a key ingredient for high resolution imaging by waveform inversion. Our computations of partial derivatives in the time window where PcP precursors are commonly observed show that the distribution of sensitivity is complex and counter-intuitive, with a large contribution from the mid-mantle region. This clearly emphasizes the need to use accurate and complete partial derivatives in waveform inversion.
Spatial patterns of aflatoxin levels in relation to ear-feeding insect damage in pre-harvest corn.
Ni, Xinzhi; Wilson, Jeffrey P; Buntin, G David; Guo, Baozhu; Krakowsky, Matthew D; Lee, R Dewey; Cottrell, Ted E; Scully, Brian T; Huffaker, Alisa; Schmelz, Eric A
2011-07-01
Key impediments to increased corn yield and quality in the southeastern US coastal plain region are damage by ear-feeding insects and aflatoxin contamination caused by infection of Aspergillus flavus. Key ear-feeding insects are corn earworm, Helicoverpa zea, fall armyworm, Spodoptera frugiperda, maize weevil, Sitophilus zeamais, and brown stink bug, Euschistus servus. In 2006 and 2007, aflatoxin contamination and insect damage were sampled before harvest in three 0.4-hectare corn fields using a grid sampling method. The feeding damage by each of the ear/kernel-feeding insects (i.e., corn earworm/fall armyworm damage on the silk/cob, and discoloration of corn kernels by stink bugs) and the maize weevil population were assessed at each grid point with five ears. The spatial distribution pattern of aflatoxin contamination was also assessed using the corn samples collected at each sampling point. Aflatoxin level was correlated to the number of maize weevils and stink bug-discolored kernels, but not closely correlated to either husk coverage or corn earworm damage. Contour maps of the maize weevil populations, stink bug-damaged kernels, and aflatoxin levels exhibited an aggregated distribution pattern with a strong edge effect on all three parameters. The separation of silk- and cob-feeding insects from kernel-feeding insects, as well as of chewing insects (i.e., the corn earworm and maize weevil) from piercing-sucking insects (i.e., the stink bugs), and their damage in relation to aflatoxin accumulation, is economically important. Both theoretical and applied ramifications of this study are discussed: we propose a hypothesis on the underlying mechanisms of the aggregated distribution patterns and the strong edge effect of insect damage and aflatoxin contamination, and we discuss possible management tactics for reducing aflatoxin through proper management of kernel-feeding insects. Future directions for basic and applied research related to aflatoxin contamination are also discussed.
Efficient protein structure search using indexing methods.
Kim, Sungchul; Sael, Lee; Yu, Hwanjo
2013-01-01
Understanding the functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, so finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the searching process. However, computing the similarity of two protein structures is still computationally expensive, so it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points whose distance to the query point is less than θ. The results show that both iDistance and iKernel significantly enhance the search speed. In top-k nearest neighbor search, the search time is reduced by 69.6%, 77%, 77.4% and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the search time is reduced by 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively.
The effect of relatedness and pack size on territory overlap in African wild dogs.
Jackson, Craig R; Groom, Rosemary J; Jordan, Neil R; McNutt, J Weldon
2017-01-01
Spacing patterns mediate competitive interactions between conspecifics, ultimately increasing fitness. The degree of territorial overlap between neighbouring African wild dog (Lycaon pictus) packs varies greatly, yet the role of factors potentially affecting the degree of overlap, such as relatedness and pack size, remains unclear. We used movement data from 21 wild dog packs to calculate the extent of territory overlap (20 dyads). On average, unrelated neighbouring packs had low levels of overlap restricted to the peripheral regions of their 95% utilisation kernels. Related neighbours had significantly greater levels of peripheral overlap. Only one unrelated dyad showed overlap between the packs' 75% kernels, and no 50% kernels overlapped. However, eight of 12 related dyads overlapped between their respective 75% kernels and six between the intensively frequented 50% kernels. Overlap between these more frequented kernels confers a heightened likelihood of encounter, as the mean utilisation intensity per unit area within the 50% kernels was 4.93 times greater than in the 95% kernels, and 2.34 times greater than in the 75% kernels. Related packs spent significantly more time in their 95% kernel overlap zones than did unrelated packs. Pack size appeared to have little effect on overlap between related dyads, yet among unrelated neighbours larger packs tended to overlap more onto smaller packs' territories. However, the true effect is unclear given that the model's confidence intervals overlapped zero. Evidence suggests that costly intraspecific aggression is greatly reduced between related packs. Consequently, the tendency for dispersing individuals to establish territories alongside relatives, where intensively utilised portions of ranges regularly overlap, may extend kin selection and inclusive fitness benefits from the intra-pack to the inter-pack level. This natural spacing system can affect survival parameters and the carrying capacity of protected areas, having important management implications for intensively managed populations of this endangered species.
Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods
NASA Astrophysics Data System (ADS)
Liu, Qinya; Tromp, Jeroen
2008-07-01
We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫_V K_m δln m d³x + ∫_Σ K_d δln d d²x + ∫_{Σ_FS} K_∇d · ∇_Σ(δln d) d²x, where δln m = δm/m denotes relative model perturbations in the volume V, δln d denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇_Σ δln d denotes surface gradients in relative topographic variations on fluid-solid boundaries Σ_FS. The 3-D Fréchet kernel K_m determines the sensitivity to model perturbations δln m, and the 2-D kernels K_d and K_∇d determine the sensitivity to topographic variations δln d and their surface gradients. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.
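To illustrate the kernel construction step in code, the following numpy sketch accumulates a schematic sensitivity kernel from stored snapshots of the regular and adjoint wavefields. In a real solver this product is evaluated on the fly inside the spectral-element time loop, and the random arrays here are placeholders only:

    import numpy as np

    def accumulate_kernel(regular, adjoint, dt):
        # Schematic K(x) = integral over time of s_adj(x, t) * s(x, t);
        # production codes use the physically appropriate field combinations.
        return dt * np.sum(adjoint * regular, axis=0)

    nt, nx, dt = 500, 200, 0.01
    s = np.random.randn(nt, nx)        # placeholder regular wavefield snapshots
    s_adj = np.random.randn(nt, nx)    # placeholder adjoint wavefield snapshots
    K = accumulate_kernel(s, s_adj, dt)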
Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations
NASA Technical Reports Server (NTRS)
Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.
2006-01-01
In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the sub-triangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient; i.e., tangent to the plane containing the source element.
Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods
NASA Astrophysics Data System (ADS)
Rusmanugroho, H.; Tromp, J.
2014-12-01
Recent studies show that isotropic seismic imaging based on adjoint methods reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration, such as Reverse Time Migration (RTM). Here, we derive new expressions of sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, where the tilt angle θ is equivalent to 90°, and for Tilted Transverse Isotropy (TTI), these parameters depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two approaches. Individual kernels ("images") are numerically constructed based on the interaction between the regular and adjoint wavefields in smoothed models, which are in practice estimated through Full-Waveform Inversion (FWI). The final image is obtained by summing all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of the sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.
Cao, Ana; Santiago, Rogelio; Ramos, Antonio J; Souto, Xosé C; Aguín, Olga; Malvar, Rosa Ana; Butrón, Ana
2014-05-02
In northwestern Spain, where the weather is rainy and mild throughout the year, Fusarium verticillioides is the most prevalent fungus in kernels, and a significant risk of fumonisin contamination has been identified. In this study, detailed information about environmental and maize genotypic factors affecting F. verticillioides infection, fungal growth and fumonisin content in maize kernels was obtained in order to establish control points to reduce fumonisin contamination. Evaluations were conducted in a total of 36 environments, and factorial regression analyses were performed to determine the contribution of each factor to variability among environments, genotypes, and genotype × environment interactions for F. verticillioides infection, fungal growth and fumonisin content. Flowering and kernel drying were the most critical periods throughout the growing season for F. verticillioides infection and fumonisin contamination. Around flowering, wetter and cooler conditions limited F. verticillioides infection and growth, and high temperatures increased fumonisin contents. During kernel drying, an increased proportion of damaged kernels favored fungal growth, and higher ear damage by corn borers and hard rainfall favored fumonisin accumulation. Later planting dates and especially earlier harvest dates reduced the risk of fumonisin contamination, possibly due to reduced incidence of insects and accumulation of rainfall during the kernel drying period. The use of maize varieties resistant to Sitotroga cerealella, with good husk coverage and non-excessive pericarp thickness, could also be useful to reduce fumonisin contamination of maize kernels. Copyright © 2014 Elsevier B.V. All rights reserved.
Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.
Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D
2016-04-01
Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel-fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points with maximum DM disappearance of 6.9% at 24 h, and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in 2 and then dried at 60°C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between proportion of kernel fraction passing through the 4.75-mm screen and kernel fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
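For readers who want the sieve arithmetic spelled out, here is a small numpy sketch of a log-normal GMPS computation of the kind used in dry-sieving standards. The apertures match those listed above, while the masses, the nominal pan size, and the treatment of the topmost fraction are invented for illustration:

    import numpy as np

    apertures = np.array([9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59])  # mm
    pan_size = 0.30        # assumed nominal size for the pan fraction, mm
    mass = np.array([2.0, 5.0, 12.0, 20.0, 25.0, 18.0, 10.0, 5.0, 3.0])     # g

    mids = np.sqrt(apertures[:-1] * apertures[1:])   # geometric means between sieves
    # topmost fraction: largest aperture itself (assumption); last entry: pan
    sizes = np.concatenate(([apertures[0]], mids, [pan_size]))
    w = mass / mass.sum()
    gmps = np.exp(np.sum(w * np.log(sizes)))
    print(f"GMPS = {gmps:.2f} mm")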
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
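As a baseline for comparison, the textbook fixed-bandwidth Gaussian KDE that fastKDE improves upon can be written in a few lines of numpy. Silverman's rule of thumb is used for the bandwidth here; fastKDE instead selects the kernel shape and width objectively:

    import numpy as np

    def gaussian_kde(samples, grid):
        n = samples.size
        h = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)   # Silverman bandwidth
        u = (grid[:, None] - samples[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    x = np.random.randn(10_000)
    grid = np.linspace(-4, 4, 201)
    pdf = gaussian_kde(x, grid)      # estimated density on the grid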
Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin
2016-12-01
To find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software, Eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated based on the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desirable proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the Eclipse TPS. The validation included 420 comparisons of the FSFs from 14 proton energies, various field sizes from 2 to 20 cm, and various depths from 20% to 80% of the proton range. The relative in-water lateral profiles from the in-house calculation and the Eclipse TPS agree very well, even at the level of 10^-4. The FSFs from the in-house calculation and the Eclipse TPS also agree well. The maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time to find the desirable proton fluences of the clinical energies. The method is extensively validated and can be applied to any proton centers using PBS and the Eclipse TPS.
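A toy numpy version of the field size factor idea: superpose a single-spot lateral profile on a grid of spot positions and compare the central dose with that of a large reference field. The Gaussian spot model, spot spacing, and sigma below are illustrative assumptions, not the paper's reconstructed kernel:

    import numpy as np

    def central_dose(field_cm, spacing_cm=0.5, sigma_cm=0.7):
        # sum the contributions of all spots in the field at the centre (0, 0)
        xs = np.arange(-field_cm / 2, field_cm / 2 + 1e-9, spacing_cm)
        X, Y = np.meshgrid(xs, xs)
        return np.exp(-(X**2 + Y**2) / (2 * sigma_cm**2)).sum()

    # FSF: central dose of each field relative to a 20 cm reference field
    fsf = {f: central_dose(f) / central_dose(20.0) for f in (2, 4, 6, 10, 20)}
    print(fsf)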
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
Accurately estimating PSF with straight lines detected by Hough transform
NASA Astrophysics Data System (ADS)
Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong
2018-04-01
This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize profiles of edges from a LR image, which hence leads to a poor PSF estimation of the lens taking the LR image. For precisely estimating the PSF, this paper proposes firstly estimating a 1-D PSF kernel with straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image for obtaining edges, and then the Hough transform is utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
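A minimal sketch of the underlying 1-D kernel idea, assuming a Gaussian blur and a synthetic step edge: the profile across a blurred straight edge is the edge-spread function, and its derivative is a 1-D projection of the PSF (the paper instead extracts such profiles along Hough-detected lines from real images):

    import numpy as np
    from scipy.special import erf

    x = np.linspace(-5, 5, 201)
    dx = x[1] - x[0]
    sigma_true = 0.8                                       # assumed blur width
    esf = 0.5 * (1 + erf(x / (sigma_true * np.sqrt(2))))   # blurred step edge
    lsf = np.gradient(esf, dx)                             # 1-D PSF kernel estimate
    sigma_est = np.sqrt(np.sum(lsf * x**2) * dx)           # second moment, ~0.8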
A solution to coupled Dyson–Schwinger equations for gluons and ghosts in Landau gauge
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Smekal, L.; Hauck, A.; Alkofer, R.
1998-07-01
A truncation scheme for the Dyson–Schwinger equations of QCD in Landau gauge is presented which implements the Slavnov–Taylor identities for the 3-point vertex functions. Neglecting contributions from 4-point correlations such as the 4-gluon vertex function and irreducible scattering kernels, a closed system of equations for the propagators is obtained. For the pure gauge theory without quarks, this system of equations for the propagators of gluons and ghosts is solved in an approximation which allows for an analytic discussion of its solutions in the infrared: The gluon propagator is shown to vanish for small spacelike momenta whereas the ghost propagator is found to be infrared enhanced. The running coupling of the non-perturbative subtraction scheme approaches an infrared stable fixed point at a critical value of the coupling, α_c ≈ 9.5. The gluon propagator is shown to have no Lehmann representation. The results for the propagators obtained here compare favorably with recent lattice calculations. © 1998 Academic Press, Inc.
Yang, S A
2002-10-01
This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating into the body surface an auxiliary interior surface that satisfies a certain boundary condition. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.
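A toy numpy/scipy sketch of the simple-source idea for the 2-D Helmholtz problem: monopoles on an interior surface, with strengths chosen by collocation so the superposed field matches prescribed boundary data on the body. The circle geometry, the boundary data, and the omission of the auxiliary-surface treatment are all simplifications:

    import numpy as np
    from scipy.special import hankel1

    k = 2.0
    th = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    body = np.c_[np.cos(th), np.sin(th)]     # collocation points on the body
    src = 0.5 * body                         # interior source surface
    # 2-D free-space Green's function (i/4) H0^(1)(k r) between all pairs
    r = np.linalg.norm(body[:, None, :] - src[None, :, :], axis=-1)
    G = 0.25j * hankel1(0, k * r)
    p_bc = np.exp(1j * k * body[:, 0])       # prescribed surface pressure
    q = np.linalg.solve(G, p_bc)             # monopole strengths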
Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions
NASA Astrophysics Data System (ADS)
Factor, Samuel M.; Kraus, Adam L.
2017-06-01
Direct detection of close-in companions (exoplanets or binary systems) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast near λ/D. Non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, though the mask discards ~95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM to a diffraction limited image utilizing the full aperture. Instead of non-redundant closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I have developed a new, easy to use, faint companion detection pipeline which analyzes kernel-phases utilizing Bayesian model comparison. I demonstrate this pipeline on archival images from HST/NICMOS, searching for new companions in order to constrain binary formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time needed for NRM. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time. As no mask is needed, this technique can easily be applied to archival data and even target acquisition images (e.g. from JWST), making the detection of close-in companions cheap and simple, as no additional observations are needed.
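The algebra behind kernel phases can be sketched in a few lines: if a matrix A maps pupil-plane phase errors to measured Fourier phases, then rows spanning the left null space of A define observables that cancel those errors. The matrix below is random for illustration; in practice it is built from a model of the pupil:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 25))          # toy phase-transfer matrix
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > 1e-10))             # numerical rank of A
    K = U[:, r:].T                         # kernel operator: K @ A ~ 0
    aberration = A @ rng.normal(size=25)   # instrumental phase errors
    signal = 0.01 * rng.normal(size=40)    # object phases (e.g. a companion)
    kernel_phases = K @ (signal + aberration)   # aberration term cancels
    print(np.abs(K @ A).max())             # ~ machine precision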
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidon, Lyran; The Sackler Center for Computational Molecular and Materials Science, Tel Aviv University, Tel Aviv 69978; Wilner, Eli Y.
2015-12-21
The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on the Nakajima–Zwanzig–Mori time-convolution (TC) formulation and the other on the Tokuyama–Mori time-convolutionless (TCL) formulation, provide a starting point to describe the time-evolution of the reduced density matrix. A key step in both approaches is to obtain the so-called "memory kernel" or "generator," going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform; thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.
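Numerically, the advertised route to the TCL generator amounts to G(t) = U'(t) U(t)^-1 for a reduced propagator U(t). A toy numpy sketch with a constant 2×2 generator standing in for the Liouvillian (in applications U would come from an impurity solver or NEGF data):

    import numpy as np

    L = np.array([[-0.5, 0.2], [0.3, -0.8]])    # toy constant generator
    dt, nt = 0.01, 200
    U = np.empty((nt, 2, 2))
    U[0] = np.eye(2)
    for n in range(1, nt):                      # Euler propagation of U(t)
        U[n] = U[n - 1] + dt * L @ U[n - 1]
    Udot = np.gradient(U, dt, axis=0)
    # G(t) = dU/dt @ U^{-1}; recovers ~L at every time step here
    G = np.einsum('tij,tjk->tik', Udot, np.linalg.inv(U))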
Use of computer code for dose distribution studies in A 60CO industrial irradiator
NASA Astrophysics Data System (ADS)
Piña-Villalpando, G.; Sloan, D. P.
1995-09-01
This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3; that product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point kernel technique, build-up factors are fitted by geometric progression, and combinatorial geometry is used for system description. The main modifications to the code were related to source simulation, using point sources instead of pencil sources, and including the energy spectrum and anisotropic emission. The results were as follows: for the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was 3% lower than the experimental average value (14.3 kGy).
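The point kernel technique itself fits in a few lines; the sketch below evaluates the flux from an isotropic point source behind a shield with a crude linear build-up stand-in. All numbers are invented, and QAD-CGGP's geometric-progression build-up fit is not reproduced:

    import numpy as np

    def point_kernel_flux(S, mu, r):
        """Attenuated point-source flux times a simple build-up factor."""
        B = 1.0 + mu * r                 # crude linear build-up assumption
        return S * B * np.exp(-mu * r) / (4 * np.pi * r**2)

    S = 3.7e10    # photons/s (1 Ci, one photon per decay; illustrative)
    mu = 0.06     # 1/cm, effective attenuation coefficient (illustrative)
    print(point_kernel_flux(S, mu, r=100.0))   # flux at 1 m, photons/cm^2/s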
Efficient approach to obtain free energy gradient using QM/MM MD simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asada, Toshio; Koseki, Shiro; The Research Institute for Molecular Electronic Devices
2015-12-31
An efficient computational approach, denoted the charge and atom dipole response kernel (CDRK) model, is described that accounts for polarization effects of the quantum mechanical (QM) region using the charge response and atom dipole response kernels in free energy gradient (FEG) calculations within the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies, and also the energy gradients, of QM and MM atoms obtained by expensive QM/MM calculations in drastically reduced computational time. The model is applied to the acylation reaction in a hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.
A simple method for computing the relativistic Compton scattering kernel for radiative transfer
NASA Technical Reports Server (NTRS)
Prasad, M. K.; Kershaw, D. S.; Beason, J. D.
1986-01-01
Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
Nana, Roger; Hu, Xiaoping
2010-01-01
k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
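For intuition, here is a toy 1-D sketch of the DCE idea under strong simplifying assumptions (single coil, uniform twofold undersampling, and one shared kernel for both interpolation directions, none of which is claimed by the paper): fit an interpolation kernel, fill the missing samples, re-estimate the acquired ones from the filled-in neighbours, and sum the squared mismatch.

    import numpy as np

    rng = np.random.default_rng(2)
    sig = np.fft.fft(rng.normal(size=256))       # toy single-coil k-space
    # kernel: estimate each odd line from its two even neighbours
    X = np.c_[sig[0:-2:2], sig[2::2]]            # even neighbours of odd lines
    w = np.linalg.lstsq(X, sig[1:-1:2], rcond=None)[0]
    est_miss = X @ w                             # interpolated odd lines
    # data consistency: re-estimate even lines from the interpolated odd ones
    # (reusing w here is a crude simplification)
    X2 = np.c_[est_miss[:-1], est_miss[1:]]
    est_acq = X2 @ w
    dce = np.sum(np.abs(sig[2:-2:2] - est_acq) ** 2)   # goodness measure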
Accelerating the Original Profile Kernel.
Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard
2013-01-01
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications involving large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, rendering the kernel possibly the top contender in terms of its speed/performance trade-off. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
Convolution kernels for multi-wavelength imaging
NASA Astrophysics Data System (ADS)
Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.
2016-12-01
Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at
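The Wiener-filter construction is compact enough to sketch directly: in Fourier space the matching kernel is K = conj(S) T / (|S|^2 + mu), where S and T are the source and target PSF transforms and mu is the tunable regularisation parameter. The Gaussian PSFs below are placeholders for measured ones:

    import numpy as np

    def matching_kernel(psf_source, psf_target, mu=1e-4):
        S = np.fft.fft2(np.fft.ifftshift(psf_source))
        T = np.fft.fft2(np.fft.ifftshift(psf_target))
        K = np.conj(S) * T / (np.abs(S) ** 2 + mu)   # regularised Wiener filter
        return np.fft.fftshift(np.fft.ifft2(K).real)

    def gaussian_psf(n, sigma):
        x = np.arange(n) - n // 2
        X, Y = np.meshgrid(x, x)
        g = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
        return g / g.sum()

    kern = matching_kernel(gaussian_psf(64, 2.0), gaussian_psf(64, 4.0))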
A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram.
Wu, Chung Kit; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei
2016-05-09
Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents, at a cost of $500 billion. Drunk drivers are involved in 40% of traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. The electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a human's biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, it appears that there is no known research or literature on an ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. As such, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.
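A feature-weighted RBF kernel of the general kind described can be sketched as follows; the weights and the ten-feature layout are invented, whereas the paper derives its weights from the measured ECG features:

    import numpy as np

    def weighted_rbf(x, y, w, sigma=1.0):
        # per-feature weights scale each dimension's contribution to the distance
        d2 = np.sum(w * (x - y) ** 2)
        return np.exp(-d2 / (2 * sigma**2))

    x, y = np.random.rand(10), np.random.rand(10)          # 10 ECG features
    w = np.array([3, 2, 2, 1, 1, 1, 1, 1, 1, 1], float)    # illustrative weights
    print(weighted_rbf(x, y, w / w.sum()))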
NASA Astrophysics Data System (ADS)
Chytyk-Praznik, Krista Joy
Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1,900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients' delivered radiation treatments.
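The kernel-superposition step at the heart of such a model is a single convolution; here is a minimal numpy sketch with a toy open-field fluence and a Gaussian stand-in for the Monte Carlo-generated EPID dose kernel:

    import numpy as np

    fluence = np.zeros((128, 128))
    fluence[32:96, 32:96] = 1.0                  # toy open-field fluence
    x = np.arange(128) - 64
    X, Y = np.meshgrid(x, x)
    kernel = np.exp(-(X**2 + Y**2) / (2 * 3.0**2))
    kernel /= kernel.sum()                       # normalised toy dose kernel
    # FFT-based convolution of fluence with the dose kernel
    dose = np.fft.ifft2(np.fft.fft2(fluence) *
                        np.fft.fft2(np.fft.ifftshift(kernel))).real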
NASA Technical Reports Server (NTRS)
McNab, A. David; woo, Alex (Technical Monitor)
1999-01-01
Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open() requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open(2) is performed. The resulting file descriptor is passed back to the kernel, which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open(2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets. This paper describes the implementation of portals for Linux 2.0.
Accurate interatomic force fields via machine learning with covariant kernels
NASA Astrophysics Data System (ADS)
Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro
2017-06-01
We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
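The finite-group variant mentioned at the end can be sketched directly: a matrix-valued kernel is obtained by summing a scalar base kernel over a finite rotation group, here the four C4 rotations in 2-D. This is a schematic of the construction under simplified assumptions (single 2-D points as configurations), not the paper's production kernel:

    import numpy as np

    def rot(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def base_kernel(x, y, sigma=0.5):
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

    def covariant_kernel(x, y):
        # K(x, y) = sum over R of R * k(x, R y), for the four C4 rotations
        return sum(rot(t) * base_kernel(x, rot(t) @ y)
                   for t in np.pi / 2 * np.arange(4))

    x, y = np.array([1.0, 0.2]), np.array([0.9, 0.3])
    print(covariant_kernel(x, y))    # 2x2 matrix-valued kernel block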
Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization
Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng
2016-01-01
By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
Efficient High Performance Collective Communication for Distributed Memory Environments
ERIC Educational Resources Information Center
Ali, Qasim
2009-01-01
Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
On Pfaffian Random Point Fields
NASA Astrophysics Data System (ADS)
Kargin, V.
2014-02-01
We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.
Visco-elastic controlled-source full waveform inversion without surface waves
NASA Astrophysics Data System (ADS)
Paschke, Marco; Krause, Martin; Bleibinhaus, Florian
2016-04-01
We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
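For reference, one damped Gauss-Newton (Levenberg-Marquardt) update with an explicit Frechet/Jacobian matrix looks as follows; the shapes are toy-sized stand-ins for the data-by-inverse-mesh system described above:

    import numpy as np

    def lm_step(J, r, lam):
        # dm = (J^T J + lam I)^{-1} J^T r, the damped normal-equation solve
        H = J.T @ J + lam * np.eye(J.shape[1])
        return np.linalg.solve(H, J.T @ r)

    J = np.random.randn(300, 50)   # residuals x inverse-mesh nodes (toy sizes)
    r = np.random.randn(300)       # data residuals
    dm = lm_step(J, r, lam=0.1)    # model update on the inverse mesh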
Skin dose from radionuclide contamination on clothing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, D.C.; Hussein, E.M.A.; Yuen, P.S.
1997-06-01
Skin dose due to radionuclide contamination on clothing is calculated by Monte Carlo simulation of electron and photon radiation transport. Contamination due to a hot particle on selected clothing geometries of a cotton garment is simulated. The effect of backscattering in the surrounding air is taken into account. For each combination of source-clothing geometry, the dose distribution function in the skin, including the dose at tissue depths of 7 mg cm^-2 and 1,000 mg cm^-2, is calculated by simulating monoenergetic photon and electron sources. Skin dose due to contamination by a radionuclide is then determined by proper weighting of the monoenergetic dose distribution functions. The results are compared with the VARSKIN point-kernel code for some radionuclides, indicating that the latter code tends to underestimate the dose for gamma and high-energy beta sources while it overestimates skin dose for low-energy beta sources. 13 refs., 4 figs., 2 tabs.
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
Removal of Malachite Green Dye by Mangifera indica Seed Kernel Powder
NASA Astrophysics Data System (ADS)
Singh, Dilbagh; Sowmya, V.; Abinandan, S.; Shanthakumar, S.
2017-11-01
In this study, batch experiments were carried out to study the adsorption of Malachite green dye from aqueous solution by Mangifera indica (mango) seed kernel powder. The mango seed kernel powder was characterized by Fourier transform infrared spectroscopy and scanning electron microscopy. Effect of various parameters including pH, contact time, adsorbent dosage, initial dye concentration and temperature on adsorption capacity of the adsorbent was observed and the optimized condition for maximum dye removal was identified. Maximum percentage removal of 96% was achieved with an adsorption capacity of 22.8 mg/g at pH 6 with an initial concentration of 100 mg/l. The equilibrium data were examined to fit the Langmuir and Freundlich isotherm models. Thermodynamic parameters for the adsorption process were also calculated.
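Fitting the two isotherms is a routine least-squares exercise; here is a hedged scipy sketch with placeholder equilibrium data (C_e in mg/L, q_e in mg/g; these are not the paper's measurements):

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, qmax, KL):
        return qmax * KL * Ce / (1 + KL * Ce)

    def freundlich(Ce, KF, n):
        return KF * Ce ** (1 / n)

    Ce = np.array([5.0, 10, 20, 40, 60, 80])          # placeholder data
    qe = np.array([4.5, 8.0, 13.0, 18.5, 21.0, 22.2])
    p_l, _ = curve_fit(langmuir, Ce, qe, p0=(25, 0.05))
    p_f, _ = curve_fit(freundlich, Ce, qe, p0=(2.0, 2.0))
    print("Langmuir qmax, KL:", p_l, "| Freundlich KF, n:", p_f)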
Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.
Lima, Clodoaldo A M; Coelho, André L V
2011-10-01
We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitiveness to the type of feature and to the kernel function/parameter value. Overall, the results evidence that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, albeit the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.
Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu
2016-02-15
We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite differences can achieve relatively large convergence rates with respect to mesh size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
Advanced Fuels for LWRs: Fully-Ceramic Microencapsulated and Related Concepts FY 2012 Interim Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Sonat Sen; Brian Boer; John D. Bess
2012-03-01
This report summarizes the progress in the Deep Burn project at Idaho National Laboratory during the first half of fiscal year 2012 (FY2012). The current focus of this work is on Fully-Ceramic Microencapsulated (FCM) fuel containing low-enriched uranium (LEU) uranium nitride (UN) fuel kernels. UO2 fuel kernels have not been ruled out, and will be examined later in FY2012. Reactor physics calculations confirmed that FCM fuel containing 500 μm diameter kernels of UN fuel has a positive moderator temperature coefficient (MTC) with a conventional fuel pellet radius of 4.1 mm. The methodology was put into place and validated against MCNP to perform whole-core calculations using DONJON, which can interpolate cross sections from a library generated using DRAGON. Comparisons to MCNP were performed on the whole core to confirm the accuracy of the DRAGON/DONJON schemes. A thermal fluid coupling scheme was also developed and implemented with DONJON. This is currently able to iterate between diffusion calculations and thermal fluid calculations in order to update fuel temperatures and cross sections in whole-core calculations. Now that the DRAGON/DONJON calculation capability is in place and has been validated against MCNP results, and a thermal-hydraulic capability has been implemented in the DONJON methodology, the work will proceed to more realistic reactor calculations. MTC calculations at the lattice level without the correct burnable poison are inadequate to guarantee zero or negative values in a realistic mode of operation. Using the DONJON calculation methodology described in this report, a startup core with enrichment zoning and burnable poisons will be designed. Larger fuel pins will be evaluated for their ability to (1) alleviate the problem of positive MTC and (2) increase reactivity-limited burnup. Once the critical boron concentration of the startup core is determined, MTC will be calculated to verify a non-positive value. If the value is positive, the design will be changed to require less soluble boron by, for example, increasing the reactivity hold-down by burnable poisons. Then, the whole core analysis will be repeated until an acceptable design is found. Calculations of the departure from nucleate boiling ratio (DNBR) will be included in the safety evaluation as well. Once a startup core is shown to be viable, subsequent reloads will be simulated by shuffling fuel and introducing fresh fuel. The PASTA code has been updated with material properties of UN fuel from the literature and a model for the diffusion and release of volatile fission products from the SiC matrix material. Preliminary simulations have been performed for both normal conditions and elevated temperatures. These results indicated that the fuel performs well and that the SiC matrix has good retention of the fission products. The path forward for fuel performance work includes improvement of the modeling of metallic fission product release from the kernel. Results should be considered preliminary and further validation is required.
Chaves, J; Barroso, J M; Bultinck, P; Carbó-Dorca, R
2006-01-01
This study presents an alternative to the Electronegativity Equalization Method (EEM) in which the usual Coulomb kernel is transformed into a smooth function. The new framework, like the classical EEM, permits fast calculation of atomic charges in a given molecule at small computational cost. The original EEM procedure requires prior calibration of the atomic hardnesses and electronegativities involved, using a chosen set of molecules. In the new EEM algorithm only half as many parameters need to be calibrated, since a relationship between electronegativities and hardnesses has been found.
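As a rough sketch of the linear algebra behind an EEM charge calculation: electronegativity equalization plus a total-charge constraint yields one small linear system per molecule. The smoothed kernel $1/\sqrt{R^2+d^2}$ and the damping length d below are stand-ins assumed for illustration; the paper's actual smooth replacement for the Coulomb kernel may differ.

```python
import numpy as np

def eem_charges(chi, eta, coords, Qtot=0.0, d=0.4):
    """Solve the EEM system: chi_i + 2*eta_i*q_i + sum_j K_ij q_j = chi_bar,
    subject to sum_i q_i = Qtot. Unknowns are the charges q and chi_bar."""
    N = len(chi)
    R = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = 1.0 / np.sqrt(R**2 + d**2)       # smoothed kernel, finite at R = 0
    A = np.zeros((N + 1, N + 1))
    A[:N, :N] = K
    A[np.arange(N), np.arange(N)] = 2.0 * np.asarray(eta)  # hardness diagonal
    A[:N, N] = -1.0                      # unknown equalized electronegativity
    A[N, :N] = 1.0                       # total-charge constraint row
    b = np.concatenate([-np.asarray(chi), [Qtot]])
    sol = np.linalg.solve(A, b)
    return sol[:N]                       # charges; sol[N] is chi_bar
```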
Propagation and Directional Scattering of Ocean Waves in the Marginal Ice Zone and Neighboring Seas
2015-09-30
expected to be the average of the kernel for 10 s and 12 s. This means that we should be able to calculate empirical formulas for the scattering kernel... floe packing. Thus, establish a way to incorporate what has been done by Squire and co-workers into the wave model paradigm (in which the phase of the... cases observed by Kohout et al. (2014) in Antarctica. vii. Validation: We are planning validation tests for the wave-ice scattering/attenuation model by
Using Adjoint Methods to Improve 3-D Velocity Models of Southern California
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.
2006-12-01
We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form $\delta\chi = \int K_m(\mathbf{x})\,\delta\ln m(\mathbf{x})\,d^3\mathbf{x}$, where $\delta\ln m = \delta m/m$ denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel $K_m$, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field $s^\dagger$ forward in time and reconstructing the regular wave field $s$ backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a non-linear conjugate gradient algorithm to iteratively improve velocity models of southern California.
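Computationally, the 'event kernel' construction described above is just a weighted sum of per-measurement kernels; a minimal sketch, with array shapes assumed for illustration:

```python
import numpy as np

def event_kernel(frechet_kernels, traveltime_anomalies):
    """Weighted sum of single-measurement Fréchet kernels K_i(x), with the
    traveltime anomalies dt_i as weights, giving one 'event' kernel."""
    K = np.asarray(frechet_kernels)        # (n_measurements, nx, ny, nz)
    dt = np.asarray(traveltime_anomalies)  # (n_measurements,)
    return np.tensordot(dt, K, axes=1)     # sum_i dt_i * K_i

# the 'misfit' kernel is then the sum of event kernels over all earthquakes
```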
Bose–Einstein condensation temperature of finite systems
NASA Astrophysics Data System (ADS)
Xie, Mi
2018-05-01
In studies of the Bose–Einstein condensation of ideal gases in finite systems, the divergence problem usually arises in the equation of state. In this paper, we present a technique based on the heat kernel expansion and zeta function regularization to solve the divergence problem, and obtain the analytical expression of the Bose–Einstein condensation temperature for general finite systems. The result is represented by the heat kernel coefficients, where the asymptotic energy spectrum of the system is used. Besides the general case, for systems with exact spectra, e.g. ideal gases in an infinite slab or in a three-sphere, the sums of the spectra can be obtained exactly and the calculation of corrections to the critical temperatures is more direct. For a system confined in a bounded potential, the form of the heat kernel is different from the usual heat kernel expansion. We show that as long as the asymptotic form of the global heat kernel can be found, our method works. For Bose gases confined in three- and two-dimensional isotropic harmonic potentials, we obtain the higher-order corrections to the usual results of the critical temperatures. Our method can also be applied to the problem of generalized condensation, and we give the correction of the boundary on the second critical temperature in a highly anisotropic slab.
Lévy processes on a generalized fractal comb
NASA Astrophysics Data System (ADS)
Sandev, Trifce; Iomin, Alexander; Méndez, Vicenç
2016-09-01
Comb geometry, constituted of a backbone and fingers, is one of the simplest paradigms of a two-dimensional structure in which anomalous diffusion can be realized in the framework of Markov processes. However, the intrinsic properties of the structure can destroy this Markovian transport. These effects can be described by memory and spatial kernels. In particular, the fractal structure of the fingers, which is controlled by the spatial kernel in both real and Fourier space, leads to Lévy processes (Lévy flights) and superdiffusion. This generalization of fractional diffusion is described by the Riesz space-fractional derivative. In the framework of this generalized fractal comb model, Lévy processes are considered, and exact solutions for the probability distribution functions are obtained in terms of the Fox H-function for a variety of memory kernels; the rate of superdiffusive spreading is studied by calculating the fractional moments. For a special form of the memory kernels, we also observe a competition between long rests and long jumps. Finally, we consider the fractal structure of the fingers controlled by a Weierstrass function, which leads to a power-law kernel in Fourier space. This is a special case in which the second moment exists for superdiffusion, in this competition between long rests and long jumps.
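For intuition on the superdiffusive scaling, here is a minimal sketch that simulates symmetric Lévy flights with the Chambers-Mallows-Stuck sampler and checks the fractional-moment growth $\langle |x|^q\rangle \sim t^{q/\alpha}$ for $q < \alpha$; the stability index and step counts are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetric_stable(alpha, size):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable steps."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

alpha, n_steps, n_walkers = 1.5, 1000, 5000
x = np.cumsum(symmetric_stable(alpha, (n_walkers, n_steps)), axis=1)

q = 0.5                                  # fractional moment order, q < alpha
t = np.arange(1, n_steps + 1)
mq = np.mean(np.abs(x) ** q, axis=0)
slope = np.polyfit(np.log(t[100:]), np.log(mq[100:]), 1)[0]
print(f"measured exponent {slope:.3f}, expected {q / alpha:.3f}")
```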
Voronoi cell patterns: Theoretical model and applications
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2011-11-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D: the probability to add a new point inside an existing cell, and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
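In 1D the Voronoi construction is elementary, which makes the target statistics easy to generate: each cell extends halfway to the neighbouring points. A minimal sketch of the empirical cell-size distribution for a homogeneous (Poisson) point set, with periodic boundaries assumed for convenience:

```python
import numpy as np

rng = np.random.default_rng(1)

L, N = 1000.0, 10000
pts = np.sort(rng.uniform(0.0, L, N))          # homogeneous points on a ring

gaps = np.diff(np.concatenate([pts, [pts[0] + L]]))  # periodic spacings
cells = 0.5 * (gaps + np.roll(gaps, 1))        # cell = half-gap on each side

s = cells / cells.mean()                       # normalized cell sizes
pdf, edges = np.histogram(s, bins=50, density=True)  # empirical P(s)
```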
Voronoi Cell Patterns: theoretical model and application to submonolayer growth
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2012-02-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.
Classification of Microarray Data Using Kernel Fuzzy Inference System
Kumar Rath, Santanu
2014-01-01
The DNA microarray classification technique has gained popularity in both research and practice. Real-world datasets such as microarray data contain a huge number of insignificant and irrelevant features that tend to obscure useful information. Feature selection retains the classes with high relevance and the feature sets with high significance, which determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia), using the t-test as a feature selection method. Kernel functions map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical device called the kernel trick. This paper also presents a comparative study of classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance measures from the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are used to analyze the efficiency of the classification models. The results show that the K-FIS model obtains results similar to the SVM model, indicating that the performance of the proposed approach depends on the kernel function. PMID:27433543
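The two ingredients the abstract names, t-test feature ranking and the kernel trick, are easy to sketch; K-FIS itself (the fuzzy inference layer) is more involved and is not reproduced here. Shapes and the gamma value are illustrative assumptions.

```python
import numpy as np

def t_test_scores(X, y):
    """Two-sample t statistic per feature (gene), for a binary class label y."""
    a, b = X[y == 0], X[y == 1]
    num = a.mean(0) - b.mean(0)
    den = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return np.abs(num / den)                 # rank genes by |t|

def rbf_kernel(X, Y, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma ||x_i - y_j||^2): inner products in
    the implicit feature space phi(x), computed without ever forming phi."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# usage: keep the top-k genes by t_test_scores, then hand rbf_kernel to any
# kernel classifier (K-FIS, SVM, ...) in place of raw inner products
```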
[Study on application of SVM in prediction of coronary heart disease].
Zhu, Yue; Wu, Jianghua; Fang, Ying
2013-12-01
Based on blood pressure, plasma lipid, glucose (Glu) and uric acid (UA) data from physical examinations, a support vector machine (SVM) was applied to distinguish coronary heart disease (CHD) patients from non-CHD individuals in a south China population, to guide further prevention and treatment of the disease. First, SVM classifiers were built using a radial basis function (RBF) kernel, a linear kernel and a polynomial kernel, respectively. Second, the SVM penalty factor C and kernel parameter sigma were optimized by particle swarm optimization (PSO) and then employed to diagnose and predict CHD. Compared with a back-propagation (BP) artificial neural network, linear discriminant analysis, logistic regression and non-optimized SVM, the optimized RBF-SVM model showed superior classification performance, with accuracy, sensitivity and specificity of 94.51%, 92.31% and 96.67%, respectively. It is therefore concluded that SVM can serve as a valid method for assisting in the diagnosis of CHD.
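A lightweight sketch of the hyperparameter optimization step, with random search over (C, gamma) standing in for the particle swarm optimizer used in the study; ranges and trial counts are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def tune_rbf_svm(X, y, n_trials=50):
    """Random log-uniform search over (C, gamma), scored by 5-fold CV
    accuracy -- a simple stand-in for PSO over the same two parameters."""
    best_params, best_acc = None, -np.inf
    for _ in range(n_trials):
        C = 10.0 ** rng.uniform(-2, 3)
        gamma = 10.0 ** rng.uniform(-4, 1)
        acc = cross_val_score(SVC(kernel="rbf", C=C, gamma=gamma),
                              X, y, cv=5).mean()
        if acc > best_acc:
            best_params, best_acc = (C, gamma), acc
    return best_params, best_acc
```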
The moving-least-squares-particle hydrodynamics method (MLSPH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dilts, G.
1997-12-31
An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
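For readers unfamiliar with the MLS interpolants the method substitutes for the SPH kernel basis, here is a minimal 1D sketch with a linear basis and a Gaussian weight; the support scale h is an illustrative choice, and real MLSPH uses compactly supported weights in 2D/3D.

```python
import numpy as np

def mls_fit(x_nodes, f_nodes, x_eval, h=0.3):
    """1D moving-least-squares approximation with basis p(x) = [1, x]."""
    out = np.empty_like(x_eval)
    for k, x in enumerate(x_eval):
        w = np.exp(-((x - x_nodes) / h) ** 2)      # weights centred at x
        P = np.column_stack([np.ones_like(x_nodes), x_nodes])
        A = P.T @ (w[:, None] * P)                 # 2x2 moment matrix
        b = P.T @ (w * f_nodes)
        a0, a1 = np.linalg.solve(A, b)
        out[k] = a0 + a1 * x                       # local fit evaluated at x
    return out

xn = np.linspace(0, 1, 20)
fh = mls_fit(xn, np.sin(2 * np.pi * xn), np.linspace(0, 1, 200))
```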
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
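For reference, the (2%, 2 mm) global gamma criterion used in the validation can be sketched in 1D as below; real implementations work on 2D/3D dose grids with interpolation, and the brute-force search here is for clarity, not speed.

```python
import numpy as np

def gamma_pass_rate(ref, ev, dx, dta=2.0, dd=0.02, cutoff=0.10):
    """Global gamma on two 1D dose profiles sampled with spacing dx (mm):
    dose criterion dd of the global max, distance criterion dta (mm),
    pixels below cutoff of the max excluded (the 10% threshold above)."""
    x = np.arange(len(ref)) * dx
    norm = dd * ref.max()
    mask = ref > cutoff * ref.max()
    passed = []
    for i in np.where(mask)[0]:
        dist2 = ((x - x[i]) / dta) ** 2
        dose2 = ((ev - ref[i]) / norm) ** 2
        passed.append(np.sqrt((dist2 + dose2).min()) <= 1.0)
    return np.mean(passed)
```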
Analysis of the cable equation with non-local and non-singular kernel fractional derivative
NASA Astrophysics Data System (ADS)
Karaagac, Berat
2018-02-01
Recently a new concept of differentiation was introduced in the literature where the kernel was converted from non-local singular to non-local and non-singular. One of the great advantages of this new kernel is its ability to portray fading memory and also well defined memory of the system under investigation. In this paper the cable equation which is used to develop mathematical models of signal decay in submarine or underwater telegraphic cables will be analysed using the Atangana-Baleanu fractional derivative due to the ability of the new fractional derivative to describe non-local fading memory. The existence and uniqueness of the more generalized model is presented in detail via the fixed point theorem. A new numerical scheme is used to solve the new equation. In addition, stability, convergence and numerical simulations are presented.
An orthogonal oriented quadrature hexagonal image pyramid
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.
1987-01-01
An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consists of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels map onto one another under rotations of 60 or 120 deg, and likewise for the odd kernels. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing $\sqrt{7}$ larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.
Methods for compressible fluid simulation on GPUs using high-order finite differences
NASA Astrophysics Data System (ADS)
Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer
2017-08-01
We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, they are an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and by decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
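The sixth-order centred stencil underlying such solvers is compact to state; below is a 1D sketch (one line of the 19/55-point 3D stencils) with periodic boundaries, verified against an analytic derivative. This illustrates the discretization, not the paper's GPU code.

```python
import numpy as np

# sixth-order centred first-derivative coefficients for offsets -3..+3
C1 = np.array([-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60])

def ddx(f, dx):
    """Periodic sixth-order first derivative along the last axis."""
    out = np.zeros_like(f)
    for k, c in zip(range(-3, 4), C1):
        out += c * np.roll(f, -k, axis=-1)     # out[i] += c * f[i + k]
    return out / dx

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
err = np.max(np.abs(ddx(np.sin(x), x[1] - x[0]) - np.cos(x)))  # ~1e-10
```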
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen
Purpose: The dosimetric verification of treatment plans in helical tomotherapy usually is carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented, that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm for generation of verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation to point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression
NASA Astrophysics Data System (ADS)
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-08-01
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
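The user-specified tolerance contract can be illustrated with a toy error-bounded quantizer; the study's actual compressor (an error-bounded floating-point compressor with entropy coding) is far more effective, so the sketch below only demonstrates the bounded-error behavior, not a realistic compression ratio.

```python
import numpy as np

def compress(field, tol):
    """Uniform scalar quantization: pointwise error is at most tol/2.
    Entropy-coding the small integers is what would shrink the storage."""
    return np.round(field / tol).astype(np.int32)

def decompress(q, tol):
    return q * tol

strain = np.random.default_rng(2).normal(size=(64, 64, 64)).astype(np.float32)
tol = 1e-3
rec = decompress(compress(strain, tol), tol)
assert np.abs(rec - strain).max() <= tol / 2 + 1e-9   # error bound holds
```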
Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-05-05
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
Hesselmann, Andreas; Görling, Andreas
2011-01-21
A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.
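The computational pattern, extracting the lowest eigenpairs of a generalized eigenvalue problem $A x = \omega B x$, can be sketched with dense stand-in matrices; the actual TDEXX matrices and a Davidson-type iterative solver would replace them in practice.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 200

M = rng.normal(size=(n, n))
A = (M + M.T) / 2                       # symmetric stand-in response matrix
B = np.eye(n) + 0.1 * (M @ M.T) / n     # symmetric positive-definite metric

# five lowest eigenvalues = lowest excitation energies in this analogy
w = eigh(A, B, eigvals_only=True, subset_by_index=[0, 4])
```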
Xyloglucans from flaxseed kernel cell wall: Structural and conformational characterisation.
Ding, Huihuang H; Cui, Steve W; Goff, H Douglas; Chen, Jie; Guo, Qingbin; Wang, Qi
2016-10-20
The structure of the ethanol-precipitated fraction of 1 M KOH-extracted flaxseed kernel polysaccharides (KPI-EPF) was studied to better understand the molecular structures of flaxseed kernel cell wall polysaccharides. Based on methylation/GC-MS, NMR spectroscopy, and MALDI-TOF-MS analysis, the dominant sugar residues of the KPI-EPF fraction comprised (1,4,6)-linked-β-d-glucopyranose (24.1 mol%), terminal α-d-xylopyranose (16.2 mol%), (1,2)-α-d-linked-xylopyranose (10.7 mol%), (1,4)-β-d-linked-glucopyranose (10.7 mol%), and terminal β-d-galactopyranose (8.5 mol%). KPI-EPF was proposed to consist of xyloglucans: the substitution rate of the backbone is 69.3%; R1 could be T-α-d-Xylp-(1→, or none; R2 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, or T-α-l-Araf-(1→2)-α-d-Xylp-(1→; R3 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, T-α-l-Fucp-(1→2)-β-d-Galp-(1→2)-α-d-Xylp-(1→, or none. The Mw of KPI-EPF was calculated to be 1506 kDa by static light scattering (SLS). The structure-sensitive parameter (ρ) of KPI-EPF was calculated as 1.44, which confirms the highly branched structure of the extracted xyloglucans. These new findings on flaxseed kernel xyloglucans will be helpful for understanding their fermentation properties and potential applications. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikell, J; Siman, W; Kappadath, S
2014-06-15
Purpose: 90Y microsphere therapy in liver presents a situation where beta transport is dominant and the tissue is relatively homogeneous. We compare voxel-based absorbed doses from a 90Y kernel to Monte Carlo (MC) using quantitative 90Y bremsstrahlung SPECT/CT as the source distribution. Methods: Liver, normal liver, and tumors were delineated by an interventional radiologist using contrast-enhanced CT registered with 90Y SPECT/CT scans for 14 therapies. The right lung was segmented via region growing. The kernel was generated with 1.04 g/cc soft tissue for 4.8 mm voxels matching the SPECT. MC simulation materials included air, lung, soft tissue, and bone with varying densities. We report the percent difference between kernel and MC (%Δ(K,MC)) for mean absorbed dose, D70, and V20Gy in total liver, normal liver, tumors, and right lung. We also report %Δ(K,MC) for heterogeneity metrics: coefficient of variation (COV) and D10/D90. The impact of spatial resolution (0, 10, 20 mm FWHM) and lung shunt fraction (LSF) (1, 5, 10, 20%) on the accuracy of MC and kernel doses near the liver-lung interface was modeled in 1D. We report the distance from the interface where errors become <10% of unblurred MC as d10(side of interface, dose calculation, FWHM blurring, LSF). Results: The %Δ(K,MC) for mean, D70, and V20Gy in tumor and liver was <7%, while right lung differences varied from 60–90%. The %Δ(K,MC) for COV was <4.8% for tumor and liver and <54% for the right lung. The %Δ(K,MC) for D10/D90 was <5% for 22/23 tumors. d10(liver,MC,10,1–20) were <9 mm and d10(liver,MC,20,1–20) were <15 mm; both agreed within 3 mm with the kernel. d10(lung,MC,10,20), d10(lung,MC,10,1), d10(lung,MC,20,20), and d10(lung,MC,20,1) were 6, 25, 15, and 34 mm, respectively. Kernel calculations on blurred distributions in lung had errors >10%. Conclusions: Liver and tumor voxel doses with the 90Y kernel and MC agree within 7%. Large differences exist between the two methods in the right lung. Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA138986. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
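The kernel-based arm of such a comparison is, at its core, a convolution of the activity map with a dose-voxel kernel, which is exactly why it inherits a homogeneous-medium assumption and degrades in low-density lung. A minimal sketch, with array shapes assumed for illustration:

```python
import numpy as np
from scipy.signal import fftconvolve

def kernel_dose(activity, dose_voxel_kernel):
    """Voxel dose = 3D activity map (e.g. from quantitative 90Y SPECT)
    convolved with a soft-tissue dose-voxel kernel; assumes homogeneous
    tissue everywhere, unlike Monte Carlo."""
    return fftconvolve(activity, dose_voxel_kernel, mode="same")

def pct_diff(kernel, mc):
    """Per-voxel %Δ(K,MC) as used above."""
    return 100.0 * (kernel - mc) / mc
```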
Wang, Gang; Wang, Yalin
2017-02-15
In this paper, we propose a heat kernel based regional shape descriptor that may be capable of better exploiting volumetric morphological information than other available methods, thereby improving statistical power on brain magnetic resonance imaging (MRI) analysis. The mechanism of our analysis is driven by the graph spectrum and the heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral meshes. In order to capture profound brain grey matter shape changes, we first use the volumetric Laplace-Beltrami operator to determine the point pair correspondence between white-grey matter and CSF-grey matter boundary surfaces by computing the streamlines in a tetrahedral mesh. Secondly, we propose multi-scale grey matter morphology signatures to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the grey matter morphology signatures and generate the internal structure features. With the sparse linear discriminant analysis, we select a concise morphology feature set with improved classification accuracies. In our experiments, the proposed work outperformed the cortical thickness features computed by FreeSurfer software in the classification of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, on publicly available data from the Alzheimer's Disease Neuroimaging Initiative. The multi-scale and physics based volumetric structure feature may bring stronger statistical power than some traditional methods for MRI-based grey matter morphology analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
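A standard building block behind heat-kernel shape descriptors is the heat kernel signature, $\mathrm{HKS}(x,t)=\sum_i e^{-\lambda_i t}\,\phi_i(x)^2$, computed from Laplacian eigenpairs; the sketch below uses a dense eigensolver on a generic Laplacian matrix and is only a simplified analogue of the paper's volumetric, tetrahedral-mesh pipeline.

```python
import numpy as np

def heat_kernel_signature(L, times, k=100):
    """Multi-scale descriptor from the k lowest eigenpairs of a Laplacian L
    (n x n, symmetric); returns an (n, len(times)) array of HKS values."""
    lam, phi = np.linalg.eigh(L)        # use sparse/Lanczos solvers at scale
    lam, phi = lam[:k], phi[:, :k]
    return np.stack([(np.exp(-lam * t) * phi**2).sum(axis=1) for t in times],
                    axis=1)
```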
Effects of study area size on home range estimates of common bottlenose dolphins Tursiops truncatus
Nekolny, Samantha R; Denny, Matthew; Biedenbach, George; Howells, Elisabeth M; Mazzoil, Marilyn; Durden, Wendy N; Moreland, Lydia; David Lambert, J
2017-01-01
Knowledge of an animal’s home range is a crucial component in making informed management decisions. However, many home range studies are limited by study area size, and therefore may underestimate the size of the home range. In many cases, individuals have been shown to travel outside of the study area and utilize a larger area than estimated by the study design. In this study, data collected by multiple research groups studying bottlenose dolphins on the east coast of Florida were combined to determine how home range estimates increased with increasing study area size. Home range analyses utilized photo-identification data collected from 6 study areas throughout the St Johns River (SJR; Jacksonville, FL, USA) and adjacent waterways, extending a total of 253 km to the southern end of Mosquito Lagoon in the Indian River Lagoon Estuarine System. Univariate kernel density estimates (KDEs) were computed for individuals with 10 or more sightings (n = 20). Kernels were calculated for the primary study area (SJR) first, then additional kernels were calculated by combining the SJR and the next adjacent waterway; this continued in an additive fashion until all study areas were included. The 95% and 50% KDEs calculated for the SJR alone ranged from 21 to 35 km and 4 to 19 km, respectively. The 95% and 50% KDEs calculated for all combined study areas ranged from 116 to 217 km and 9 to 70 km, respectively. This study illustrates the degree to which home range may be underestimated by the use of limited study areas and demonstrates the benefits of conducting collaborative science. PMID:29492031
Effects of study area size on home range estimates of common bottlenose dolphins Tursiops truncatus.
Nekolny, Samantha R; Denny, Matthew; Biedenbach, George; Howells, Elisabeth M; Mazzoil, Marilyn; Durden, Wendy N; Moreland, Lydia; David Lambert, J; Gibson, Quincy A
2017-12-01
Knowledge of an animal's home range is a crucial component in making informed management decisions. However, many home range studies are limited by study area size, and therefore may underestimate the size of the home range. In many cases, individuals have been shown to travel outside of the study area and utilize a larger area than estimated by the study design. In this study, data collected by multiple research groups studying bottlenose dolphins on the east coast of Florida were combined to determine how home range estimates increased with increasing study area size. Home range analyses utilized photo-identification data collected from 6 study areas throughout the St Johns River (SJR; Jacksonville, FL, USA) and adjacent waterways, extending a total of 253 km to the southern end of Mosquito Lagoon in the Indian River Lagoon Estuarine System. Univariate kernel density estimates (KDEs) were computed for individuals with 10 or more sightings (n = 20). Kernels were calculated for the primary study area (SJR) first, then additional kernels were calculated by combining the SJR and the next adjacent waterway; this continued in an additive fashion until all study areas were included. The 95% and 50% KDEs calculated for the SJR alone ranged from 21 to 35 km and 4 to 19 km, respectively. The 95% and 50% KDEs calculated for all combined study areas ranged from 116 to 217 km and 9 to 70 km, respectively. This study illustrates the degree to which home range may be underestimated by the use of limited study areas and demonstrates the benefits of conducting collaborative science.
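Since the ranges here are measured in kilometres along a waterway, the estimator is univariate; a minimal sketch of a fixed-kernel 95% KDE "range length" follows, where the grid size and padding are illustrative choices and bandwidth selection uses scipy's default rather than the study's.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_range_1d(s, frac=0.95, grid_n=2000):
    """Length (same units as s) of the `frac` utilization distribution for
    sighting positions s along a 1D axis, via a fixed-kernel KDE."""
    kde = gaussian_kde(s)
    pad = 0.2 * (s.max() - s.min())
    g = np.linspace(s.min() - pad, s.max() + pad, grid_n)
    z = kde(g)
    cell = g[1] - g[0]
    zs = np.sort(z)[::-1]                       # densest cells first
    cum = np.cumsum(zs) * cell                  # accumulated probability mass
    thr = zs[min(np.searchsorted(cum, frac), len(zs) - 1)]
    return (z >= thr).sum() * cell              # e.g. km of waterway in 95% KDE
```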
Unified connected theory of few-body reaction mechanisms in N-body scattering theory
NASA Technical Reports Server (NTRS)
Polyzou, W. N.; Redish, E. F.
1978-01-01
A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators $T_{\pm}^{ab}(A)$ are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating $T_{\pm}^{ab}(A)$ to the full $T_{\pm}^{ab}$ allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.
NASA Astrophysics Data System (ADS)
Güleçyüz, M. Ç.; Şenyiğit, M.; Ersoy, A.
2018-01-01
The Milne problem is studied in one-speed neutron transport theory using the linearly anisotropic scattering kernel, which combines forward and backward scattering (extremely anisotropic scattering), for a non-absorbing medium with specular and diffuse reflection boundary conditions. In order to calculate the extrapolated endpoint for the Milne problem, the Legendre polynomial approximation (PN method) is applied and numerical results are tabulated for selected cases as a function of different degrees of anisotropic scattering. Finally, some results are discussed and compared with existing results in the literature.
Calculation of the time resolution of the J-PET tomograph using kernel density estimation
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
2017-06-01
In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial coordinates and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to improve significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
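A stripped-down analogue of the classifier, Nadaraya-Watson style voting with a bandwidth that adapts to local data density, is sketched below. The isotropic Gaussian kernel and the k-nearest-neighbour bandwidth rule are simplifications assumed for illustration; the paper's kernels are anisotropic and steered by local correlation.

```python
import numpy as np

def kernel_classify(X_obs, y_obs, X_grid, h0=1.0, k=5):
    """Assign each grid point the facies with the largest kernel-weighted
    vote; each datum's bandwidth scales with its distance to its k-th
    neighbour, so sparse data get wider kernels."""
    d = np.linalg.norm(X_obs[:, None, :] - X_obs[None, :, :], axis=-1)
    h = h0 * np.sort(d, axis=1)[:, min(k, len(X_obs) - 1)]
    classes = np.unique(y_obs)
    out = np.empty(len(X_grid), dtype=classes.dtype)
    for i, x in enumerate(X_grid):
        w = np.exp(-np.linalg.norm(x - X_obs, axis=1) ** 2 / (2 * h**2))
        out[i] = classes[np.argmax([w[y_obs == c].sum() for c in classes])]
    return out
```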
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
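The superposition idea is easy to sketch for a single stationary kernel: estimate scatter by convolving the current primary estimate with the kernel, subtract, and iterate. The adaptive variants above additionally reshape the kernel with local object thickness, which this toy version omits.

```python
import numpy as np
from scipy.signal import fftconvolve

def sks_correct(projection, scatter_kernel, n_iter=3):
    """Iterative scatter subtraction with one fixed kernel (its integral
    should be the scatter fraction, < 1, for the iteration to settle)."""
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(primary, scatter_kernel, mode="same")
        primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter
```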
Sensitivity kernels for viscoelastic loading based on adjoint methods
NASA Astrophysics Data System (ADS)
Al-Attar, David; Tromp, Jeroen
2014-01-01
Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the 'adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional $J$ defined in terms of the surface deformation fields, we show that its first-order perturbation can be written $\delta J = \int_{M_S} K_{\eta}\,\delta\ln\eta\,dV + \int_{t_0}^{t_1}\!\int_{\partial M} K_{\dot{\sigma}}\,\delta\dot{\sigma}\,dS\,dt$, where $\delta\ln\eta = \delta\eta/\eta$ denotes relative viscosity variations in solid regions $M_S$, $dV$ is the volume element, $\delta\dot{\sigma}$ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface $\partial M$ and for times $[t_0, t_1]$, and $dS$ is the surface element on $\partial M$. The 'viscosity kernel' $K_{\eta}$ determines the linearized sensitivity of $J$ to viscosity perturbations defined with respect to a laterally heterogeneous reference earth model, while the 'rate-of-loading kernel' $K_{\dot{\sigma}}$ determines the sensitivity to variations in the time derivative of the surface load. By restricting attention to spherically symmetric viscosity perturbations, we also obtain a 'radial viscosity kernel' $\overline{K}_{\eta}$ such that the associated contribution to $\delta J$ can be written $\int_{I_S} \overline{K}_{\eta}\,\delta\ln\eta\,dr$, where $I_S$ denotes the subset of radii lying in solid regions. In order to illustrate this theory, we describe its numerical implementation in the case of a spherically symmetric earth model using a 1-D spectral element method, and calculate sensitivity kernels for a range of realistic observables.
On the solution of integral equations with a generalized cauchy kernel
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1986-01-01
In this paper a certain class of singular integral equations that may arise from mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that, in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement, and consequently the kernel has strong singularities of the form $(t-x)^{-2}$ and $x^{n-2}(t+x)^{-n}$ $(n \ge 2,\ 0 < x, t < b)$. The complex function theory is used to determine the fundamental function of the problem for the general case, and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.
Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan
2007-11-01
Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. However, it cannot capture the higher-order statistical properties of natural images, so its detection performance is unsatisfactory. PCA has been extended to kernel PCA in order to capture higher-order statistics. However, thus far no kernel version of FKT (KFKT) had been explicitly proposed or its detection performance studied. To accurately detect potential small targets in infrared images, we first extend FKT to KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Experimental results show that KFKT outperforms FKT and that the proposed framework is competent to automatically detect and track infrared point targets.
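For reference, the classical FKT can be sketched in a few lines: whiten the summed class covariances, then eigendecompose one class in the whitened space; features that best represent one class are automatically worst for the other. This is plain FKT, not the kernelized extension the paper proposes, and it assumes full-rank covariances.

```python
import numpy as np

def fkt_basis(X1, X2):
    """Fukunaga-Koontz transform from two data matrices (samples x dims).
    Returns the shared basis and class-1 eigenvalues d in [0, 1]; the
    class-2 eigenvalues on the same basis are 1 - d."""
    S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
    lam, Phi = np.linalg.eigh(S1 + S2)
    P = Phi / np.sqrt(lam)              # whitening: P.T @ (S1+S2) @ P = I
    d, V = np.linalg.eigh(P.T @ S1 @ P)
    return P @ V, d
```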
NASA Astrophysics Data System (ADS)
Mancinelli, N. J.; Fischer, K. M.
2018-03-01
We characterize the spatial sensitivity of Sp converted waves to improve constraints on lateral variations in uppermost-mantle velocity gradients, such as the lithosphere-asthenosphere boundary (LAB) and the mid-lithospheric discontinuities. We use SPECFEM2D to generate 2-D scattering kernels that relate perturbations from an elastic half-space to Sp waveforms. We then show that these kernels can be well approximated using ray theory, and develop an approach to calculating kernels for layered background models. As proof of concept, we show that lateral variations in uppermost-mantle discontinuity structure are retrieved by implementing these scattering kernels in the first iteration of a conjugate-directions inversion algorithm. We evaluate the performance of this technique on synthetic seismograms computed for 2-D models with undulations on the LAB of varying amplitude, wavelength and depth. The technique reliably images the position of discontinuities with dips <35° and horizontal wavelengths >100-200 km. In cases of mild topography on a shallow LAB, the relative brightness of the LAB and Moho converters approximately agrees with the ratio of velocity contrasts across the discontinuities. Amplitude retrieval degrades at deeper depths. For dominant periods of 4 s, the minimum station spacing required to produce unaliased results is 5 km, but the application of a Gaussian filter can improve discontinuity imaging where station spacing is greater.
Selection and properties of alternative forming fluids for TRISO fuel kernel production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M. P.; King, J. C.; Gorman, B. P.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
Application of stochastic weighted algorithms to a multidimensional silica particle model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang
2013-09-01
Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs’ solutions converge to the direct simulation algorithm’s (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.
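For orientation, a direct simulation of coagulation with a constant kernel is sketched below; the transition-regime kernel in the paper additionally requires majorant rates and fictitious jumps, and SWAs would further attach statistical weights to the particles. Rates and counts here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def dsa_constant_kernel(n0=10000, K=1.0, t_end=5.0):
    """Gillespie-style direct simulation of Smoluchowski coagulation with a
    constant kernel; masses start monodisperse at 1."""
    m = np.ones(n0)
    t, n = 0.0, n0
    while t < t_end and n > 1:
        rate = K * n * (n - 1) / 2            # total pair-coagulation rate
        t += rng.exponential(1.0 / rate)      # time to next event
        i, j = rng.choice(n, size=2, replace=False)
        m[i] += m[j]                          # merge j into i
        m[j] = m[n - 1]                       # delete j by swap-with-tail
        n -= 1
    return m[:n]

masses = dsa_constant_kernel()
```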
Selection and properties of alternative forming fluids for TRISO fuel kernel production
NASA Astrophysics Data System (ADS)
Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.
2013-01-01
Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
Javed, Faizan; Chan, Gregory S H; Savkin, Andrey V; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H
2009-01-01
This paper uses non-linear support vector regression (SVR) to model the blood volume and heart rate (HR) responses in 9 hemodynamically stable kidney failure patients during hemodialysis. Using radial basis function (RBF) kernels, non-parametric models of the relative blood volume (RBV) change with time, as well as of the percentage change in HR with respect to RBV, were obtained. The ε-insensitive loss function was used for SVR modeling. Selection of the design parameters, which include the capacity (C), the insensitivity region (ε), and the RBF kernel parameter (σ), was based on a grid search approach, and the selected models were cross-validated using the average mean square error (AMSE) calculated from testing data based on a k-fold cross-validation technique. Linear regression was also applied to fit the curves and the AMSE was calculated for comparison with SVR. For the model of RBV with time, SVR gave a lower AMSE for both training (AMSE=1.5) and testing data (AMSE=1.4) compared to linear regression (AMSE=1.8 and 1.5). SVR also provided a better fit for HR with RBV for both training and testing data (AMSE=15.8 and 16.4) compared to linear regression (AMSE=25.2 and 20.1).
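The grid search over (C, ε, σ) with k-fold cross-validation described above maps directly onto standard tooling. A minimal sketch with scikit-learn, using synthetic stand-in data rather than the study's patient records (scikit-learn's `gamma` plays the role of the RBF width parameter σ):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, KFold

t = np.linspace(0, 240, 120).reshape(-1, 1)              # dialysis time, min (synthetic)
rbv = -3.0 * (1 - np.exp(-t / 60)).ravel() + 0.2 * np.random.randn(120)

grid = {"C": [1, 10, 100], "epsilon": [0.05, 0.1, 0.5], "gamma": [0.001, 0.01, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), grid,
                      scoring="neg_mean_squared_error",    # AMSE-style criterion
                      cv=KFold(n_splits=5))
search.fit(t, rbv)
print(search.best_params_, -search.best_score_)            # held-out mean square error
```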
Pixel-based meshfree modelling of skeletal muscles.
Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu
2016-01-01
This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for the construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase, multichannel level-set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.
Estimating average growth trajectories in shape-space using kernel smoothing.
Hutton, Tim J; Buxton, Bernard F; Hammond, Peter; Potts, Henry W W
2003-06-01
In this paper, we show how a dense surface point distribution model of the human face can be computed and demonstrate the usefulness of the high-dimensional shape-space for expressing the shape changes associated with growth and aging. We show how average growth trajectories for the human face can be computed in the absence of longitudinal data by using kernel smoothing across a population. A training set of three-dimensional surface scans of 199 male and 201 female subjects of between 0 and 50 years of age is used to build the model.
On the solution of integral equations with strongly singular kernels
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1986-01-01
Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
On the solution of integral equations with strongly singular kernels
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1985-01-01
In this paper some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
On the solution of integral equations with strongly singular kernels
NASA Technical Reports Server (NTRS)
Kaya, A. C.; Erdogan, F.
1987-01-01
Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^(-m), m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^(-m), terms which become unbounded at the end points, the present technique appears to be extremely effective in obtaining rapidly converging numerical results.
A shock-capturing SPH scheme based on adaptive kernel estimation
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime
2006-02-01
Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
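For readers unfamiliar with the kernel summation the abstract refers to, a minimal 1-D SPH density estimate with a fixed smoothing length is sketched below; the paper's adaptive kernel estimation additionally tunes the smoothing per particle, which this sketch does not attempt:

```python
import numpy as np

def w_cubic(q):
    """Standard 1-D cubic-spline SPH kernel shape (normalization applied by caller)."""
    q = np.abs(q)
    return np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
           np.where(q < 2, 0.25 * (2 - q)**3, 0.0))

def sph_density(x, m, h):
    """Kernel summation rho_i = sum_j m_j W(x_i - x_j, h), fixed h."""
    dx = (x[:, None] - x[None, :]) / h
    return (m[None, :] * w_cubic(dx)).sum(axis=1) * (2.0 / (3.0 * h))

x = np.sort(np.random.rand(200))     # particle positions in [0, 1]
m = np.full(200, 1.0 / 200)          # equal masses, total mass 1
print(sph_density(x, m, h=0.02)[:5])
```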
NASA Astrophysics Data System (ADS)
Shah-Heydari pour, A.; Pahlavani, P.; Bigdeli, B.
2017-09-01
With the industrialization of cities and the marked increase in pollutants and greenhouse gases, the importance of forests, the natural lungs of the earth, in cleaning these pollutants is felt more than ever. Annually, a large part of the world's forests is destroyed for lack of timely action during fires. Knowledge of areas with a high risk of fire, and equipping those areas with access routes and fire-fighting equipment, can help prevent the destruction of the forest. In this research, the fire risk of the region was forecast and a risk map was produced from MODIS images by applying a geographically weighted regression (GWR) model with a Gaussian kernel and ordinary least squares (OLS) to the parameters affecting forest fire, including distance from residential areas, distance from the river, distance from the road, height, slope, aspect, soil type, land use, average temperature, wind speed, and rainfall. The evaluation showed that the GWR model with a Gaussian kernel correctly forecast 93.4% of all fire points, whereas the OLS method correctly forecast only 66% of the fire points.
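A geographically weighted regression with a Gaussian kernel reduces, at each query location, to a weighted least-squares fit with distance-decaying weights. A minimal sketch under that reading, with synthetic coordinates and covariates standing in for the fire-risk parameters listed above:

```python
import numpy as np

def gwr_fit_at(u, coords, X, y, bandwidth):
    """Weighted least squares at location u with Gaussian distance decay."""
    d = np.linalg.norm(coords - u, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
    Xw = X * w[:, None]                          # applies W to the normal equations
    beta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)
    return beta

coords = np.random.rand(500, 2)                  # synthetic observation locations
X = np.column_stack([np.ones(500), np.random.rand(500, 3)])  # intercept + 3 covariates
y = X @ np.array([0.2, 1.0, -0.5, 0.3]) + 0.1 * np.random.randn(500)
print(gwr_fit_at(np.array([0.5, 0.5]), coords, X, y, bandwidth=0.2))
```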
Kanematsu, Nobuyuki
2011-04-01
This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. The interplay between the arrangement of beams and grids was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
The effect of carbon crystal structure on treat reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swanson, R.W.; Harrison, L.J.
1988-01-01
The Transient Reactor Test Facility (TREAT) at Argonne National Laboratory-West (ANL-W) is fueled with urania in a graphite and carbon mixture. This fuel was fabricated from a mixture of graphite flour, thermax (a thermatomic carbon produced by ''cracking'' natural gas), coal-tar resin and U3O8. During the fabrication process, the fuel was baked to dissociate the resin, but the high temperature necessary to graphitize the carbon in the thermax and in the resin was avoided. Therefore, the carbon crystal structure is a complex mixture of graphite particles in a nongraphitized elemental carbon matrix. Results of calculations using macroscopic carbon cross sections obtained by mixing bound-kernel graphite cross sections for the graphitized carbon and free-gas carbon cross sections for the remainder of the carbon and calculations using only bound-kernel graphite cross sections are compared to experimental data. It is shown that the use of the hybridized cross sections which reflect the allotropic mixture of the carbon in the TREAT fuel results in a significant improvement in the accuracy of calculated neutronics parameters for the TREAT reactor. 6 refs., 2 figs., 3 tabs.
Handling Density Conversion in TPS.
Isobe, Tomonori; Mori, Yutaro; Takei, Hideyuki; Sato, Eisuke; Tadano, Kiichi; Kobayashi, Daisuke; Tomita, Tetsuya; Sakae, Takeji
2016-01-01
Conversion from CT value to density is essential to a radiation treatment planning system. In photon therapy, the CT value is generally converted to electron density. In the energy range of therapeutic photons, interactions between photons and materials are dominated by Compton scattering, whose cross-section depends on the electron density. The dose distribution is obtained by calculating TERMA and kernel using the electron density, where TERMA is the energy transferred from primary photons and the kernel describes the volume over which the liberated electrons spread. Recently, a new method was introduced which uses the physical density. This method is expected to be faster and more accurate than that using the electron density. As for particle therapy, dose can be calculated with a CT-to-stopping-power conversion, since the stopping power depends on the electron density. The CT-to-stopping-power conversion table is also called the CT-to-water-equivalent-range table and is an essential concept for particle therapy.
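In practice the CT-to-density conversion described above is a piecewise-linear lookup. A minimal sketch with illustrative, non-clinical breakpoints:

```python
import numpy as np

# Calibration table: CT value (HU) -> physical density (g/cm^3).
# Breakpoints here are illustrative placeholders, not a clinical calibration.
hu_points      = np.array([-1000, -100,    0,  100, 1000, 3000])
density_points = np.array([0.001, 0.93, 1.00, 1.07, 1.55, 2.50])

def hu_to_density(hu):
    """Piecewise-linear interpolation of the calibration curve."""
    return np.interp(hu, hu_points, density_points)

print(hu_to_density(np.array([-500, 0, 800])))
```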
Production of Low Enriched Uranium Nitride Kernels for TRISO Particle Irradiation Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMurray, J. W.; Silva, C. M.; Helmreich, G. W.
2016-06-01
A large batch of UN microspheres to be used as kernels for TRISO particle fuel was produced using carbothermic reduction and nitriding of a sol-gel feedstock bearing tailored amounts of low-enriched uranium (LEU) oxide and carbon. The process parameters, established in a previous study, produced phase-pure NaCl-structure UN with dissolved C on the N sublattice. The composition, calculated by refinement of the lattice parameter from X-ray diffraction, was determined to be UC0.27N0.73. The final accepted product weighed 197.4 g. The microspheres had an average diameter of 797±1.35 μm and a composite mean theoretical density of 89.9±0.5% for a solid solution of UC and UN with the same atomic ratio; both values are reported with their corresponding calculated standard error.
Learning molecular energies using localized graph kernels.
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-21
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
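As a rough illustration of the random walk graph kernel named above, the geometric variant can be written in closed form via the Kronecker product of the two adjacency matrices. A minimal sketch with a hypothetical decay weight `lam` and toy local environments, not the GRAPE implementation itself:

```python
import numpy as np

def random_walk_kernel(A, B, lam=0.05):
    """Geometric random-walk kernel: sum over walk lengths k of lam^k times the
    number of common walks, computed as 1^T (I - lam*W)^(-1) 1 with W = A (x) B.
    Valid when lam is below 1/spectral-radius(W)."""
    W = np.kron(A, B)
    n = W.shape[0]
    x = np.linalg.solve(np.eye(n) - lam * W, np.ones(n))
    return x.sum()

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # local environment 1
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)  # local environment 2
print(random_walk_kernel(A, B))
```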
Learning molecular energies using localized graph kernels
NASA Astrophysics Data System (ADS)
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-01
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
Progress in understanding fission-product behaviour in coated uranium-dioxide fuel particles
NASA Astrophysics Data System (ADS)
Barrachin, M.; Dubourg, R.; Kissane, M. P.; Ozrin, V.
2009-03-01
Supported by results of calculations performed with two analytical tools (MFPR, which takes account of physical and chemical mechanisms in calculating the chemical forms and physical locations of fission products in UO2, and MEPHISTA, a thermodynamic database), this paper presents an investigation of some important aspects of the fuel microstructure and chemical evolutions of irradiated TRISO particles. The following main conclusions can be identified with respect to irradiated TRISO fuel: first, the relatively low oxygen potential within the fuel particles with respect to PWR fuel leads to chemical speciation that is not typical of PWR fuels, e.g., the relatively volatile behaviour of barium; secondly, the safety-critical fission-product caesium is released from the urania kernel but the buffer and pyrolytic-carbon coatings could form an important chemical barrier to further migration (i.e., formation of carbides). Finally, significant releases of fission gases from the urania kernel are expected even in nominal conditions.
Rediscovering the Kernels of Truth in the Urban Legends of the Freshman Composition Classroom
ERIC Educational Resources Information Center
Lovoy, Thomas
2004-01-01
English teachers, as well as teachers within other disciplines, often boil down abstract principles to easily explainable bullet points. Students often pick up and retain these points but fail to grasp the broader contexts that make them relevant. It is therefore sometimes helpful to revisit some of the more common of these "rules of thumb" to…
Hirayama, Shusuke; Matsuura, Taeko; Ueda, Hideaki; Fujii, Yusuke; Fujii, Takaaki; Takao, Seishin; Miyamoto, Naoki; Shimizu, Shinichi; Fujimoto, Rintaro; Umegaki, Kikuo; Shirato, Hiroki
2018-05-22
To evaluate the biological effects of proton beams as part of daily clinical routine, fast and accurate calculation of dose-averaged linear energy transfer (LET_d) is required. In this study, we have developed an analytical LET_d calculation method based on the pencil-beam algorithm (PBA), considering the off-axis enhancement by secondary protons. This algorithm (PBA-dLET) was then validated using Monte Carlo simulation (MCS) results. In PBA-dLET, LET values were assigned separately for each individual dose kernel based on the PBA. For the dose kernel, we employed a triple Gaussian model which consists of the primary component (protons that undergo multiple Coulomb scattering) and the halo component (protons that undergo inelastic, nonelastic and elastic nuclear reactions); the primary and halo components were represented by a single Gaussian and the sum of two Gaussian distributions, respectively. Although previous analytical approaches assumed a constant LET_d value across the lateral distribution of a pencil beam, the actual LET_d increases away from the beam axis, because there are more scattered and therefore lower-energy protons with higher stopping powers. To reflect this LET_d behavior, we have assumed that the LETs of the primary and halo components can take different values (LET_p and LET_halo), which vary only along the depth direction. The values of the dual-LET kernels were determined such that PBA-dLET reproduced the MCS-generated LET_d distribution in both small and large fields. These values were generated at intervals of 1 mm in depth for 96 energies from 70.2 to 220 MeV and collected in a look-up table. Finally, we compared the LET_d distributions and mean LET_d (LET_d,mean) values of targets and organs at risk between PBA-dLET and MCS. Both homogeneous phantom and patient geometries (prostate, liver, and lung cases) were used to validate the present method. In the homogeneous phantom, the LET_d profiles obtained by the dual-LET kernels agree well with the MCS results except for the low-dose region in the lateral penumbra, where the actual dose was below 10% of the maximum dose. In the patient geometry, the LET_d profiles calculated with the developed method reproduce the MCS with similar accuracy as in the homogeneous phantom. The maximum differences in LET_d,mean for each structure between PBA-dLET and MCS were 0.06 keV/μm in homogeneous phantoms and 0.08 keV/μm in patient geometries under all tested conditions. We confirmed that the dual-LET-kernel model reproduces the MCS well, not only in the homogeneous phantom but also in complex patient geometries. The accuracy of LET_d was largely improved over the single-LET-kernel model, especially at the lateral penumbra. The model is expected to be useful, especially for proper recognition of the risk of side effects when the target is next to critical organs. © 2018 American Association of Physicists in Medicine.
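The dose-averaging step implied by the dual-LET kernel description reduces, per voxel, to weighting each component's LET by the dose it deposits. A minimal sketch of that generic rule, with illustrative numbers; the paper's depth-dependent look-up tables and kernel parametrization are not reproduced:

```python
def let_d(dose_primary, dose_halo, let_p, let_halo):
    """Dose-averaged LET (keV/um) from primary and halo contributions at a voxel."""
    total = dose_primary + dose_halo
    return (dose_primary * let_p + dose_halo * let_halo) / total

# Illustrative values: halo protons carry little dose but higher LET.
print(let_d(dose_primary=0.9, dose_halo=0.1, let_p=2.5, let_halo=6.0))  # -> 2.85
```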
NASA Astrophysics Data System (ADS)
Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry
2018-04-01
Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using a tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested using a dataset of 9 kernels extracted by Adobe from real photographs for which the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the magnitude of the reconstruction error.
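One common way to realize "ART with regularization, solved with conjugate gradients" is to apply CG to the Tikhonov normal equations. A minimal sketch under that assumption, with a synthetic projection matrix standing in for the tomographic geometry:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

m, n = 120, 64                         # projections x kernel pixels (illustrative)
A = np.random.rand(m, n)               # stand-in projection matrix
x_true = np.zeros(n); x_true[30:34] = 1.0
b = A @ x_true + 0.01 * np.random.randn(m)

alpha = 0.1                            # Tikhonov regularization weight
# Solve (A^T A + alpha I) x = A^T b with CG, without forming A^T A explicitly.
normal_op = LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x) + alpha * x)
x_rec, info = cg(normal_op, A.T @ b)
print(info, np.linalg.norm(x_rec - x_true))
```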
Generalized time-dependent Schrödinger equation in two dimensions under constraints
NASA Astrophysics Data System (ADS)
Sandev, Trifce; Petreska, Irina; Lenzi, Ervin K.
2018-01-01
We investigate a generalized two-dimensional time-dependent Schrödinger equation on a comb with a memory kernel. A Dirac delta term is introduced in the Schrödinger equation so that the quantum motion along the x-direction is constrained at y = 0. The wave function is analyzed by using Green's function approach for several forms of the memory kernel, which are of particular interest. Closed form solutions for the cases of Dirac delta and power-law memory kernels in terms of Fox H-function, as well as for a distributed order memory kernel, are obtained. Further, a nonlocal term is also introduced and investigated analytically. It is shown that the solution for such a case can be represented in terms of infinite series in Fox H-functions. Green's functions for each of the considered cases are analyzed and plotted for the most representative ones. Anomalous diffusion signatures are evident from the presence of the power-law tails. The normalized Green's functions obtained in this work are of broader interest, as they are an important ingredient for further calculations and analyses of some interesting effects in the transport properties in low-dimensional heterogeneous media.
Boisdenghien, Zino; Fias, Stijn; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul
2014-07-28
Most of the work done on the linear response kernel χ(r,r') has focussed on its atom-atom condensed form χAB. Our previous work [Boisdenghien et al., J. Chem. Theory Comput., 2013, 9, 1007] was the first effort to truly focus on the non-condensed form of this function for closed (sub)shell atoms in a systematic fashion. In this work, we extend our method to the open shell case. To simplify plotting, we average our results to a symmetrized quantity χ(r,r'). This allows us to plot the linear response kernel for all elements up to and including argon and to investigate the periodicity throughout the first three rows of the periodic table and in the different representations of χ(r,r'). Within the context of Spin Polarized Conceptual Density Functional Theory, the first two-dimensional plots of spin polarized linear response functions are presented and commented on for some selected cases on the basis of the atomic ground state electronic configurations. Using the relation between the linear response kernel and the polarizability, we compare the values of the polarizability tensor calculated using our method to high-level values.
Fission Product Release and Survivability of UN-Kernel LWR TRISO Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besmann, Theodore M; Ferber, Mattison K; Lin, Hua-Tay
2014-01-01
A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from range calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle as a function of fluence. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers as a function of fluence. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
Fission product release and survivability of UN-kernel LWR TRISO fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. M. Besmann; M. K. Ferber; H.-T. Lin
2014-05-01
A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from fission product recoil calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered and internal particle pressures computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle undergoing burnup. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers from internal pressure and thermomechanics of the layers. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.
NASA Astrophysics Data System (ADS)
Abdulhameed, M.; Vieru, D.; Roslan, R.
2017-10-01
This paper investigates the electro-magneto-hydrodynamic flow of a non-Newtonian biofluid, with heat transfer, through a cylindrical microchannel. The fluid is driven by an arbitrary time-dependent pressure gradient, an external electric field and an external magnetic field. The governing equations are formulated as fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivative without singular kernel. The usefulness of fractional calculus in the study of fluid flows and heat and mass transfer phenomena has been demonstrated, and several experimental measurements have led to the conclusion that, in such problems, models described by fractional differential equations are more suitable. The most common time-fractional derivative used in continuum mechanics is the Caputo derivative. However, two disadvantages appear when this derivative is used. First, its definition kernel is a singular function; second, the analytical solutions of the problem are expressed in terms of generalized functions (Mittag-Leffler, Lorenzo-Hartley, Robotnov, etc.) which, generally, are not well suited to numerical calculation. The new Caputo-Fabrizio time-fractional derivative, without singular kernel, is more suitable for solving theoretical and practical problems involving fractional differential equations. Using the Caputo-Fabrizio derivative, the calculations are simpler and the obtained solutions are expressed in terms of elementary functions. Analytical solutions for the biofluid velocity and thermal transport are obtained by means of the Laplace and finite Hankel transforms. The influence of the fractional parameter, the Eckert number and the Joule heating parameter on the biofluid velocity and thermal transport is analyzed numerically and presented graphically. This can be important in biochip technology, making this analysis technique effective for controlling nanovolume bioliquid samples in the microfluidic devices used for biological analysis and medical diagnosis.
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.
Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis
2017-10-16
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given either by the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points to the relevance of including a priori knowledge of the problem.
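An SVR with a Mercer kernel derived from an estimated autocorrelation function can be realized with a precomputed Gram matrix. A minimal sketch, where a smooth stationary function stands in for the empirically estimated ACF and all data are synthetic:

```python
import numpy as np
from sklearn.svm import SVR

pts = np.random.rand(80, 2)                    # (space, time) sample coordinates
y = np.sin(4 * pts[:, 0]) + 0.1 * np.random.randn(80)

def acf_kernel(P, Q, length=0.3):
    """Stationary kernel k(d); a Gaussian stands in for the estimated ACF here."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return np.exp(-(d / length) ** 2)

K = acf_kernel(pts, pts)                       # precomputed Gram matrix
model = SVR(kernel="precomputed", C=10.0, epsilon=0.05).fit(K, y)
query = np.random.rand(5, 2)
print(model.predict(acf_kernel(query, pts)))   # kernel between queries and training set
```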
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods
Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.
2017-01-01
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given either by the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points to the relevance of including a priori knowledge of the problem. PMID:29035333
Lossy Wavefield Compression for Full-Waveform Inversion
NASA Astrophysics Data System (ADS)
Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.
2015-12-01
We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to check-pointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean prove the high potential of this approach with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate in both, finite-differences and finite-element wave propagation codes.
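The two ingredients described, coarse temporal re-interpolation plus re-quantized residuals, can be illustrated in a few lines. A minimal sketch on a single synthetic trace, with arbitrary subsampling and precision choices rather than the adaptive ones used in the work above:

```python
import numpy as np

t = np.linspace(0, 1, 2000)
wavefield = np.sin(40 * t) * np.exp(-3 * t)        # stand-in forward wavefield trace

coarse = wavefield[::10]                           # store only every 10th time step
recon = np.interp(t, t[::10], coarse)              # re-interpolate on demand
residual = (wavefield - recon).astype(np.float16)  # re-quantized correction term

recovered = recon + residual.astype(np.float64)
print(np.abs(recovered - wavefield).max())         # approximation error after recovery
```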
Miyatake, Aya; Nishio, Teiji; Ogino, Takashi
2011-10-01
The purpose of this study is to develop a new calculation algorithm that satisfies the requirements for both accuracy and calculation time in simulating imaging of the proton-irradiated volume in a patient body in clinical proton therapy. The activity pencil beam algorithm (APB algorithm), a new technique that applies the pencil beam algorithm generally used for proton dose calculations to the calculation of activity distributions, was developed as a calculation algorithm for the activity distributions formed by positron emitter nuclei generated in target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel, constructed from measured activity distributions in the depth direction and calculations in the lateral direction. (12)C, (16)O, and (40)Ca nuclei were determined to be the major target nuclei constituting a human body that are relevant to the calculation of activity distributions. In this study, "virtual positron emitter nuclei" were defined as the integral yield of the various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with the irradiated proton beam. Compounds rich in the target nuclei, namely polyethylene, water (including some gelatin) and calcium oxide, were irradiated with a proton beam, and the depth activity distributions of virtual positron emitter nuclei generated in each compound by target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted on a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, on energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and the measurement time was about 5 h, until the measured activity reached the background level. The activity pencil beam data were constructed using the activity pencil beam kernel, which was composed of the measured depth data and lateral data including multiple Coulomb scattering approximated by a Gaussian function, and were used for calculating activity distributions. Measured depth activity distributions for every target nucleus and proton beam energy were obtained using BOLPs-RGp. The form of the depth activity distribution was verified, and the data were constructed taking into account the time-dependent change of that form; the time dependence of the activity distribution form could be represented by two half-lives. The Gaussian form of the lateral distribution of the activity pencil beam kernel was determined by the effect of multiple Coulomb scattering. Thus, activity pencil beam data involving time dependence could be obtained in this study. The simulation of imaging of the proton-irradiated volume in a patient body using target nuclear fragment reactions was feasible with the developed APB algorithm taking time dependence into account. With the use of the APB algorithm, it is suggested that a simulation system for activity distributions with levels of both accuracy and calculation time appropriate for clinical use can be constructed.
Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas
NASA Astrophysics Data System (ADS)
Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo
2015-11-01
FAR-TECH, Inc. is developing a suite of full wave RF codes in hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test particle orbits. For Tokamak applications a 2-D version of the code is being developed. Progress of this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) It utilizes the localized nature of plasma dielectric response to the RF field and calculates this response numerically without approximations. 2) It uses an adaptive grid to better resolve resonances in plasma and antenna structures. 3) It uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.
Gould, Tim; Bučko, Tomáš
2016-08-09
Using time-dependent density functional theory (TDDFT) with exchange kernels, we calculate and test imaginary frequency-dependent dipole polarizabilities for all atoms and many ions in rows 1-6 of the periodic table. These are then integrated over frequency to produce C6 coefficients. Results are presented under different models: straight TDDFT calculations using two different kernels; "benchmark" TDDFT calculations corrected by more accurate quantum chemical and experimental data; and "benchmark" TDDFT with frozen orbital anions. Parametrizations are presented for 411+ atoms and ions, allowing results to be easily used by other researchers. A curious relationship, C6,XY ∝ [αX(0)αY(0)]^0.73, is found between C6 coefficients and static polarizabilities α(0). The relationship C6,XY = 2C6,XC6,Y/[(αX/αY)C6,Y + (αY/αX)C6,X] is tested and found to work well (<5% errors) in ∼80% of the cases, but can break down badly (>30% errors) in a small fraction of cases.
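Both quoted relationships are directly computable. A minimal sketch writing them out as functions, with placeholder inputs; the constant k in the scaling law is a fitted quantity not given here:

```python
def c6_scaling(alpha_x0, alpha_y0, k):
    """Empirical scaling C6,XY ~ k * [alphaX(0)*alphaY(0)]^0.73; k is fitted."""
    return k * (alpha_x0 * alpha_y0) ** 0.73

def c6_combine(c6_x, c6_y, alpha_x, alpha_y):
    """Combining rule tested in the paper:
    C6,XY = 2 C6,X C6,Y / [(aX/aY) C6,Y + (aY/aX) C6,X]."""
    return 2 * c6_x * c6_y / ((alpha_x / alpha_y) * c6_y + (alpha_y / alpha_x) * c6_x)

# Placeholder values, not tabulated coefficients from the paper:
print(c6_combine(c6_x=10.0, c6_y=200.0, alpha_x=5.0, alpha_y=30.0))
```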
Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks.
Oh, S June; Joung, Je-Gun; Chang, Jeong-Ho; Zhang, Byoung-Tak
2006-06-06
Inferring the tree of life requires knowledge of the common characteristics of species descended from a common ancestor, to serve as measuring criteria, and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. To compare the structures of metabolic networks in organisms, we adopted the exponential graph kernel, which is a kernel-based approach with a labeled graph that includes a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used the unweighted pair-group method with arithmetic mean (UPGMA), a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. By combining the kernel machines with metabolic information, the method infers the context of biosphere development that covers physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing the metabolic pathway structures using meta-level information rather than sequence information. This method may yield further information about biological evolution, such as the history of horizontal transfer of each gene, by studying the detailed structure of the phylogenetic tree constructed by the kernel-based method.
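As a rough sketch of the pipeline described, an exponential graph kernel between adjacency matrices followed by UPGMA clustering, using tiny random graphs in place of metabolic networks; the label-matrix part of the published kernel is omitted here:

```python
import numpy as np
from scipy.linalg import expm
from scipy.cluster.hierarchy import linkage

def exp_graph_kernel(A, B, beta=0.5):
    """Frobenius inner product of matrix exponentials as a simple exponential
    graph kernel between two adjacency matrices."""
    return float(np.sum(expm(beta * A) * expm(beta * B)))

graphs = [np.random.randint(0, 2, (5, 5)) for _ in range(4)]
graphs = [np.triu(g, 1) + np.triu(g, 1).T for g in graphs]   # symmetrize, zero diagonal

K = np.array([[exp_graph_kernel(a, b) for b in graphs] for a in graphs])
# Kernel-induced distance, then UPGMA (= 'average' linkage) on the condensed form.
D = np.sqrt(np.maximum(0, np.add.outer(np.diag(K), np.diag(K)) - 2 * K))
Z = linkage(D[np.triu_indices(4, 1)], method="average")
print(Z)
```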
Mutual information estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view of sparse datasets. The irregular sampling of many of these time series, however, makes it necessary either to perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series, and contrast our results with the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, such as k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
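The Gaussian kernel-based correlation mentioned above can be sketched as a weighted product-moment estimate over all sample pairs, with weights concentrated where a pair's time difference matches the requested lag. A simplified reading, not the authors' reference implementation:

```python
import numpy as np

def gkcf(tx, x, ty, y, lag, h):
    """Correlate all pairs (x_i, y_j), Gaussian-weighted by how close
    (t_y - t_x) is to the requested lag; h is the kernel width."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None] - lag
    w = np.exp(-0.5 * (dt / h) ** 2)
    return float(np.sum(w * np.outer(x, y)) / np.sum(w))

tx = np.sort(np.random.gamma(2.0, 1.0, 300).cumsum())   # Gamma inter-sample intervals
ty = np.sort(np.random.gamma(2.0, 1.0, 300).cumsum())
x = np.sin(0.1 * tx)
y = np.sin(0.1 * (ty - 5.0))                             # y lags x by 5 time units
print(gkcf(tx, x, ty, y, lag=5.0, h=1.0))
```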
Finite-frequency sensitivity kernels for head waves
NASA Astrophysics Data System (ADS)
Zhang, Zhigang; Shen, Yang; Zhao, Li
2007-11-01
Head waves are extremely important in determining the structure of the predominantly layered Earth. While several recent studies have shown the diffractive nature and the 3-D Fréchet kernels of finite-frequency turning waves, analogues of head waves in a continuous velocity structure, the finite-frequency effects and sensitivity kernels of head waves are yet to be carefully examined. We present the results of a numerical study focusing on the finite-frequency effects of head waves. Our model has a low-velocity layer over a high-velocity half-space and a cylindrical-shaped velocity perturbation placed beneath the interface at different locations. A 3-D finite-difference method is used to calculate synthetic waveforms. Traveltime and amplitude anomalies are measured by the cross-correlation of synthetic seismograms from models with and without the velocity perturbation and are compared to the 3-D sensitivity kernels constructed from full waveform simulations. The results show that the head wave arrival-time and amplitude are influenced by the velocity structure surrounding the ray path in a pattern that is consistent with the Fresnel zones. Unlike the `banana-doughnut' traveltime sensitivity kernels of turning waves, the traveltime sensitivity of the head wave along the ray path below the interface is weak, but non-zero. Below the ray path, the traveltime sensitivity reaches the maximum (absolute value) at a depth that depends on the wavelength and propagation distance. The sensitivity kernels vary with the vertical velocity gradient in the lower layer, but the variation is relatively small at short propagation distances when the vertical velocity gradient is within the range of the commonly accepted values. Finally, the depression or shoaling of the interface results in increased or decreased sensitivities, respectively, beneath the interface topography.
NASA Astrophysics Data System (ADS)
Challamel, Noël
2018-04-01
The static and dynamic behaviour of a nonlocal bar of finite length is studied in this paper. The nonlocal integral models considered in this paper are strain-based and relative displacement-based nonlocal models; the latter one is also labelled as a peridynamic model. For infinite media, and for sufficiently smooth displacement fields, both integral nonlocal models can be equivalent, assuming some kernel correspondence rules. For infinite media (or finite media with extended reflection rules), it is also shown that Eringen's differential model can be reformulated into a consistent strain-based integral nonlocal model with exponential kernel, or into a relative displacement-based integral nonlocal model with a modified exponential kernel. A finite bar in uniform tension is considered as a paradigmatic static case. The strain-based nonlocal behaviour of this bar in tension is analyzed for different kernels available in the literature. It is shown that the kernel has to fulfil some normalization and end compatibility conditions in order to preserve the uniform strain field associated with this homogeneous stress state. Such a kernel can be built by combining a local and a nonlocal strain measure with compatible boundary conditions, or by extending the domain outside its finite size while preserving some kinematic compatibility conditions. The same results are shown for the nonlocal peridynamic bar where a homogeneous strain field is also analytically obtained in the elastic bar for consistent compatible kinematic boundary conditions at the vicinity of the end conditions. The results are extended to the vibration of a fixed-fixed finite bar where the natural frequencies are calculated for both the strain-based and the peridynamic models.
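For concreteness, one commonly used normalized strain-based kernel in this literature is the exponential one; a sketch of its form and the normalization condition discussed above, noting that the specific kernels analyzed in the paper may differ (here ℓ is the length scale):

```latex
\alpha\left(|x - x'|\right) = \frac{1}{2\ell}\, e^{-|x - x'|/\ell},
\qquad
\int_{-\infty}^{\infty} \alpha\left(|x - x'|\right)\, \mathrm{d}x' = 1 .
```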
Li, Kan; Príncipe, José C.
2018-01-01
This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS) using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural network (SNN), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regime. PMID:29666568
Li, Kan; Príncipe, José C
2018-01-01
This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS) using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural network (SNN), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regime.
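The LIF front-end described in both records is easy to sketch for a single channel. A minimal version with illustrative time constants and threshold; the papers' multi-channel filterbank front-end is not reproduced:

```python
import numpy as np

def lif_spikes(signal, dt=1e-4, tau=5e-3, v_th=1.0):
    """Leaky integrate-and-fire: integrate input with leak, spike and reset at
    threshold, returning spike times in seconds."""
    v, spikes = 0.0, []
    for i, s in enumerate(signal):
        v += dt * (-v / tau + s)       # leaky integration of the input drive
        if v >= v_th:                  # threshold crossing -> spike, then reset
            spikes.append(i * dt)
            v = 0.0
    return spikes

t = np.arange(0, 0.1, 1e-4)
audio = 400.0 * (np.sin(2 * np.pi * 300 * t) ** 2)   # stand-in band-energy channel
print(len(lif_spikes(audio)), "spikes in 100 ms")
```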
Computational investigation of intense short-wavelength laser interaction with rare gas clusters
NASA Astrophysics Data System (ADS)
Bigaouette, Nicolas
Current Very High Temperature Reactor designs incorporate TRi-structural ISOtropic (TRISO) particle fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel by dropping a cold precursor solution into a column of hot trichloroethylene (TCE). The temperature difference drives the liquid precursor solution to precipitate the metal solution into gel spheres before reaching the bottom of a production column. Over time, gelation byproducts inhibit complete gelation and the TCE must be purified or discarded. The resulting mixed-waste stream is expensive to dispose of or recycle, and changing the forming fluid to a non-hazardous alternative could greatly improve the economics of kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacements. The physical properties of the alternatives were measured as a function of temperature between 25 °C and 80 °C. Calculated terminal velocities and heat transfer rates provided an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane were selected for further testing, and surrogate yttria-stabilized zirconia (YSZ) kernels were produced using these selected fluids. The kernels were characterized for density, geometry, composition, and crystallinity and compared to a control group of kernels produced in silicone oil. Production in 1-bromotetradecane showed positive results, producing dense (93.8 %TD) and spherical (1.03 aspect ratio) kernels, but proper gelation did not occur in the other alternative forming fluids. With many of the YSZ kernels not properly gelling within the length of the column, this project further investigated the heat transfer properties of the forming fluids and precursor solution. A sensitivity study revealed that the heat transfer properties of the precursor solution have the strongest impact on gelation time. A COMSOL heat transfer model estimated an effective thermal diffusivity range for the YSZ precursor solution of 1.13×10^-8 m^2/s to 3.35×10^-8 m^2/s, which is an order of magnitude smaller than the value used in previous studies. 1-bromotetradecane is recommended for further investigation with the production of uranium-based kernels.
Polarizable atomistic calculation of site energy disorder in amorphous Alq3.
Nagata, Yuki
2010-02-01
A polarizable molecular dynamics simulation and a calculation scheme for site energy disorder in amorphous tris(8-hydroxyquinolinato)aluminum (Alq3) are presented, by means of the charge response kernel (CRK) method. The CRK fit to the electrostatic potential and the tight-binding approximation are introduced, which enables systematic modeling of the polarizable electrostatic interaction for a large molecule from ab initio calculations. The site energy disorder for electron and hole transfer is calculated in amorphous Alq3, and the effect of the polarization on the site energy disorder is discussed.
On- and off-axis spectral emission features from laser-produced gas breakdown plasmas
NASA Astrophysics Data System (ADS)
Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; Brumfield, B. E.; Phillips, M. C.; Miloshevsky, G.
2017-06-01
Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during their early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with a pulse duration of 6 ns are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density, and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space and time resolved spectroscopy is used for evaluating the emission features and for inferring plasma physical conditions at on- and off-axis positions. The structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using the computational fluid dynamics code. The emission from the kernel showed that spectral features from ions, atoms, and molecules are separated in time with early time temperatures and densities in excess of 35 000 K and 4 × 10^18/cm^3 with an existence of thermal equilibrium. However, the emission from the off-kernel positions from the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-local thermodynamic equilibrium (non-LTE) conditions. Our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.
Alvarez Prado, Santiago; Sadras, Víctor O; Borrás, Lucas
2014-08-01
Maize kernel weight (KW) is associated with the duration of the grain-filling period (GFD) and the rate of kernel biomass accumulation (KGR). It is also related to the dynamics of water and hence is physiologically linked to the maximum kernel water content (MWC), kernel desiccation rate (KDR), and moisture concentration at physiological maturity (MCPM). This work proposed that principles of phenotypic plasticity can help to consolidate the understanding of the environmental modulation and genetic control of these traits. For that purpose, a maize population of 245 recombinant inbred lines (RILs) was grown under different environmental conditions. Trait plasticity was calculated as the ratio of the variance of each RIL to the overall phenotypic variance of the population of RILs. This work found a hierarchy of plasticities: KDR ≈ GFD > MCPM > KGR > KW > MWC. There was no phenotypic or genetic correlation between traits per se and trait plasticities. MWC, the trait with the lowest plasticity, was the exception, because common quantitative trait loci were found for the trait and its plasticity. Independent genetic control of a trait per se and of its plasticity is a condition for the independent evolution of traits and their plasticities. This potentially allows breeders to select for high or low plasticity in combination with high or low values of economically relevant traits.
The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware
NASA Astrophysics Data System (ADS)
Kathiara, Jainik
There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores involve floating point operations. Because of the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, making the implementation of floating point hardware a feasible option. In this research we have implemented a high-performance, autonomous floating point vector coprocessor (FPVC) that works independently within an embedded processor system. We present a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC yields greater overall performance for most applications, along with improved peak performance, compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library of computational kernels, each of which adapts to the FPVC's configuration and provides maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.
Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigg, D.W.; Wheeler, F.J.
1981-01-01
The Poloidal Diverter Experiment (PDX) facility at Princeton University is the first operating tokamak to require substantial radiation shielding. A calculational model has been developed to estimate the radiation dose in the PDX control room and at the site boundary due to the skyshine effect. An efficient one-dimensional method is used to compute the neutron and capture gamma leakage currents at the top surface of the PDX roof shield. This method employs an S_n calculation in slab geometry and, for the PDX, is superior to spherical models found in the literature. If certain conditions are met, the slab model provides the exact probability of leakage out the top surface of the roof for fusion source neutrons and for capture gamma rays produced in the PDX floor and roof shield. The model also provides the correct neutron and capture gamma leakage current spectra and angular distributions, averaged over the top roof shield surface. For the PDX, this method is nearly as accurate as multidimensional techniques for computing the roof leakage and is much less costly. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab S_n calculation. The capture gamma dose is computed using a simple point-kernel single-scatter method.
NASA Technical Reports Server (NTRS)
Levine, H.
1982-01-01
The calculation of power output from a (finite) linear array of equidistant point sources is investigated, with allowance for a relative phase shift and particular focus on the circumstances of small/large individual source separation. A key role is played by the estimates found for a two-parameter definite integral that involves the Fejér kernel functions F_N, where N denotes a (positive) integer; these results also permit a quantitative accounting of the energy partition between the principal and secondary lobes of the array pattern. Continuously distributed sources along a finite line segment or an open-ended circular cylindrical shell are considered, and estimates for the relatively lower output in the latter configuration are made explicit when the shell radius is small compared to the wavelength. A systematic reduction of diverse integrals which characterize the energy output from specific line and strip sources is investigated.
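Assuming the kernel referred to is the classical Fejér kernel, its standard form is

    F_N(\theta) = \frac{1}{N}\left(\frac{\sin(N\theta/2)}{\sin(\theta/2)}\right)^2
                = \sum_{k=-(N-1)}^{N-1}\left(1 - \frac{|k|}{N}\right) e^{ik\theta},

which is non-negative and concentrates at θ = 0 as N grows, consistent with its role here in separating the contributions of the principal and secondary lobes of the array pattern.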
Two-species-coagulation approach to consensus by group level interactions
NASA Astrophysics Data System (ADS)
Escudero, Carlos; Macià, Fabricio; Velázquez, Juan J. L.
2010-07-01
We explore the self-organization dynamics of a set of entities by considering the interactions that affect the different subgroups conforming the whole. To this end, we employ the widespread example of coagulation kinetics, and characterize which interaction types lead to consensus formation and which do not, as well as the corresponding different macroscopic patterns. The crucial technical point is extending the usual one species coagulation dynamics to the two species one. This is achieved by means of introducing explicitly solvable kernels which have a clear physical meaning. The corresponding solutions are calculated in the long time limit, in which consensus may or may not be reached. The lack of consensus is characterized by means of scaling limits of the solutions. The possible applications of our results to some topics in which consensus reaching is fundamental, such as collective animal motion and opinion spreading dynamics, are also outlined.
A fast simulation method for radiation maps using interpolation in a virtual environment.
Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun
2018-05-10
In nuclear decommissioning, virtual simulation technology is a useful tool for achieving an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost-saving and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. The method utilises a radiation map in a virtual simulation as the basis for the assessment of exposure to a virtual human. In this paper, we propose a fast simulation method using a known radiation source. The method has a unique advantage over point kernel and Monte Carlo methods because it generates the radiation map using interpolation in a virtual environment. The simulation of the radiation map, including the calculation and the visualisation, was realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case, and the results obtained are discussed in this paper.
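A minimal sketch of the interpolation idea follows: dose rates are evaluated exactly at a coarse set of grid points (here with a bare inverse-square point-source model that ignores scatter and attenuation) and then interpolated onto a fine map for the virtual scene. The source model, positions, and grid sizes are illustrative assumptions, not details from the paper:

    # Hedged sketch: coarse dose-rate evaluation plus grid interpolation.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    src = np.array([5.0, 5.0])   # known source position, m (assumed)
    S = 1.0e3                    # source strength scaling, arbitrary units

    def dose_rate(points):
        # Bare inverse-square law; real calculations add scatter/attenuation.
        r2 = np.sum((points - src) ** 2, axis=-1)
        return S / np.maximum(r2, 0.25)   # clamp very near the source

    coarse = np.linspace(0.0, 10.0, 11)
    gx, gy = np.meshgrid(coarse, coarse, indexing="ij")
    coarse_map = dose_rate(np.stack([gx, gy], axis=-1))

    interp = RegularGridInterpolator((coarse, coarse), coarse_map)
    fine = np.linspace(0.0, 10.0, 101)
    fx, fy = np.meshgrid(fine, fine, indexing="ij")
    fine_map = interp(np.stack([fx, fy], axis=-1))  # fast map for the scene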
Optimized formulas for the gravitational field of a tesseroid
NASA Astrophysics Data System (ADS)
Grombein, Thomas; Seitz, Kurt; Heck, Bernhard
2013-07-01
Various tasks in geodesy, geophysics, and related geosciences require precise information on the impact of mass distributions on gravity field-related quantities, such as the gravitational potential and its partial derivatives. Using forward modeling based on Newton's integral, mass distributions are generally decomposed into regular elementary bodies. In classical approaches, prisms or point mass approximations are mostly utilized. Considering the effect of the sphericity of the Earth, alternative mass modeling methods based on tesseroid bodies (spherical prisms) should be taken into account, particularly in regional and global applications. Expressions for the gravitational field of a point mass are relatively simple when formulated in Cartesian coordinates. In the case of integrating over a tesseroid volume bounded by geocentric spherical coordinates, it will be shown that it is also beneficial to represent the integral kernel in terms of Cartesian coordinates. This considerably simplifies the determination of the tesseroid's potential derivatives in comparison with previously published methodologies that make use of integral kernels expressed in spherical coordinates. Based on this idea, optimized formulas for the gravitational potential of a homogeneous tesseroid and its derivatives up to second-order are elaborated in this paper. These new formulas do not suffer from the polar singularity of the spherical coordinate system and can, therefore, be evaluated for any position on the globe. Since integrals over tesseroid volumes cannot be solved analytically, the numerical evaluation is achieved by means of expanding the integral kernel in a Taylor series with fourth-order error in the spatial coordinates of the integration point. As the structure of the Cartesian integral kernel is substantially simplified, Taylor coefficients can be represented in a compact and computationally attractive form. Thus, the use of the optimized tesseroid formulas particularly benefits from a significant decrease in computation time by about 45 % compared to previously used algorithms. In order to show the computational efficiency and to validate the mathematical derivations, the new tesseroid formulas are applied to two realistic numerical experiments and are compared to previously published tesseroid methods and the conventional prism approach.
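For reference, the forward modeling mentioned above starts from Newton's integral for the potential of a homogeneous tesseroid bounded by geocentric spherical coordinates,

    V(P) = G\rho \int_{r_1}^{r_2} \int_{\varphi_1}^{\varphi_2} \int_{\lambda_1}^{\lambda_2}
           \frac{r'^2 \cos\varphi'}{\ell}\, d\lambda'\, d\varphi'\, dr',

where \ell is the Euclidean distance between the computation point P and the running integration point. The paper's central idea is to express the kernel 1/\ell and its derivatives in Cartesian coordinates before the Taylor expansion, which simplifies the Taylor coefficients and avoids the polar singularity of the spherical parametrization.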
NASA Astrophysics Data System (ADS)
Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.
2016-03-01
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
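The splitting idea can be illustrated with a minimal sketch: 1/r is written as a short-range part that vanishes beyond a cutoff plus a smooth long-range remainder suitable for grid interpolation. The quadratic softening below is one common choice, not necessarily the splitting or the B-spline refinement of the paper:

    # Hedged sketch of kernel splitting in multilevel summation.
    import numpy as np

    def gamma(s):
        """Smooth even-polynomial approximation of 1/s for s <= 1."""
        return np.where(s < 1.0,
                        1.875 - 1.25 * s**2 + 0.375 * s**4,
                        1.0 / np.maximum(s, 1e-12))

    def split_kernel(r, a):
        long_range = gamma(r / a) / a        # smooth everywhere, grid-friendly
        short_range = 1.0 / r - long_range   # exactly zero for r >= a
        return short_range, long_range

    r = np.linspace(0.1, 3.0, 6)
    sr, lr = split_kernel(r, a=1.0)
    assert np.allclose(sr + lr, 1.0 / r)     # the split is exact

The long-range part varies slowly, so it can be interpolated from coarse grids; repeating the split at increasing cutoffs yields the hierarchy of levels, with total cost linear in the number of atoms.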
NASA Astrophysics Data System (ADS)
Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.
2006-02-01
Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.
NASA Astrophysics Data System (ADS)
Gao, Ling; Ren, Shouxin
2005-10-01
Simultaneous determination of Ni(II), Cd(II), Cu(II) and Zn(II) was studied by two methods, kernel partial least squares (KPLS) and wavelet packet transform partial least squares (WPTPLS), with xylenol orange and cetyltrimethylammonium bromide as reagents in a pH 9.22 borax-hydrochloric acid buffer medium. Two programs, PKPLS and PWPTPLS, were designed to perform the calculations. Data reduction was performed using kernel matrices and the wavelet packet transform, respectively. In the KPLS method, the size of the kernel matrix depends only on the number of samples; the method is therefore suitable for data matrices with many wavelengths and few samples. Wavelet packet representations of signals provide a local time-frequency description, so the quality of noise removal can be improved in the wavelet packet domain. In the WPTPLS method, the wavelet function and decomposition level were selected by optimization as Daubechies 12 and 5, respectively. Experimental results showed both methods to be successful even where there was severe overlap of spectra.
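The dimensionality point can be illustrated with a small sketch: with n samples and p wavelengths (p >> n), the linear kernel (Gram) matrix is only n × n, so the PLS steps can operate in sample space. The numbers below are synthetic, for shape only:

    # Hedged sketch: the Gram matrix size depends on samples, not wavelengths.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 25, 1200   # few samples, many wavelengths
    X = rng.standard_normal((n_samples, n_wavelengths))
    X -= X.mean(axis=0)                   # mean-center the spectra

    K = X @ X.T                           # linear kernel matrix
    print(K.shape)                        # (25, 25), independent of 1200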
OPC modeling by genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.
2005-05-01
Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. Model-based OPC requires a lithographic model that predicts critical dimensions after lithographic processing. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist model) and discrete (number of kernels) sets, traditional numerical optimization methods may have difficulty with the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins's equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. In this way, good regression results were obtained with different sets of optical proximity effect data.
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce the required off-chip bandwidth, a data-reuse cache is constructed: multi-block SPRAM caches image stripes, and an on-chip ping-pong scheme takes full advantage of data reuse in the convolution calculation, yielding a new ASIC data-scheduling scheme and overall architecture. Experimental results show that the architecture achieves real-time convolution with kernels up to 40 × 32, improves the utilization of on-chip memory resources and memory bandwidth, and satisfies the conditions for maximizing data throughput while reducing the need for off-chip memory bandwidth.
Robinson, Andrew P; Tipping, Jill; Cullen, David M; Hamilton, David; Brown, Richard; Flynn, Alex; Oldfield, Christopher; Page, Emma; Price, Emlyn; Smith, Andrew; Snee, Richard
2016-12-01
Patient-specific absorbed dose calculations for molecular radiotherapy require accurate activity quantification. This is commonly derived from Single-Photon Emission Computed Tomography (SPECT) imaging using a calibration factor relating detected counts to known activity in a phantom insert. A series of phantom inserts, based on the mathematical models underlying many clinical dosimetry calculations, have been produced using 3D printing techniques. SPECT/CT data for the phantom inserts were used to calculate new organ-specific calibration factors for 99mTc and 177Lu. The measured calibration factors are compared to predicted values from calculations using a Gaussian kernel. Measured SPECT calibration factors for 3D printed organs display a clear dependence on organ shape for 99mTc and 177Lu. The observed variation in calibration factor is reproduced by the Gaussian kernel-based calculation over a two-order-of-magnitude change in insert volume for 99mTc and 177Lu. These new organ-specific calibration factors show a 24, 11 and 8% reduction in absorbed dose for the liver, spleen and kidneys, respectively. Non-spherical calibration factors from 3D printed phantom inserts can significantly improve the accuracy of whole-organ activity quantification for molecular radiotherapy, providing a crucial step towards individualised activity quantification and patient-specific dosimetry. 3D printed inserts are found to provide a cost-effective and efficient way for clinical centres to access more realistic phantom data.
NASA Astrophysics Data System (ADS)
Isegawa, Miho; Kato, Shigeki
2007-12-01
Low-frequency infrared (IR) and depolarized Raman scattering (DRS) spectra of acetonitrile, methylene chloride, and acetone liquids are simulated via molecular dynamics calculations with the charge response kernel (CRK) model obtained at the second-order Møller-Plesset perturbation (MP2) level. For this purpose, the analytical second derivative technique for the MP2 energy is employed to evaluate the CRK matrices. The calculated IR spectra agree reasonably with the experiments. In particular, the agreement is excellent for acetone because the present CRK model reproduces the experimental gas-phase polarizability well. The importance of interaction-induced dipole moments in characterizing the spectral shapes is stressed. The DRS spectrum of acetone is mainly discussed because the experimental spectrum is available only for this molecule. The calculated spectrum is close to the experiment. A comparison of the present results with those of the multiple random telegraph model is also made. By decomposing the polarizability anisotropy time correlation function into the contributions from the permanent polarizability, the induced polarizability, and their cross term, a discrepancy from the previous calculations is observed in the sign of the permanent-induced cross-term contribution. The origin of this discrepancy is discussed by analyzing the correlation functions for acetonitrile.
Building machine learning force fields for nanoclusters
NASA Astrophysics Data System (ADS)
Zeni, Claudio; Rossi, Kevin; Glielmo, Aldo; Fekete, Ádám; Gaston, Nicola; Baletto, Francesca; De Vita, Alessandro
2018-06-01
We assess Gaussian process (GP) regression as a technique to model interatomic forces in metal nanoclusters by analyzing the performance of 2-body, 3-body, and many-body kernel functions on a set of 19-atom Ni cluster structures. We find that 2-body GP kernels fail to provide faithful force estimates, despite succeeding in bulk Ni systems. However, both 3- and many-body kernels predict forces within a ~0.1 eV/Å average error even for small training datasets and achieve high accuracy even on out-of-sample, high-temperature structures. While training and testing on the same structure always provide satisfactory accuracy, cross-testing on dissimilar structures leads to higher prediction errors, posing an extrapolation problem. This can be cured using heterogeneous training on databases that contain more than one structure, which results in a good trade-off between versatility and overall accuracy. Starting from a 3-body kernel trained this way, we build an efficient non-parametric 3-body force field that allows accurate prediction of structural properties at finite temperatures, following a newly developed scheme [A. Glielmo et al., Phys. Rev. B 95, 214302 (2017)]. We use this to assess the thermal stability of Ni19 nanoclusters at a fractional cost of full ab initio calculations.
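The flavor of a 2-body kernel can be sketched as follows: each local environment is reduced to its list of neighbour distances, and a Gaussian similarity is summed over all distance pairs. This mirrors the spirit of the 2-body kernels assessed above, not the exact published form:

    # Hedged sketch of a 2-body-style similarity between two environments.
    import numpy as np

    def two_body_kernel(d1, d2, sigma=0.5):
        """d1, d2: 1-D arrays of neighbour distances for two environments."""
        diff = d1[:, None] - d2[None, :]
        return np.exp(-0.5 * (diff / sigma) ** 2).sum()

    env_a = np.array([2.4, 2.5, 2.5, 3.1])   # illustrative distances, angstrom
    env_b = np.array([2.4, 2.6, 3.0, 3.2])
    print(two_body_kernel(env_a, env_b))

Because such a kernel only sees pair distances, it cannot distinguish configurations that differ in bond angles, which is consistent with the finding above that 2-body kernels fail for clusters while 3- and many-body kernels succeed.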
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise Collin
The Idaho National Laboratory (INL) PARFUME (particle fuel model) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated as a function of the strain behavior of the constituent materials at high fast fluence and burnup. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined, along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burnup. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.
Tcl as a Software Environment for a TCS
NASA Astrophysics Data System (ADS)
Terrett, David L.
2002-12-01
This paper describes how the Tcl scripting language and its C API have been used as the software environment for a telescope pointing kernel, so that new pointing algorithms and software architectures can be developed and tested without needing a real-time operating system or real-time software environment. It has enabled development to continue outside the framework of a specific telescope project while building a system that is sufficiently complete to control real hardware, with minimum effort spent on replacing the services that would normally be provided by a real-time software environment. Tcl is used as a scripting language for configuring the system at startup and then as the command interface for controlling the running system; the Tcl C language API is used to provide a system-independent interface to file and socket I/O and other operating system services. The pointing algorithms themselves are implemented as a set of C++ objects calling C library functions that implement the algorithms described in [2]. Although originally designed as a test and development environment, the system, running as a soft real-time process on Linux, has been used to test the SOAR mount control system and will be used as the pointing kernel of the SOAR telescope control system.
NASA Astrophysics Data System (ADS)
Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.
2016-02-01
In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach, including a novel correction for secondary protons, with pencil-beam-type LET kernels. These LET kernels were then inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm⁻¹) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include relative biological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
Warren, Frederick J; Perston, Benjamin B; Galindez-Najera, Silvia P; Edwards, Cathrina H; Powell, Prudence O; Mandalari, Giusy; Campbell, Grant M; Butterworth, Peter J; Ellis, Peter R
2015-01-01
Infrared microspectroscopy is a tool with potential for studies of the microstructure, chemical composition and functionality of plants at a subcellular level. Here we present the use of high-resolution benchtop infrared microspectroscopy to investigate the microstructure of Triticum aestivum L. (wheat) kernels and Arabidopsis leaves. Images of isolated wheat kernel tissues and whole wheat kernels following hydrothermal processing and simulated gastric and duodenal digestion were generated, as well as images of Arabidopsis leaves at different points during a diurnal cycle. Individual cells and cell walls were resolved, and large structures within cells, such as starch granules and protein bodies, were clearly identified. Contrast was provided by converting the hyperspectral image cubes into false-colour images using either principal component analysis (PCA) overlays or correlation analysis. The unsupervised PCA approach provided a clear view of the sample microstructure, whereas the correlation analysis was used to confirm the identity of different anatomical structures using the spectra from isolated components. It was then demonstrated that gelatinized and native starch within cells could be distinguished, and that the loss of starch during wheat digestion could be observed, as well as the accumulation of starch in leaves during a diurnal period.
Factors affecting cadmium absorbed by pistachio kernel in calcareous soils, southeast of Iran.
Shirani, H; Hosseinifard, S J; Hashemipour, H
2018-03-01
Cadmium (Cd), which has no biological role, is one of the most toxic heavy metals for organisms. The metal enters the environment through industrial processes and fertilizers. The main objective of this study was to determine the relationships between Cd absorbed by pistachio kernels and selected soil physical and chemical characteristics, using stepwise regression and Artificial Neural Network (ANN) modeling, in calcareous soils of the Rafsanjan region, southeast Iran. For these purposes, 220 pistachio orchards were selected, and soil samples were taken from two depths, 0-40 and 40-80 cm. In addition, fruit and leaf samples from branches with and without fruit were taken at each sampling point. The results showed that the factors affecting kernel Cd identified by the regression method (pH and clay percentage) were not interpretable, and given the unsuitable values of the coefficient of determination (R²) and Root Mean Square Error (RMSE), the model did not have sufficient validity. The ANN model, however, was highly accurate and reliable. Based on its results, soil available P and Zn and soil salinity were the most important factors affecting the concentration of Cd in pistachio kernels in the pistachio-growing areas of Rafsanjan.
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
NASA Astrophysics Data System (ADS)
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems within a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. To avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers that include adjacent data points within the spatial and temporal kernel bandwidths. We then quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
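For context, the space-time kernel density estimate commonly takes the product-kernel form

    \hat{f}(x, y, t) = \frac{1}{n h_s^2 h_t} \sum_{i=1}^{n}
        K_s\!\left(\frac{x - x_i}{h_s}, \frac{y - y_i}{h_s}\right)
        K_t\!\left(\frac{t - t_i}{h_t}\right),

with spatial bandwidth h_s and temporal bandwidth h_t. The buffers described above extend each subdomain by exactly these bandwidths, so that no kernel mass is lost at subdomain boundaries when the sum is evaluated in parallel.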
Speeding Up the Bilateral Filter: A Joint Acceleration Way.
Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng
2016-06-01
The computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy or efficiency problems, previous algorithm designers typically adopted only one of them when assembling fast implementations, because combining them is hard. Hence, no joint exploitation of these techniques had been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques, kernel truncation and best N-term approximation as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform a BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and can therefore draw upon their strong points to overcome their deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing run-time efficiency.
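For reference, the brute-force bilateral filter with the common Gaussian choice of kernels (the paper allows arbitrary spatial and range kernels) is

    BF[I]_p = \frac{1}{W_p} \sum_{q \in S}
        G_{\sigma_s}\!\left(\lVert p - q \rVert\right)\,
        G_{\sigma_r}\!\left(\lvert I_p - I_q \rvert\right)\, I_q,
    \qquad
    W_p = \sum_{q \in S}
        G_{\sigma_s}\!\left(\lVert p - q \rVert\right)\,
        G_{\sigma_r}\!\left(\lvert I_p - I_q \rvert\right),

which makes the kernel-size dependence of the naive cost explicit: every output pixel p sums over the whole spatial support S.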
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos
2017-03-21
We report that recent machine learning methods make it possible to model the potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Algan, O; Ahmad, S
Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that account for motion artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation is developed in which patient motion is considered at the treatment planning stage. First, optimal dose distributions are calculated for the stationary target volume, with the dose distributions optimized for intensity-modulated radiation therapy (IMRT). Second, a convolution kernel is produced from the best-fitting curve that matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four dimensions. This algorithm is tested against measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions agree under the gamma index with a passing rate of >95%, considering 3% dose difference and 3 mm distance-to-agreement. If the dose delivery per beam takes place over several respiratory cycles, the spread-out of the dose distributions depends only on the motion amplitude and is not affected by motion frequency and phase. This algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm is developed to optimize dose in 4D. Besides IMRT, which provides optimal dose coverage for a stationary target, it extends dose optimization to 4D by considering target motion. This algorithm provides an alternative to motion management techniques such as beam gating or breath holding and has potential applications in adaptive radiation therapy.
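A one-dimensional sketch of the forward (blurring) step is given below: a static dose profile is convolved with the residence-time distribution of sinusoidal motion to predict the dose seen by a moving target. The profile, grid, and amplitude are illustrative assumptions; the algorithm above additionally inverts this blur during optimization:

    # Hedged sketch: motion blur of a static dose profile via convolution.
    import numpy as np

    x = np.linspace(-30.0, 30.0, 601)                  # position, mm (0.1 mm grid)
    dose_static = ((x > -10) & (x < 10)).astype(float) # idealized flat field

    amp = 5.0                                          # motion amplitude, mm (assumed)
    xm = x[np.abs(x) < amp]
    kernel = 1.0 / (np.pi * np.sqrt(amp**2 - xm**2))   # sinusoid residence-time PDF
    kernel /= kernel.sum()                             # normalize to unit mass

    dose_moving = np.convolve(dose_static, kernel, mode="same")

Consistent with the abstract, the blurred profile depends only on the amplitude of the motion kernel, not on frequency or phase, once delivery spans several respiratory cycles.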
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
Arcisauskaite, Vaida; Melo, Juan I; Hemmingsen, Lars; Sauer, Stephan P A
2011-07-28
We investigate the importance of relativistic effects on NMR shielding constants and chemical shifts of linear HgL2 (L = Cl, Br, I, CH3) compounds using three different relativistic methods: the fully relativistic four-component approach and the two-component approximations, linear response elimination of the small component (LR-ESC) and the zeroth-order regular approximation (ZORA). LR-ESC successfully reproduces the four-component results for the C shielding constant in Hg(CH3)2 within 6 ppm, but fails to reproduce the Hg shielding constants and chemical shifts. The latter is mainly due to an underestimation of the change in the spin-orbit contribution. Even though ZORA underestimates the absolute Hg NMR shielding constants by ~2100 ppm, the differences between Hg chemical shift values obtained using ZORA and the four-component approach without the spin-density contribution to the exchange-correlation (XC) kernel are less than 60 ppm for all compounds using three different functionals, BP86, B3LYP, and PBE0. However, larger deviations (up to 366 ppm) occur for Hg chemical shifts in HgBr2 and HgI2 when ZORA results are compared with four-component calculations with a non-collinear spin-density contribution to the XC kernel. For the ZORA calculations it is necessary to use large basis sets (QZ4P), and the TZ2P basis set may give errors of ~500 ppm for the Hg chemical shifts, despite deceivingly good agreement with experimental data. A Gaussian nucleus model for the Coulomb potential reduces the Hg shielding constants by ~100-500 ppm and the Hg chemical shifts by 1-143 ppm compared to the point nucleus model, depending on the atomic number Z of the coordinating atom and the level of theory. The effect on the shielding constants of the lighter nuclei (C, Cl, Br, I) is, however, negligible.
Méndez, Nelson; Oviedo-Pastrana, Misael; Mattar, Salim; Caicedo-Castro, Isaac; Arrieta, German
2017-01-01
The Zika virus disease (ZVD) has had a huge impact on public health in Colombia, both for the number of people affected and for the cases of Guillain-Barré syndrome (GBS) and microcephaly associated with ZVD. A retrospective descriptive study was carried out in which we analyzed the epidemiological situation of ZVD and its association with microcephaly and GBS over a 21-month period, from October 2015 to June 2017. The variables studied were: (i) ZVD cases, (ii) ZVD cases in pregnant women, (iii) laboratory-confirmed ZVD in pregnant women, (iv) ZVD cases associated with microcephaly, (v) laboratory-confirmed ZVD associated with microcephaly, and (vi) ZVD-associated GBS cases. Average numbers of cases, attack rates (AR) and proportions were also calculated. The studied variables were plotted by epidemiological weeks and months. The distribution of ZVD cases in Colombia was mapped across time using a kernel density estimator and QGIS software; we adopted Kernel Ridge Regression (KRR) with the Gaussian kernel to estimate the number of Guillain-Barré cases given the number of ZVD cases. 108,087 ZVD cases had been reported in Colombia, including 19,963 (18.5%) in pregnant women, 710 (0.66%) associated with microcephaly (AR, 4.87 cases per 10,000 live births) and 453 (0.42%) ZVD-associated GBS cases (AR, 41.9 GBS cases per 10,000 ZVD cases). GBS cases appear to have increased in parallel with ZVD cases, while microcephaly cases appeared 5 months after recognition of the outbreak. The kernel density map shows that throughout the study period, the states most affected by the Zika outbreak in Colombia were mainly the San Andrés and Providencia islands, Casanare, Norte de Santander, Arauca and Huila. The KRR shows that there is no proportional relationship between the numbers of GBS and ZVD cases. During cross-validation, the RMSE achieved for the second-order polynomial kernel, the linear kernel, the sigmoid kernel, and the Gaussian kernel was 9.15, 9.2, 10.7, and 7.2, respectively. This study updates the epidemiological analysis of the ZVD situation in Colombia, describes the geographical distribution of ZVD, and shows the functional relationship between ZVD cases and GBS.
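A minimal sketch of the estimation step follows, using scikit-learn's kernel ridge regression with the Gaussian (RBF) kernel. The weekly counts and hyperparameters are toy values, not the surveillance data:

    # Hedged sketch: KRR with a Gaussian kernel mapping ZVD counts to GBS counts.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    zvd = np.array([[120], [450], [900], [2100], [3500], [5200]])  # cases/week (toy)
    gbs = np.array([1, 3, 4, 8, 11, 12])                           # cases/week (toy)

    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-7)       # assumed settings
    model.fit(zvd, gbs)
    print(model.predict([[1500]]))   # estimated GBS cases for 1500 ZVD cases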
NASA Astrophysics Data System (ADS)
Dougherty, Andrew W.
Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases and the electrical nature of their sensing mechanism makes them particularly attractive in solid state devices. The high-temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts where exhaust temperatures can be high. However, problems do exist with metal oxide sensors. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface. This makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality to the system. The sensors also tend to suffer from long-term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long-term drift from the sensor response. It will also provide an efficient means for determining the orthogonality of the sensors, to decide whether they are useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal kernel support vector machine. The reciprocal kernel is shown to be simpler and to perform better than an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen because it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data. The reciprocal kernel is shown to be effective in modeling the sensor responses in the time, gas and temperature domains, and the dual representation of the support vector regression solution is shown to provide insight into the sensors' sensitivity and potential orthogonality. Finally, the dual weights of the support vector regression solution to the sensor's response are suggested as a fitness function for a genetic algorithm, or some other method for efficiently searching large parameter spaces.
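The least squares support vector regression machinery can be sketched compactly: the dual weights come from a single linear solve, which is why every training point contributes. The kernel used below, k(x, x') = 1/(1 + |x - x'|), is a hypothetical stand-in for the thesis's reciprocal kernel, whose exact form is not reproduced here:

    # Hedged sketch of LS-SVR: one linear solve yields bias b and dual weights.
    import numpy as np

    def kern(a, b):
        # Hypothetical reciprocal-style kernel, illustrative only.
        return 1.0 / (1.0 + np.abs(a[:, None] - b[None, :]))

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # sparse training inputs (toy)
    y = np.array([0.1, 0.8, 0.9, 0.4, 0.1])      # sensor responses (toy)
    gamma = 10.0                                  # regularization (assumed)

    n = len(x)                                    # LS-SVM block system:
    A = np.zeros((n + 1, n + 1))                  # [[0, 1^T], [1, K + I/gamma]]
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = kern(x, x) + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]                    # alpha: one dual weight per point

    f = kern(np.array([1.5]), x) @ alpha + b      # prediction at a new input

The dual weights alpha, one per training point, are what the final paragraph above proposes to reuse as a fitness signal when searching large parameter spaces.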
Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan
2012-05-15
Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. In particular, we argue that the former may not capture all the architectural features that impact performance, whereas the latter may be too expensive for an application with several different kernels. In our approach, the different expressions in the application, the possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited-state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2x over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
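The idea of a parameterized micro-benchmark (time a few implementation variants over a small parameter space, then learn which variant wins where) can be sketched generically. This toy Python sweep only illustrates the search structure, not the paper's GPU tensor-contraction code; the variants and sizes are placeholders.

```python
import itertools
import timeit
import numpy as np

def contract_loop(A, B):
    """Naive variant: explicit loop over the contracted index."""
    out = np.zeros((A.shape[0], B.shape[1]))
    for k in range(A.shape[1]):
        out += np.outer(A[:, k], B[k, :])
    return out

def contract_blas(A, B):
    """Library variant: delegate the contraction to BLAS via matmul."""
    return A @ B

variants = {"loop": contract_loop, "blas": contract_blas}
sizes = [64, 128]   # small parameter space: problem sizes x implementation variants
for size, (name, fn) in itertools.product(sizes, variants.items()):
    A, B = np.random.rand(size, size), np.random.rand(size, size)
    t = timeit.timeit(lambda: fn(A, B), number=5) / 5
    print(f"size={size:4d} variant={name:5s} time={t * 1e3:7.2f} ms")
```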
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes race conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in time. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart, and that the computational efficiency of the implicit method can be improved by a further ten to fifteen times on the GPU.
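The rainbow coloring step is essentially greedy graph coloring of the point cloud's adjacency, so that points of one color share no neighbors and can be updated concurrently. A minimal sketch, with a hypothetical 4-point adjacency standing in for a real meshless stencil:

```python
def rainbow_coloring(neighbors):
    """Greedy coloring: give each point the smallest color unused by its neighbors.

    neighbors: dict mapping point id -> iterable of adjacent point ids.
    Returns dict point id -> color index. Points sharing a color have no data
    dependency, so each color group can be swept in parallel, LU-SGS style.
    """
    colors = {}
    for p in sorted(neighbors):
        taken = {colors[q] for q in neighbors[p] if q in colors}
        c = 0
        while c in taken:
            c += 1
        colors[p] = c
    return colors

# Toy 4-point cloud (illustrative, not a real point distribution).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(rainbow_coloring(adj))   # -> {0: 0, 1: 1, 2: 2, 3: 0}
```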
NASA Astrophysics Data System (ADS)
Dimofte, Andreea; Finlay, Jarod C.; Liang, Xing; Zhu, Timothy C.
2012-10-01
For interstitial photodynamic therapy (PDT), cylindrical diffusing fibers (CDFs) are often used to deliver light. This study examines the feasibility and accuracy of using CDFs to characterize the absorption (μa) and reduced scattering (μs′) coefficients of heterogeneous turbid media. Measurements were performed in tissue-simulating phantoms with μa between 0.1 and 1 cm-1 and μs′ between 3 and 10 cm-1, using CDFs 2 to 4 cm in length. Optical properties were determined by fitting the measured light fluence rate profiles at a fixed distance from the CDF axis using a heterogeneous kernel model in which the cylindrical diffusing fiber is treated as a series of point sources. The resulting optical properties were compared with independent measurements using a point-source method. In a homogeneous medium, we are able to determine the absorption coefficient μa using a value of μs′ determined a priori (uniform fit) or μs′ obtained by fitting (variable fit), with standard (maximum) deviations of 6% (18%) and 18% (44%), respectively. However, the CDF method is found to be insensitive to variations in μs′, thus requiring a complementary method, such as a point-source measurement, for the determination of μs′. The error in determining μa decreases in very heterogeneous turbid media because of the local absorption extremes. The data acquisition time for obtaining the one-dimensional optical property distribution is less than 8 s. This method can dramatically improve the accuracy of light fluence rate calculations for CDFs in prostate PDT in vivo when the same model and geometry are used for forward calculations with the extrapolated tissue optical properties.
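A minimal forward model of the kernel idea (the CDF treated as a series of point sources) can be written with the standard diffusion-approximation point-source kernel for an infinite homogeneous medium; the fiber length, source count, and optical properties below are illustrative, and the real heterogeneous model is more involved.

```python
import numpy as np

def fluence_cdf(z_axis, r_obs, mua, musp, length=3.0, n_src=31, power=1.0):
    """Fluence rate (a.u.) at axial positions z_axis (cm), radial distance r_obs (cm),
    for a cylindrical diffuser modeled as n_src equal point sources on the z axis.
    Uses the diffusion-approximation point-source kernel in an infinite medium."""
    D = 1.0 / (3.0 * (mua + musp))            # diffusion coefficient (cm)
    mueff = np.sqrt(mua / D)                  # effective attenuation (1/cm)
    z_src = np.linspace(0.0, length, n_src)   # source positions along the CDF
    S = power / n_src                         # equal power per point source
    phi = np.zeros_like(z_axis, dtype=float)
    for zs in z_src:
        r = np.hypot(z_axis - zs, r_obs)
        phi += S * np.exp(-mueff * r) / (4.0 * np.pi * D * r)
    return phi

z = np.linspace(-1.0, 4.0, 101)
phi = fluence_cdf(z, r_obs=0.5, mua=0.3, musp=5.0)   # mua, mus' in 1/cm
```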
NASA Astrophysics Data System (ADS)
Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi
2012-04-01
Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake-triggered landslide-susceptibility mapping for a section of the Jianjiang River watershed using Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples comprised the centroids of 500 large landslides, the centroids of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples comprised 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions were linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative, though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model, and the results were further validated using area-under-curve analysis. Group 3, with 5000 randomly selected points on the landslide polygons and 5000 randomly selected points along stable slopes, gave the best results, with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples and 3147 randomly selected points in regions of stable slope as negative training samples (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake-triggered landslide-susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.
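The four kernels evaluated map directly onto scikit-learn's SVC options, so the modeling loop can be sketched compactly; the features and labels below are random placeholders for the real controlling parameters and training samples.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder predictors (e.g. elevation, slope, distance to fault, ...) and labels.
X = rng.normal(size=(1000, 6))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for kern in ("linear", "poly", "rbf", "sigmoid"):   # rbf = radial basis function
    clf = SVC(kernel=kern, probability=True).fit(Xtr, ytr)
    print(f"{kern:8s} test accuracy = {clf.score(Xte, yte):.3f}")
# clf.predict_proba would give per-cell landslide susceptibility probabilities.
```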
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.
2015-02-01
Spectrum-processing software that incorporates a Gaussian smoothing kernel within the statistics of first-order Kalman filtration has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios in Mössbauer spectroscopy. The breadth of the Gaussian was optimized using the Mössbauer spectrum of natural iron foil, and comparisons of the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. Optimization gives a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a Gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks increased by 19.6% at this optimum point, indicating a relatively weak increase in peak broadening relative to the signal enhancement and hence an overall increase in the observable signal. Calculations of the hyperfine parameters showed that no statistically significant deviations were introduced by the application of the filter, confirming the utility of this filter for spectroscopy applications.
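The Kalman-filtration statistics are beyond a snippet, but the cross-channel Gaussian smoothing at the heart of the filter is simple to sketch. Interpreting the reported "breadth" as the Gaussian sigma in channels is an assumption here, and the SNR definition is a crude stand-in for the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_spectrum(counts, breadth_channels=27.0):
    """Cross-channel Gaussian smoothing of a 1-D Mössbauer spectrum."""
    return gaussian_filter1d(counts.astype(float), sigma=breadth_channels)

def snr(spectrum, baseline):
    """Crude SNR: peak deviation from baseline over off-resonance noise std.
    Assumes the first 100 channels contain only background."""
    signal = np.abs(spectrum - baseline).max()
    noise = np.std(spectrum[:100] - baseline)
    return signal / noise

channels = np.arange(1024)
truth = 1e5 - 8e3 * np.exp(-0.5 * ((channels - 512) / 6.0) ** 2)   # toy dip
noisy = np.random.default_rng(0).poisson(truth).astype(float)
print(snr(noisy, 1e5), snr(smooth_spectrum(noisy, 5.0), 1e5))
```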
Manley, Marena; du Toit, Gerida; Geladi, Paul
2011-02-07
The combination of near infrared (NIR) hyperspectral imaging and chemometrics was used to follow the diffusion of conditioning water over time in wheat kernels of different hardnesses. Conditioning was attempted with deionised water (dH2O) and deuterium oxide (D2O). The images were recorded at different conditioning times (0-36 h) from 1000 to 2498 nm with a line-scan imaging system. After multivariate cleaning and spectral pre-processing (either multiplicative scatter correction or standard normal variate, plus Savitzky-Golay smoothing), six principal components (PCs) were calculated. These were studied visually and interactively as score images and score plots. As no clear clusters were present in the score plots, changes in the score plots were investigated by means of classification gradients made within the respective PCs. Classes were selected in the direction of a PC (from positive to negative or negative to positive score values) in almost equal segments. Loading line plots were then used to provide a spectroscopic explanation of the classification gradients. The first PC was shown to explain kernel curvature. PC3 was shown to be related to a moisture-starch contrast and could explain the progress of water uptake; the positive influence of protein was also observed. Soft, hard, and very hard kernels behaved differently in this respect, with the uptake of water observed much earlier in the soft kernels than in the harder ones. The harder kernels also showed a stronger influence of protein in the loading line plots. Difference spectra showed interpretable changes over time for water but not for D2O, whose signal was too weak in the wavelength range used. NIR hyperspectral imaging together with exploratory chemometrics, as detailed in this paper, may have wider applications than conditioning studies alone.
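The unsupervised part of the workflow (unfold the hyperspectral cube, pre-process, compute PC score images) can be sketched as follows; the SNV pre-treatment and the random cube are placeholders for the real NIR data.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_score_images(cube, n_components=6):
    """cube: hyperspectral image (rows, cols, bands). Returns (rows, cols,
    n_components) score images after per-pixel SNV-style standardization."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)  # SNV
    scores = PCA(n_components=n_components).fit_transform(X)
    return scores.reshape(r, c, n_components)

# Illustrative cube; real data would be a (lines, pixels, wavelengths) NIR image.
cube = np.random.rand(64, 64, 200)
scores = pca_score_images(cube)       # scores[..., 2] would be the 'PC3' image
```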
MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valiollahzadeh, S; Clark, J; Mawlawi, O
2014-06-15
Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred, noisy PET image to the desired image is the linear transform y = Hm + n, where m is the desired image, H is a blur kernel, n is Poisson noise, and y is the blurred image. Our approach to recovering m involves the sparse representation of y over a learned dictionary, since the image contains many repeated patterns, edges, textures, and smooth regions. The recovery is based on optimizing a cost function with four major terms: an adaptive dictionary learning term, a sparsity term, a regularization term, and an MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ=5.4 mm; uniform 5×5, σ=2.9 mm) and additive Poisson noise (blurred). Image recovery was performed once with the kernel type included in the model optimization and once with the model blinded to kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating PSNR (peak SNR) and normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (uniform) kernel was 28.73 (25.1) for DL and 25.18 (23.4) for PIDSPLIT+. For blinded deblurring, the PSNRs were 25.32 and 22.86 for DL and PIDSPLIT+, respectively. NADPI between baseline and DL, and between baseline and blurred, for the Gaussian kernel was 2.5 and 10.8, respectively. Conclusion: PET image deblurring using dictionary learning appears to be a good approach to restoring image resolution in the presence of Poisson noise. GE Health Care.
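The forward model y = Poisson(Hm) is easy to reproduce and is useful for testing any recovery algorithm. This sketch only builds the degraded image and the PSNR metric, not the dictionary-learning recovery itself; the rectangle phantom is a stand-in for the Hoffman brain, and the count scale is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_and_poisson(m, sigma_pix=2.0, counts_scale=50.0, seed=0):
    """Forward model: Gaussian blur kernel H plus Poisson noise.
    counts_scale controls the noise level (higher = less relative noise)."""
    rng = np.random.default_rng(seed)
    Hm = gaussian_filter(m, sigma=sigma_pix)
    return rng.poisson(Hm * counts_scale) / counts_scale

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0            # stand-in for the Hoffman phantom
y = blur_and_poisson(phantom)
print(f"blurred-image PSNR = {psnr(phantom, y):.1f} dB")
```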
P- and S-wave Receiver Function Imaging with Scattering Kernels
NASA Astrophysics Data System (ADS)
Hansen, S. M.; Schmandt, B.
2017-12-01
Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numeric simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources resulting in a global scale forward problem. Various approximation strategies have been proposed to reduce the computational burden such as hybrid methods that embed a heterogeneous regional scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels which are typically used for full waveform inversion. The forward problem is approximated using ray theory yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).
Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.
NASA Astrophysics Data System (ADS)
Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.
2016-12-01
Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, convergence towards the surface wave Green's function is obtained under the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources offers more possibilities for interferometry: controlled sources make it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is 30 m in both the X and Y directions, an arrangement known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from this dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode of the surface waves. Potential application to surface wave cancellation is also envisaged.
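The correlational branch of the method reduces to stacking inter-receiver cross-correlations over many source records. A minimal frequency-domain sketch, with synthetic gathers standing in for the vibrator data:

```python
import numpy as np

def xcorr_interferometry(traces_a, traces_b):
    """Stack of cross-correlations between two receivers over many source records.

    traces_a, traces_b: (n_sources, n_samples) arrays recorded at receivers A and B.
    The stacked correlation approximates the inter-receiver Green's function
    (surface-wave part) when the sources sample the medium densely."""
    n = traces_a.shape[1]
    A = np.fft.rfft(traces_a, n=2 * n, axis=1)
    B = np.fft.rfft(traces_b, n=2 * n, axis=1)
    cc = np.fft.irfft(A * np.conj(B), axis=1)
    return np.fft.fftshift(cc.sum(axis=0))     # stack over sources, zero lag centered

rng = np.random.default_rng(0)
sig = rng.normal(size=(200, 512))              # illustrative source gathers
green = xcorr_interferometry(sig, np.roll(sig, 25, axis=1))  # B lags A by 25 samples
```

The convolutional branch follows the same pattern with the conjugation dropped.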
Modeling and analysis of UN TRISO fuel for LWR application using the PARFUME code
NASA Astrophysics Data System (ADS)
Collin, Blaise P.
2014-08-01
The Idaho National Laboratory (INL) PARFUME (PARticle FUel ModEl) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.
Muñoz, Jesús Escrivá; Gambús, Pedro; Jensen, Erik W; Vallverdú, Montserrat
2018-01-01
This work investigates the time-frequency content of impedance cardiography (ICG) signals during propofol-remifentanil anesthesia. ICG has gained much attention in recent years, but ICG signals need further investigation. Time-frequency distributions (TFDs) with 5 different kernels were used to analyze ICG signals before the start of anesthesia and after the loss of consciousness. In total, ICG signals from one hundred and thirty-one consecutive patients undergoing major surgery under general anesthesia were analyzed. Several features were extracted from the calculated TFDs to characterize the time-frequency content of the ICG signals, and differences in those features before and after the loss of consciousness were studied. The Extended Modified Beta Distribution (EMBD) was the kernel for which most features showed statistically significant changes between before and after the loss of consciousness. Among all analyzed features, those based on entropy showed a sensitivity, specificity, and area under the receiver operating characteristic curve above 60%. The anesthetic state of the patient is reflected in linear and non-linear features extracted from the TFDs of the ICG signals. In particular, the EMBD is a suitable kernel for the analysis of ICG signals and offers a wide range of features that change with the patient's anesthetic state in a statistically significant way.
Investigations of Reactive Processes at Temperatures Relevant to the Hypersonic Flight Regime
2014-10-31
A potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations and interpolated using the reproducing kernel Hilbert space (RKHS) method. The work addresses reactive processes between O(3P) and NO(2Π) at the higher temperatures relevant to the hypersonic flight regime of reentering spacecraft.
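RKHS interpolation of an ab initio grid amounts to solving K c = E once and evaluating k(r*, r) c afterwards. In this sketch a generic RBF kernel and a Lennard-Jones-like curve stand in for the true reproducing kernels and NO2 energies, which the excerpt does not give.

```python
import numpy as np

def kernel_interp_fit(X, E, kernel, reg=1e-10):
    """Solve K c = E for the interpolation coefficients c (RKHS-style).
    A small diagonal regularizer guards against an ill-conditioned K."""
    K = kernel(X, X)
    return np.linalg.solve(K + reg * np.eye(len(X)), E)

def kernel_interp_eval(Xq, X, c, kernel):
    return kernel(Xq, X) @ c

rbf = lambda a, b, s=0.5: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * s ** 2))
r = np.linspace(1.0, 4.0, 15)                 # ab initio grid (illustrative)
E = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)  # stand-in potential energies
c = kernel_interp_fit(r, E, rbf)
E_dense = kernel_interp_eval(np.linspace(1.0, 4.0, 200), r, c, rbf)
```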
NASA Astrophysics Data System (ADS)
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2017-07-01
This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, and the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform; however, two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated: instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. We found that, relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve detection completeness by up to 10 percentage points while maintaining the correctness rate.
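The continuous-space voting step can be sketched with SciPy's Gaussian KDE: accumulate parameter votes, estimate their density, and read off the densest point. Scott's rule (SciPy's default) stands in for the paper's data-driven bandwidth selection, and the 2-D vote space with a single dominant cylinder is illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_mode(votes, grid_pts=50):
    """Continuous-space voting: build a KDE over parameter votes and return
    the grid point of highest density. votes: (n_votes, n_params) array."""
    kde = gaussian_kde(votes.T)
    lo, hi = votes.min(axis=0), votes.max(axis=0)
    axes = [np.linspace(l, h, grid_pts) for l, h in zip(lo, hi)]
    mesh = np.meshgrid(*axes, indexing="ij")
    dens = kde(np.vstack([m.ravel() for m in mesh]))
    best = np.unravel_index(np.argmax(dens), mesh[0].shape)
    return np.array([ax[i] for ax, i in zip(axes, best)])

rng = np.random.default_rng(0)
votes = rng.normal(loc=[1.0, 0.2], scale=0.05, size=(500, 2))  # e.g. (radius, tilt)
print(kde_mode(votes))   # approximately [1.0, 0.2]
```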
Control of Early Flame Kernel Growth by Multi-Wavelength Laser Pulses for Enhanced Ignition
Dumitrache, Ciprian; VanOsdol, Rachel; Limbach, Christopher M.; ...
2017-08-31
The present contribution examines the impact of plasma dynamics and plasma-driven fluid dynamics on the flame growth of laser ignited mixtures and shows that a new dual-pulse scheme can be used to control the kernel formation process in ways that extend the lean ignition limit. We do this by performing a comparative study between (conventional) single-pulse laser ignition (λ = 1064 nm) and a novel dual-pulse method based on combining an ultraviolet (UV) pre-ionization pulse (λ = 266 nm) with an overlapped near-infrared (NIR) energy addition pulse (λ = 1064 nm). We employ OH* chemiluminescence to visualize the evolution of the early flame kernel. For single-pulse laser ignition at lean conditions, the flame kernel separates through third lobe detachment, corresponding to high strain rates that extinguish the flame. In this work, we investigate the capabilities of the dual-pulse to control the plasma-driven fluid dynamics by adjusting the axial offset of the two focal points. In particular, we find there exists a beam waist offset whereby the resulting vorticity suppresses formation of the third lobe, consequently reducing flame stretch. With this approach, we demonstrate that the dual-pulse method enables reduced flame speeds (at early times), an extended lean limit, increased combustion efficiency, and decreased laser energy requirements.
Warren, Frederick J; Perston, Benjamin B; Galindez-Najera, Silvia P; Edwards, Cathrina H; Powell, Prudence O; Mandalari, Giusy; Campbell, Grant M; Butterworth, Peter J; Ellis, Peter R
2015-11-01
Infrared microspectroscopy is a tool with potential for studies of the microstructure, chemical composition, and functionality of plants at a subcellular level. Here we present the use of high-resolution benchtop infrared microspectroscopy to investigate the microstructure of Triticum aestivum L. (wheat) kernels and Arabidopsis leaves. Images were generated of isolated wheat kernel tissues and of whole wheat kernels following hydrothermal processing and simulated gastric and duodenal digestion, as well as of Arabidopsis leaves at different points during a diurnal cycle. Individual cells and cell walls were resolved, and large structures within cells, such as starch granules and protein bodies, were clearly identified. Contrast was provided by converting the hyperspectral image cubes into false-colour images using either principal component analysis (PCA) overlays or correlation analysis. The unsupervised PCA approach provided a clear view of the sample microstructure, whereas the correlation analysis was used to confirm the identity of different anatomical structures using the spectra of isolated components. It was then demonstrated that gelatinized and native starch within cells could be distinguished, and that the loss of starch during wheat digestion could be observed, as well as the accumulation of starch in leaves during a diurnal period.
Initial Kernel Timing Using a Simple PIM Performance Model
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David
2005-01-01
This presentation will describe some initial results of paper-and-pencil studies of five application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: linked list traversal, sum of leaf nodes on a tree, bitonic sort, vector sum, and Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure, then explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present hand-drafted timing forecasts based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including which programming styles seem to work best from the point of view of both expressiveness and performance.
NASA Astrophysics Data System (ADS)
Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong
2017-05-01
Infrared small target tracking plays an important role in applications including military reconnaissance, early warning, and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. First, the strength of the SVD-based step is that it exploits the target's global information to obtain a background estimate of the infrared image; a dim target is enhanced by subtracting the continuously updated background estimate from the original image. Second, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion problem: the GCF preserves edges and suppresses noise in the base sample of the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated from a response map obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy compared with several state-of-the-art algorithms.
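The SVD-based background estimation is the easiest piece to sketch: reconstruct the frame from its leading singular triplets and subtract. The rank and the toy frame below are illustrative choices, not the paper's settings.

```python
import numpy as np

def svd_background(img, rank=3):
    """Estimate a slowly varying background as a low-rank reconstruction of the
    image matrix; the residual enhances small (high-rank) targets."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

frame = np.random.rand(256, 320) * 0.1
frame[100, 150] += 1.0                      # dim point target (illustrative)
enhanced = frame - svd_background(frame)    # target stands out in the residual
```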
NASA Astrophysics Data System (ADS)
Hill, C.
2008-12-01
Low-cost graphics cards today use many relatively simple compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more 32-bit floating point cores executing concurrently, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus, and (iii) can be at least partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water equations simulation targeting a cluster of 30 computers, each hosting one graphics card. The implementation takes into account considerations (i), (ii), and (iii) above. We code our algorithm as a series of numerical kernels, each designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined, we can boost the performance of our cluster by nearly an order of magnitude relative to the same algorithm executing only on the cluster's CPUs. Achieving this performance boost requires that many threads are available to each graphics processor within each numerical kernel and that the simulation's working set fits into the graphics card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful; however, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally, we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at lower performance.
McNamara, C; Naddy, B; Rohan, D; Sexton, J
2003-10-01
The Monte Carlo computational system for stochastic modelling of dietary exposure to food chemicals and nutrients is presented. This system was developed through a European Commission-funded research project and is accessible as a Web-based application service. The system allows and supports very significant complexity in the input data sets but provides a simple, general-purpose, linear kernel for model evaluation. Specific features of the system include the ability to enter (arbitrarily) complex mathematical or probabilistic expressions in each and every input data field, automatic bootstrapping on subjects and on subject food intake diaries, and custom kernels to apply brand information, such as market share and loyalty, to the calculation of food and chemical intake.
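A linear intake kernel with bootstrapping on subjects can be sketched in a few lines; the diary matrix, concentration sampler, and iteration count below are all illustrative, not the system's actual data model.

```python
import numpy as np

def simulate_intake(diaries, conc_sampler, n_iter=10000, seed=0):
    """Linear-kernel intake model: intake = sum over foods of
    (amount eaten) x (sampled concentration).

    diaries: (n_subjects, n_foods) consumption amounts;
    conc_sampler(rng, n_foods): draws one concentration per food;
    subjects are bootstrapped each iteration, echoing the system's design."""
    rng = np.random.default_rng(seed)
    n_subj, n_foods = diaries.shape
    out = np.empty(n_iter)
    for i in range(n_iter):
        subj = rng.integers(0, n_subj, size=n_subj)    # bootstrap subjects
        conc = conc_sampler(rng, n_foods)              # e.g. lognormal per food
        out[i] = (diaries[subj] @ conc).mean()         # mean intake this iteration
    return out

diaries = np.random.default_rng(1).gamma(2.0, 50.0, size=(200, 5))  # g/day, toy
intakes = simulate_intake(diaries, lambda r, k: r.lognormal(-1.0, 0.5, k))
print(np.percentile(intakes, [50, 97.5]))
```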
Livermore Compiler Analysis Loop Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornung, R. D.
2013-03-01
LCALS is designed to evaluate compiler optimizations and the performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of the 24 loop kernels from the older Livermore Loops suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, and which loops are run and at what lengths. It generates timing statistics for analysis and for comparing variants of individual loops. It is also easy to add loops to the suite as desired.
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined by the distance between each centroid and the farthest point of occurrence of that species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
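A minimal reading of GIE: one kernel bump per species, centered on its centroid, with bandwidth set by the centroid-to-farthest-occurrence distance, summed into a surface whose peaks suggest candidate areas of endemism. The coordinates and species below are synthetic, and the Gaussian bump is an assumed stand-in for the paper's exact kernel.

```python
import numpy as np

def gie_surface(occurrences, grid_x, grid_y):
    """occurrences: dict species -> (n_i, 2) lon/lat array.
    Returns the summed per-species kernel surface over the grid."""
    XX, YY = np.meshgrid(grid_x, grid_y)
    surf = np.zeros_like(XX)
    for pts in occurrences.values():
        cen = pts.mean(axis=0)
        radius = np.max(np.hypot(*(pts - cen).T))      # area of influence
        d2 = (XX - cen[0]) ** 2 + (YY - cen[1]) ** 2
        surf += np.exp(-d2 / (2.0 * max(radius, 1e-6) ** 2))
    return surf

rng = np.random.default_rng(0)
occ = {f"sp{i}": rng.normal([-50, -15], 1.0, size=(12, 2)) for i in range(5)}
surface = gie_surface(occ, np.linspace(-55, -45, 100), np.linspace(-20, -10, 100))
```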
Exploring the Brighter-fatter Effect with the Hyper Suprime-Cam
NASA Astrophysics Data System (ADS)
Coulton, William R.; Armstrong, Robert; Smith, Kendrick M.; Lupton, Robert H.; Spergel, David N.
2018-06-01
The brighter-fatter effect has been postulated to arise from the build-up of a transverse electric field produced as photocharges accumulate in the pixels' potential wells. We investigate the brighter-fatter effect in the Hyper Suprime-Cam by examining flat fields and moments of stars. We observe deviations from the expected linear relation in the photon transfer curve (PTC), luminosity-dependent correlations between pixels in flat-field images, and a luminosity-dependent point-spread function (PSF) in stellar observations. Under the key assumptions of translation invariance and Maxwell's equations in the quasi-static limit, we give a first-principles proof that the effect can be parameterized by a translationally invariant scalar kernel. We describe how this kernel can be estimated from flat fields and discuss how it has been used to remove the brighter-fatter distortions in Hyper Suprime-Cam images. We find that our correction restores the expected linear relation in the PTCs and significantly reduces, but does not completely remove, the luminosity dependence of the PSF over a wide range of magnitudes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Karen E.; van Rooyen, Isabella J.
2016-11-01
AGR-1 fuel Compact 4-3-3 achieved 18.63% FIMA and was exposed subsequently to a safety test at 1600°C. Two particles, AGR1-433-003 and AGR1-433-007, with measured-to-calculated 110mAg inventories of <22% and 100%, respectively, were selected for comparative electron microprobe analysis to determine whether the distribution or abundance of fission products differed proximally and distally from the deformed kernel in AGR1-433-003, and how this compared to the fission product distribution in AGR1-433-007. On the deformed side of AGR1-433-003, Xe, Cs, I, Eu, Sr, and Te concentrations in the kernel buffer interface near the protruded kernel were up to six times higher than on the opposite, non-deformed side. At the SiC-inner pyrolytic carbon (IPyC) interface proximal to the deformed kernel, Pd and Ag concentrations were 1.2 wt% and 0.04 wt%, respectively, whereas at the SiC-IPyC interface distal from the kernel deformation those elements measured 0.4 and 0.01 wt%, respectively. Palladium and Ag concentrations at the SiC-IPyC interface of AGR1-433-007 were 2.05 and 0.05 wt%, respectively. Rare earth element concentrations at the SiC-IPyC interface of AGR1-433-007 were a factor of ten higher than at the SiC-IPyC interfaces measured in particle AGR1-433-003. Palladium permeated the SiC layer of AGR1-433-007 and the non-deformed SiC layer of AGR1-433-003.
Vokoun, Jason C.; Rabeni, Charles F.
2005-01-01
Flathead catfish Pylodictis olivaris were radio-tracked in the Grand River and Cuivre River, Missouri, from late July until they moved to overwintering habitats in late October. Fish moved within a definable area, and although occasional long-distance movements occurred, the fish typically returned to the previously occupied area. Seasonal home range was calculated with kernel density estimation, which can be interpreted as a probabilistic utilization distribution that documents the internal structure of the estimate by delineating the portions of the range that were used a specified percentage of the time. A traditional linear range also was reported. Most flathead catfish (89%) had one 50% kernel-estimated core area, whereas 11% of the fish split their time between two core areas. Core areas were typically in the middle of the 90% kernel-estimated home range (58%), although several fish had core areas in upstream (26%) or downstream (16%) portions of the home range. Home-range size did not differ by river, sex, or size and was highly variable among individuals. The median 95% kernel estimate was 1,085 m (range, 70-69,090 m) for all fish. The median 50% kernel-estimated core area was 135 m (10-2,260 m). The median linear range was 3,510 m (150-50,400 m). Fish pairs with core areas in the same or neighboring pools had static joint space use values of up to 49% (area of intersection index), indicating substantial overlap and use of the same area. However, all fish pairs had low dynamic joint space use values (<0.07; coefficient of association), indicating that fish pairs were temporally segregated, rarely occurring in the same location at the same time.
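The kernel utilization-distribution estimate behind these home ranges can be sketched with a 2-D KDE: find the density threshold that encloses the desired probability mass and sum the enclosed grid-cell areas. The bandwidth choice (Scott's rule, SciPy's default) and the synthetic fixes are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kernel_home_range(locs, level=0.95, grid_pts=200):
    """Area of the `level` kernel utilization distribution: fit a 2-D KDE to
    telemetry fixes, then keep the smallest set of grid cells holding `level`
    of the probability mass."""
    kde = gaussian_kde(locs.T)
    pad = 0.2 * np.ptp(locs, axis=0)
    xs = np.linspace(locs[:, 0].min() - pad[0], locs[:, 0].max() + pad[0], grid_pts)
    ys = np.linspace(locs[:, 1].min() - pad[1], locs[:, 1].max() + pad[1], grid_pts)
    XX, YY = np.meshgrid(xs, ys)
    dens = kde(np.vstack([XX.ravel(), YY.ravel()]))
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    order = np.sort(dens)[::-1]                      # densest cells first
    idx = min(np.searchsorted(np.cumsum(order) * cell, level), order.size - 1)
    return np.count_nonzero(dens >= order[idx]) * cell

fixes = np.random.default_rng(2).normal([0.0, 0.0], [300.0, 80.0], size=(150, 2))
print(kernel_home_range(fixes, 0.95), kernel_home_range(fixes, 0.50))  # m^2
```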
Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.
Heuel-Fabianek, Burkhard; Hille, Ralf
2005-01-01
During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an on-site interim storage building with concrete shielding on its side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surfaces of the containers. Dose rate simulations with the Monte Carlo transport code MCNP, including the effects of skyshine, are compared with measured dosimetric data at several locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrates the need for a more precise definition of the source. Both the measured and the modelled dose rates confirmed that the legal limit (<1 mSv/a) is met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations, and additionally for benchmarking of computer codes, if the critical aspects discussed with respect to the source can be addressed adequately.
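For contrast with the MCNP transport runs, the point-kernel estimate of the direct dose rate is a one-liner per source. In this sketch the buildup form, attenuation coefficient, and flux-to-dose conversion factor are placeholders, and skyshine is deliberately out of scope, since a bare point kernel does not capture it.

```python
import numpy as np

def point_kernel_dose_rate(sources, detector, mu, kerma_coeff,
                           buildup=lambda mfp: 1.0 + mfp):
    """Point-kernel photon dose rate: attenuated 1/r^2 flux times a buildup factor.

    sources: list of (position (3,), photon emission rate S [1/s]);
    mu: linear attenuation coefficient of the medium [1/m];
    kerma_coeff: flux-to-dose conversion factor;
    buildup: buildup factor vs. optical thickness (a simple linear form
    stands in here for tabulated buildup factors)."""
    det = np.asarray(detector, float)
    total = 0.0
    for pos, S in sources:
        r = np.linalg.norm(det - np.asarray(pos, float))
        mfp = mu * r
        flux = S * np.exp(-mfp) / (4.0 * np.pi * r ** 2)
        total += kerma_coeff * buildup(mfp) * flux
    return total

# One drum approximated by a single point source (illustrative numbers only).
rate = point_kernel_dose_rate([(np.zeros(3), 1e9)], [5.0, 0.0, 1.0],
                              mu=0.01, kerma_coeff=1e-15)
```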
Verma, Prashant; Doyley, Marvin M
2017-09-01
We derived the Cramér-Rao lower bound for 2-D estimators employed in quasi-static elastography. To illustrate the theory, we modeled the 2-D point spread function as a sinc-modulated sine pulse in the axial direction and a sinc function in the lateral direction. We compared theoretical predictions of the variance incurred in displacements and strains when quasi-static elastography was performed under varying conditions (different scanning methods, different configurations of conventional linear array imaging, and different-size kernels) with those measured from simulated or experimentally acquired data. We performed studies to illustrate the application of the derived expressions to vascular elastography with plane wave and compounded plane wave imaging. Standard deviations in lateral displacements were an order of magnitude higher than those in axial displacements. Additionally, the derived expressions predicted that peak performance should occur when 2% strain is applied, the same order of magnitude as observed in simulations (1%) and experiments (1%-2%). We assessed how different configurations of conventional linear array imaging (number of active reception and transmission elements) influenced the quality of axial and lateral strain elastograms. The theoretical expressions predicted that 2-D echo tracking should be performed with wide kernels, but that the kernel length should be selected using knowledge of the magnitude of the applied strain: specifically, longer kernels for small strains (<5%) and shorter kernels for larger strains. Although the general trends of theoretical predictions and experimental observations were similar, biases incurred during beamforming and subsample displacement estimation produced noticeable differences.
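The assumed 2-D PSF model is separable and easy to generate; the envelope widths, center frequency, and the two-way factor in the axial carrier below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def ultrasound_psf_2d(dz_mm, dx_mm, f0_mhz=5.0, c=1540.0,
                      res_z_mm=0.3, res_x_mm=0.6):
    """Separable 2-D PSF: sinc-modulated sine pulse axially, sinc laterally.
    The axial carrier uses a two-way (pulse-echo) spatial frequency of 2*f0/c."""
    lam_mm = c / (f0_mhz * 1e6) * 1e3                 # wavelength in mm
    axial = np.sinc(dz_mm / res_z_mm) * np.sin(4.0 * np.pi * dz_mm / lam_mm)
    lateral = np.sinc(dx_mm / res_x_mm)
    return np.outer(axial, lateral)

z = np.linspace(-1.0, 1.0, 257)   # axial offsets, mm
x = np.linspace(-2.0, 2.0, 129)   # lateral offsets, mm
psf = ultrasound_psf_2d(z, x)
```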
Validation of Born Traveltime Kernels
NASA Astrophysics Data System (ADS)
Baig, A. M.; Dahlen, F. A.; Hung, S.
2001-12-01
Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective, even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.
NASA Astrophysics Data System (ADS)
Tran, H.; Hartmann, J. M.
2011-06-01
Collision-induced velocity changes for pure H2 have been computed from classical molecular dynamics simulations (MDS). The results have been compared with the Keilson-Storer (KS) model from four different points of view. The first involves various autocorrelation functions associated with the velocity. The second and third give more detailed information: the time evolutions of conditional probabilities for changes of the velocity modulus and orientation, and the collision kernels themselves. The fourth considers the evolution, with density, of the half-widths of the Q(1) lines of the isotropic Raman (1-0) fundamental band and of the (2-0) overtone quadrupole band. These spectroscopic data enable an indirect test of the models, since velocity changes translate into line-shape modifications through the speed dependence of collisional parameters and the Dicke narrowing of the Doppler contribution to the profile. The results indicate that, while the KS approach gives a poor description of detailed velocity-to-velocity changes, it leads to accurate results for the correlation functions and spectral shapes, quantities involving large averages over the velocity. It is also shown that the use of collision kernels directly derived from the MDS leads to an almost perfect prediction of all considered quantities (correlation functions, conditional probabilities, and spectral shapes). Finally, the results stress the need for very accurate calculations of line-broadening and -shifting coefficients from the intermolecular potential, to obviate the need for experimental data and permit fully meaningful tests of the models. H. Tran, J.M. Hartmann, J. Chem. Phys. 130, 094301 (2009).
Systemic Growth of F. graminearum in Wheat Plants and Related Accumulation of Deoxynivalenol
Moretti, Antonio; Panzarini, Giuseppe; Somma, Stefania; Campagna, Claudio; Ravaglia, Stefano; Logrieco, Antonio F.; Solfrizzo, Michele
2014-01-01
Fusarium head blight (FHB) is an important disease of wheat worldwide, caused mainly by Fusarium graminearum (syn. Gibberella zeae). This fungus can be highly aggressive and can produce several mycotoxins, such as deoxynivalenol (DON), a metabolite known to be harmful to humans, animals, and plants. The fungus can overwinter on wheat residues and in the soil, and usually attacks the wheat plant at flowering, infecting the heads and contaminating the kernels at maturity. Contaminated kernels are sometimes used as seeds for the following year's cultivation. Little is known about the ability of F. graminearum strains occurring on wheat seeds to be transmitted to the plant and to contribute to the final DON contamination of kernels. Therefore, this study evaluated: (a) the capability of F. graminearum causing FHB of wheat to be transmitted from the seeds or soil to the kernels at maturity, and the progress of the fungus within the plant at different growth stages; and (b) the levels of DON contamination in both plant tissues and kernels. The study was carried out over two years in a climatic chamber. The F. graminearum strain selected for inoculation was followed within the plant using the vegetative compatibility technique and quantified by real-time PCR. Chemical analyses of DON were carried out using immunoaffinity cleanup and HPLC/UV/DAD. The study showed that F. graminearum originating from seeds or soil can grow systemically in the plant tissues, with the exception of heads and kernels; there seems to be a barrier that inhibits colonization of the heads by the fungus. High levels of DON and F. graminearum were found in crowns, stems, and straw, whereas low levels of DON and no detectable F. graminearum were found in both heads and kernels. Finally, in all parts of the plant (heads, crowns, and stems at the milk and vitreous ripening stages, and straw at vitreous ripening), significant quantities of DON-3-glucoside (DON-3G), a product of DON glycosylation, also accumulated, with decreasing levels in straw, crowns, stems, and kernels. The presence of DON and DON-3G in heads and kernels without the occurrence of F. graminearum may be explained by their water solubility, which could facilitate their translocation from the stem to the heads and kernels. The presence of DON-3G at levels 23 times higher than DON in the heads at the milk stage, without the occurrence of F. graminearum, may indicate that active glycosylation of DON also occurs in the head tissues. Finally, the high levels of DON accumulated in straw are worrisome, since they represent an additional source of mycotoxin for livestock. PMID:24727554
Hanft, J M; Jones, R J
1986-06-01
Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.
Calculating the Responses of Self-Powered Radiation Detectors.
NASA Astrophysics Data System (ADS)
Thornton, D. A.
Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self-Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. First, a simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It develops the work of several previous authors and has been incorporated into a computer program (called GENSPD), whose predictions have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where agreement is poor, explanations have been sought and presented. Two major limitations of analytic models have been identified: neglect of current generation in insulators, and over-simplified electron transport treatments. Both are developed in the current work. A second model, based on the Explicit Representation of Radiation Sources and Transport (ERRST), is presented and evaluated for several SPDs in a PWR at the beginning of life. The model simulates the production and subsequent transport of neutrons, gamma rays, and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion, and point-kernel methods. The electron flux associated with the reactor gamma field, as well as the internal charge deposition effects of photon and electron transport, has been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of an SPD is evaluated as the sum of contributions from individual response mechanisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maneru, F; Gracia, M; Gallardo, N
2015-06-15
Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the prior simulation of the treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study acquired on a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and an ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with a local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software dicompyler. Results: Mean liver dose is consistent with partition-method calculations (accepted as a good standard). Tumor dose has not been evaluated due to its high dependence on contouring: small lesion size, hot spots in healthy tissue, and blurred lesion limits can strongly affect the dose distribution in tumors. The extra work comprises exporting and importing images and other DICOM files, creating and calculating a dummy external radiotherapy plan, running the convolution calculation, and evaluating the dose distribution with dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The whole process is short enough to be carried out on the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than the other methods of dose calculation usually applied in the clinic.
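The core numerical step of the VSV method described above is a 3-D convolution of the cumulated-activity map with the voxel S-value kernel. Since the abstract mentions an in-house Python tool, a minimal sketch in Python follows; the array shapes, units, and the kernel file name are illustrative assumptions, not details from the study:

    import numpy as np
    from scipy.signal import fftconvolve

    def vsv_dose(activity_bq_s, s_kernel_gy_per_bq_s):
        """Convolve a 3-D cumulated-activity map (Bq*s per voxel) with a
        voxel S-value kernel (Gy per Bq*s) to obtain absorbed dose (Gy).
        mode='same' keeps the output on the SPECT voxel grid."""
        return fftconvolve(activity_bq_s, s_kernel_gy_per_bq_s, mode="same")

    # Hypothetical usage: a 128^3 SPECT-derived activity map and an
    # 11x11x11 90Y S-value kernel resampled to the SPECT voxel size.
    activity = np.random.rand(128, 128, 128)      # placeholder data
    kernel = np.load("s_values_90Y_4.42mm.npy")   # assumed file name
    dose = vsv_dose(activity, kernel)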
7 CFR 810.602 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...
NASA Astrophysics Data System (ADS)
Panholzer, Martin; Gatti, Matteo; Reining, Lucia
2018-04-01
The charge-density response of extended materials is usually dominated by the collective oscillation of electrons, the plasmons. Beyond this feature, however, intriguing many-body effects are observed. They cannot be described by one of the most widely used approaches for the calculation of dielectric functions, which is time-dependent density functional theory (TDDFT) in the adiabatic local density approximation (ALDA). Here, we propose an approximation to the TDDFT exchange-correlation kernel which is nonadiabatic and nonlocal. It is extracted from correlated calculations in the homogeneous electron gas, where we have tabulated it for a wide range of wave vectors and frequencies. A simple mean density approximation allows one to use it in inhomogeneous materials where the density varies on a scale of 1.6 r_s or faster. This kernel contains effects that are completely absent in the ALDA; in particular, it correctly describes the double plasmon in the dynamic structure factor of sodium, and it shows the characteristic low-energy peak that appears in systems with low electronic density. It also leads to an overall quantitative improvement of spectra.
7 CFR 810.1202 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...
Kernel Ada Programming Support Environment (KAPSE) Interface Team: Public Report. Volume II.
1982-10-28
essential parameters from our work so far in this area and, using trade-offs concerning these, construct the KIT's recommended alternative. ...environments that are also in the development stages. At this point in development it is essential for the KITEC to provide a forum and act as a focal... standardization in this area. Moreover, this is an area with considerable divergence in proposed approaches. On the other hand, an essential tool from the point of
Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang
2016-01-01
Kernel size is an important component of grain yield in maize breeding programs. To extend our understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred-kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) were also stably expressed in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.
USDA-ARS?s Scientific Manuscript database
Cochliobolus sativus (anamorph: Bipolaris sorokiniana) causes three major diseases in barley and wheat, including spot blotch, common root rot and kernel blight or black point. These diseases significantly reduce the yield and quality of the two most important cereal crops in the US and other region...
IMPLEMENTATION OF THE SMOKE EMISSION DATA PROCESSOR AND SMOKE TOOL INPUT DATA PROCESSOR IN MODELS-3
The U.S. Environmental Protection Agency has implemented Version 1.3 of SMOKE (Sparse Matrix Object Kernel Emission) processor for preparation of area, mobile, point, and biogenic sources emission data within Version 4.1 of the Models-3 air quality modeling framework. The SMOK...
EnviroAtlas - New York, NY - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
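As a rough illustration of the distance weighting described in these EnviroAtlas records (closer intersections weighted higher, nothing counted beyond the search radius), here is a sketch of one kernel-density evaluation in Python; the quartic kernel choice is an assumption consistent with common GIS practice, not EPA's documented algorithm:

    import numpy as np

    def intersection_density(px, py, pts, radius=750.0):
        """Kernel density at pixel (px, py) from intersection points 'pts'
        (N x 2 array of x, y coordinates in meters). A quartic (biweight)
        kernel weights nearer intersections higher; points beyond 'radius'
        contribute nothing."""
        d = np.hypot(pts[:, 0] - px, pts[:, 1] - py)
        w = np.where(d < radius, (1.0 - (d / radius) ** 2) ** 2, 0.0)
        # normalize so the kernel integrates to 1 over the search area
        return (3.0 / (np.pi * radius ** 2)) * w.sum()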
EnviroAtlas - Paterson, NJ - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Fresno, CA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Green Bay, WI - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Des Moines, IA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Woodbine, IA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Phoenix, AZ - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Pittsburgh, PA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - New Bedford, MA - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Milwaukee, WI - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Austin, TX - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Cleveland, OH - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Portland, ME - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Portland, OR - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Durham, NC - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Tampa, FL - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas - Memphis, TN - Estimated Intersection Density of Walkable Roads
This EnviroAtlas dataset estimates the intersection density of walkable roads within a 750 meter radius of any given 10 meter pixel in the community. Intersections are defined as any point where 3 or more roads meet and density is calculated using kernel density, where closer intersections are weighted higher than further intersections. Intersection density is highly correlated with walking for transportation. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...
2017-06-01
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
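For readers unfamiliar with the SpMM kernel at the heart of this eigensolver, the sketch below shows the operation (a sparse matrix times a block of vectors) using SciPy's CSR type; the CSB format and the cache-blocking techniques the paper evaluates are not reproduced here:

    import numpy as np
    import scipy.sparse as sp

    n, k = 10000, 8                      # matrix dimension, block width
    A = sp.random(n, n, density=1e-3, format="csr")
    A = A + A.T                          # symmetric, as in CI matrices
    X = np.random.rand(n, k)             # block of k vectors

    Y = A @ X     # SpMM: one sweep over A updates all k vectors at once
    Z = A.T @ X   # SpMM^T; with CSB both directions reuse one layout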
NASA Astrophysics Data System (ADS)
Ngo, N. H.; Tran, H.; Gamache, R. R.; Bermejo, D.; Domenech, J.-L.
2012-08-01
The modeling of the shape of H2O lines perturbed by N2 (and air) using the Keilson-Storer (KS) kernel for collision-induced velocity changes is revisited with classical molecular dynamics simulations (CMDS). The latter have been performed for a large number of molecules starting from intermolecular-potential surfaces. Contrary to the assumption made in a previous study [H. Tran, D. Bermejo, J.-L. Domenech, P. Joubert, R. R. Gamache, and J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transf. 108, 126 (2007); doi:10.1016/j.jqsrt.2007.03.009], the results of these CMDS show that the velocity-orientation and -modulus changes statistically occur at the same time scale. This validates the use of a single memory parameter in the Keilson-Storer kernel to describe both the velocity-orientation and -modulus changes. The CMDS results also show that velocity- and rotational state-changing collisions are statistically partially correlated. A partially correlated speed-dependent Keilson-Storer model has thus been used to describe the line-shape. For this, the velocity changes KS kernel parameters have been directly determined from CMDS, while the speed-dependent broadening and shifting coefficients have been calculated with a semi-classical approach. Comparisons between calculated spectra and measurements of several lines of H2O broadened by N2 (and air) in the ν3 and 2ν1 + ν2 + ν3 bands for a wide range of pressure show very satisfactory agreement. The evolution of non-Voigt effects from Doppler to collisional regimes is also presented and discussed.
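A minimal sketch of the Keilson-Storer collision kernel referred to above: each velocity-changing collision keeps a fraction α (the memory parameter) of the pre-collision velocity and redraws the remainder from the Maxwell-Boltzmann distribution. The parameter values here are illustrative, not those fitted in the paper:

    import numpy as np

    def ks_collision(v, alpha, v_thermal, rng=np.random.default_rng()):
        """One Keilson-Storer velocity-changing collision, per component:
        v' = alpha * v + sqrt(1 - alpha**2) * Maxwellian noise.
        alpha -> 1 is a soft collision (strong memory); alpha -> 0 is a
        hard, fully thermalizing collision."""
        noise = rng.normal(0.0, v_thermal, size=np.shape(v))
        return alpha * v + np.sqrt(1.0 - alpha ** 2) * noise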
Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization
Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu
2012-01-01
When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in calculation, which are using PCA preprocessing, and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA based multimodal feature extraction, that is, the feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA on each single modal before performing SDA. While in KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data by applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and KSDA-GSVD achieves the best recognition performance. PMID:22778600
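To make the subclass construction above concrete, the sketch below computes simplified between-subclass and within-subclass scatter matrices, with each (person, modality) pair forming one subclass. It is a didactic, global-mean variant under assumed array layouts; the GSVD, kernelization, and SDA's exact pairing of subclasses from different classes are omitted:

    import numpy as np

    def sda_scatters(X, person, modality):
        """X: N x d features; person, modality: length-N label arrays.
        Each (person, modality) pair is treated as one subclass."""
        mu = X.mean(axis=0)
        Sb = np.zeros((X.shape[1],) * 2)   # between-subclass scatter
        Sw = np.zeros_like(Sb)             # within-subclass scatter
        for p in np.unique(person):
            for m in np.unique(modality):
                Xi = X[(person == p) & (modality == m)]
                if len(Xi) == 0:
                    continue
                mi = Xi.mean(axis=0)
                Sb += len(Xi) * np.outer(mi - mu, mi - mu)
                Sw += (Xi - mi).T @ (Xi - mi)
        return Sb, Sw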
7 CFR 810.802 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...
Eggermont, Florieke; Derikx, Loes C; Free, Jeffrey; van Leeuwen, Ruud; van der Linden, Yvette M; Verdonschot, Nico; Tanck, Esther
2018-03-06
In a multi-center patient study, using different CT scanners, CT-based finite element (FE) models are utilized to calculate failure loads of femora with metastases. Previous studies showed that using different CT scanners can result in different outcomes. This study aims to quantify the effects of (i) different CT scanners; (ii) different CT protocols with variations in slice thickness, field of view (FOV), and reconstruction kernel; and (iii) air between calibration phantom and patient, on Hounsfield Units (HU), bone mineral density (BMD), and FE failure load. Six cadaveric femora were scanned on four CT scanners. Scans were made with multiple CT protocols and with or without an air gap between the body model and calibration phantom. HU and calibrated BMD were determined in cortical and trabecular regions of interest. Non-linear isotropic FE models were constructed to calculate failure load. Mean differences between CT scanners varied up to 7% in cortical HU, 6% in trabecular HU, 6% in cortical BMD, 12% in trabecular BMD, and 17% in failure load. Changes in slice thickness and FOV had little effect (≤4%), while reconstruction kernels had a larger effect on HU (16%), BMD (17%), and failure load (9%). Air between the body model and calibration phantom slightly decreased the HU, BMD, and failure loads (≤8%). In conclusion, this study showed that quantitative analysis of CT images acquired with different CT scanners, and particularly reconstruction kernels, can induce relatively large differences in HU, BMD, and failure loads. Additionally, if possible, air artifacts should be avoided. © 2018 The Authors. Journal of Orthopaedic Research® published by Wiley Periodicals, Inc. on behalf of the Orthopaedic Research Society.
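The HU-to-BMD step in such studies is typically a linear calibration against the phantom inserts; a sketch under that assumption (the slope and intercept are placeholders, not values from this study):

    import numpy as np

    def hu_to_bmd(hu, slope=0.0008, intercept=-0.0037):
        """Convert CT Hounsfield Units to calibrated BMD (g/cm^3) via a
        linear fit to the calibration-phantom inserts. The slope and
        intercept must be refit per scanner and reconstruction kernel --
        which is why the study reports inter-scanner differences."""
        return slope * np.asarray(hu) + intercept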
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
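The composite kernel in QWMK-ELM is a weighted sum of base Gram matrices. The sketch below shows only that combination step plus the standard KELM closed-form solve; the QPSO search over the weights and kernel parameters is omitted, and the weight values are illustrative:

    import numpy as np

    def gaussian_K(X, Y, gamma):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def poly_K(X, Y, degree=2, c=1.0):
        return (X @ Y.T + c) ** degree

    def composite_K(X, Y, w=(0.7, 0.3), gamma=0.5):
        # weighted multiple-kernel combination; w would come from QPSO
        return w[0] * gaussian_K(X, Y, gamma) + w[1] * poly_K(X, Y)

    def kelm_train(X, T, C=100.0):
        # KELM output weights: solve (K + I/C) beta = T
        K = composite_K(X, X)
        return np.linalg.solve(K + np.eye(len(X)) / C, T)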
Ambrosino, P; Fresa, R; Fogliano, V; Monti, S M; Ritieni, A
1999-12-01
A new supercritical extraction methodology was applied to extract azadirachtin A (AZA-A) from neem seed kernels. Supercritical and liquid carbon dioxide (CO2) were used as extractive agents in a three-separation-stage supercritical pilot plant. Subcritical conditions were tested as well. Comparisons were carried out by calculating the efficiency of the pilot plant with respect to the milligrams of AZA-A extracted per kilogram of seeds (ms/mo). The most convenient extraction was achieved using an ms/mo ratio of 119 rather than 64. For supercritical extraction, a separation of cuticular waxes from the oil was set up in the pilot plant. HPLC and electrospray mass spectrometry were used to monitor the yield of AZA-A extraction.
Production of near-full density uranium nitride microspheres with a hot isostatic press
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMurray, Jacob W.; Kiggans, Jr., Jim O.; Helmreich, Grant W.
Depleted uranium nitride (UN) kernels with diameters ranging from 420 to 858 microns and theoretical densities (TD) between 87 and 91 percent were postprocessed using a hot isostatic press (HIP) in an argon gas medium. This treatment was shown to increase the TD to above 97%. Uranium nitride is highly reactive with oxygen; therefore, a novel crucible design was implemented to remove impurities in the argon gas via in situ gettering and so avoid oxidation of the UN kernels. The density before and after each HIP procedure was calculated from the average weight, volume, and ellipticity determined with established particle characterization techniques. Furthermore, micrographs confirmed the nearly full densification of the particles using the gettering approach and HIP processing parameters investigated in this work.
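The density bookkeeping described above reduces to simple arithmetic; a sketch, taking the theoretical density of UN as roughly 14.3 g/cm^3 (a commonly cited value, assumed here) and placeholder measurements:

    def percent_td(mass_g, volume_cm3, rho_theoretical=14.3):
        """Percent of theoretical density from the measured average mass
        and volume of a batch of kernels."""
        return 100.0 * (mass_g / volume_cm3) / rho_theoretical

    # e.g. a batch near 91% TD before HIP (placeholder numbers)
    before = percent_td(0.00403, 0.00031)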
Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise Collin
2013-09-01
The PARFUME (PARticle Fuel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel in the frame of the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed under irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to assess the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them before positively concluding on the applicability of FCM fuel to LWRs.
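Codes of this kind typically turn the layer stresses into failure probabilities via Weibull statistics; a minimal sketch of that standard relation (the characteristic strength and Weibull modulus below are placeholders, not PARFUME's internal data):

    import numpy as np

    def sic_failure_probability(sigma_mpa, sigma0_mpa=873.0, m=6.0):
        """Weibull failure probability of a SiC layer at peak tangential
        stress sigma (MPa): P_f = 1 - exp(-(sigma/sigma0)^m), sigma > 0."""
        s = np.maximum(np.asarray(sigma_mpa, dtype=float), 0.0)
        return 1.0 - np.exp(-(s / sigma0_mpa) ** m)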
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that rely on the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to produce user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and it can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning a different weight to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
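The Gaussian-kernel PDF evaluation that QVAST performs can be sketched with SciPy; bandwidth selection, the part QVAST automates from the input parameters, is reduced here to Scott's rule, an assumption for illustration:

    import numpy as np
    from scipy.stats import gaussian_kde

    # past vent locations (x, y in map units) -- placeholder data
    vents = np.random.rand(2, 60) * 1000.0

    kde = gaussian_kde(vents, bw_method="scott")   # bandwidth choice
    gx, gy = np.mgrid[0:1000:200j, 0:1000:200j]
    pdf = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(200, 200)
    pdf /= pdf.sum()   # spatial probability of a future vent opening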
NASA Astrophysics Data System (ADS)
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki
2017-02-01
Local statistics are widely utilized for quantification and image processing in OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a trade-off between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This method forms superpixels by clustering image pixels in a six-dimensional (6-D) feature space (two spatial dimensions and four optical-feature dimensions). All image pixels were clustered based on their spatial proximity and optical-feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be utilized as a local-statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it lowers the computational cost of such processing.
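The 6-D clustering described above can be illustrated with a SLIC-like k-means over concatenated spatial and optical features; the feature scaling, cluster count, and JM-OCT specifics below are assumptions, not the authors' exact procedure:

    import numpy as np
    from sklearn.cluster import KMeans

    def superpixels(scatter, octa, biref, dopu, n_segments=800, s_weight=0.2):
        """Cluster pixels in a 6-D feature space: (y, x) plus four optical
        features. 's_weight' trades spatial compactness against feature
        similarity, as in SLIC-style superpixel methods."""
        h, w = scatter.shape
        yy, xx = np.mgrid[0:h, 0:w]
        feats = np.column_stack([
            s_weight * yy.ravel(), s_weight * xx.ravel(),
            scatter.ravel(), octa.ravel(), biref.ravel(), dopu.ravel(),
        ])
        labels = KMeans(n_clusters=n_segments, n_init=4).fit_predict(feats)
        return labels.reshape(h, w)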
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
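The truncated distance kernel itself is simple enough to state in a few lines; this follows the published definition K(x, y) = max(rho - ||x - y||_1, 0), with rho pregiven as the abstract notes:

    import numpy as np

    def tl1_kernel(X, Y, rho):
        """Truncated L1-distance kernel: nonzero (hence 'local') only when
        the L1 distance between points is below rho. Not positive
        semidefinite, but usable as a drop-in Gram matrix in many
        standard toolboxes."""
        d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)
        return np.maximum(rho - d1, 0.0)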
Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012
Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade
2017-01-01
ABSTRACT OBJECTIVE To analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. METHODS An exploratory study considering cases of classical dengue and severe dengue with laboratory confirmation of the infection in the city of Rio de Janeiro during 2011/2012. Cases notified in the Information System for Notifiable Diseases in 2011 and 2012 were georeferenced using the "street" and "number" fields, with the automatic geocoding process of the ArcGIS 10 Geocoding tool. The spatial analysis was done with the kernel density estimator. RESULTS Kernel density pointed to hotspots for classical dengue that did not coincide geographically with those for severe dengue and were located in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process lost 41% of classical dengue records and 17% of severe dengue records because of address problems in the notification forms. CONCLUSIONS The hotspots near the favelas suggest that the social vulnerability of these localities may influence the occurrence of the disease, since the supply of, and access to, essential goods and services is deficient there. To reduce this vulnerability, interventions must be related to macroeconomic policies. PMID:28832752
Comparative analysis of genetic architectures for nine developmental traits of rye.
Masojć, Piotr; Milczarski, P; Kruszona, P
2017-08-01
Genetic architectures of plant height, stem thickness, spike length, awn length, heading date, thousand-kernel weight, kernel length, leaf area and chlorophyll content were aligned on the DArT-based high-density map of the 541 × Ot1-3 RIL population of rye using the genes interaction assorting by divergent selection (GIABDS) method. Complex sets of QTL for particular traits contained 1-5 loci of the epistatic D class and 10-28 loci of the hypostatic, mostly R and E, classes controlling trait variation through D-E or D-R types of two-locus interactions. QTL were distributed on each of the seven rye chromosomes, either in unique positions or as coinciding loci for 2-8 traits. Detection of considerable numbers of the reversed (D', E' and R') classes of QTL might be attributed to the transgression effects observed for most of the studied traits. The first examples of the E* and F QTL classes defined in the model are reported for awn length, leaf area, thousand-kernel weight and kernel length. The results of this study extend the experimental data to 11 quantitative traits (together with pre-harvest sprouting and alpha-amylase activity) for which genetic architectures fit the model of the mechanism underlying allele distribution within the tails of bi-parental populations. They are also a valuable starting point for map-based searches for genes underlying the detected QTL and for planning advanced marker-assisted multi-trait breeding strategies.
Acceleration of GPU-based Krylov solvers via data transfer reduction
Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...
2015-04-08
Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as sparse matrix-vector products, are crucial for the subsequent development of high-performance graphics processing unit accelerated Krylov subspace iterative methods.
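To make the memory-bound nature of the targeted operation concrete, here is a plain CSR sparse matrix-vector product in NumPy. It only illustrates the data layout and access pattern; the paper's contribution lies in fusing and specializing such kernels on the GPU.

```python
# Illustrative sketch only: SpMV for a matrix in compressed sparse row format.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for A stored as CSR arrays (values, col_idx, row_ptr)."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        # One dot product per row over that row's nonzeros
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y
```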
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized simultaneously by QPSO before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
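A minimal sketch of the underlying KELM machinery with a weighted composite kernel is given below; the base-kernel weights and parameters are fixed by hand here, whereas the paper optimizes them with QPSO.

```python
# Sketch of a kernel ELM with a weighted composite kernel (assumed notation;
# weights and kernel parameters are placeholders, not QPSO-optimized values).
import numpy as np

def rbf(X, Y, gamma=0.5):          # Gaussian base kernel
    d2 = ((X[:, None] - Y[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2):          # polynomial base kernel
    return (X @ Y.T + 1.0) ** degree

def composite(X, Y, w=(0.6, 0.4)):
    return w[0] * rbf(X, Y) + w[1] * poly(X, Y)

def kelm_fit(X, T, C=10.0):
    """Output weights beta = (K + I/C)^-1 T, the standard KELM solution."""
    K = composite(X, X)
    return np.linalg.solve(K + np.eye(K.shape[0]) / C, T)

def kelm_predict(X_new, X, beta):
    return composite(X_new, X) @ beta

# Usage with placeholder sensor features and one-hot class targets
X = np.random.rand(120, 8)
T = np.eye(3)[np.random.randint(0, 3, 120)]
beta = kelm_fit(X, T)
scores = kelm_predict(np.random.rand(4, 8), X, beta)   # class = argmax per row
```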
von Spiczak, Jochen; Mannil, Manoj; Peters, Benjamin; Hickethier, Tilman; Baer, Matthias; Henning, André; Schmidt, Bernhard; Flohr, Thomas; Manka, Robert; Maintz, David; Alkadhi, Hatem
2018-05-23
The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon counting detector (PCD) computed tomography (CT) for coronary stent imaging and to evaluate to which extent iterative reconstructions can compensate for potential increases in image noise. For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. Tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU simulating epicardial fat. The phantom was scanned in a modified second generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy integrating detector and PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I46₃). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE level 3 (Q70₃) and level 5 (Q70₅) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. Differences in in-stent diameter difference, in-stent attenuation difference, image sharpness, and image noise were tested using a paired-sample t test corrected for multiple comparisons. Interreader and intrareader reliability were excellent (γ = 0.953, ICCs = 0.891-0.999, and γ = 0.996, ICCs = 0.918-0.999, respectively). Reconstructions using the dedicated sharp convolution kernel yielded significantly better results regarding image quality (B46: 0.4 ± 0.5 vs D70: 2.9 ± 0.3; P < 0.001), in-stent diameter difference (1.5 ± 0.3 vs 1.0 ± 0.3 mm; P < 0.001), and image sharpness (728 ± 246 vs 2069 ± 411 CT numbers/voxel; P < 0.001). Regarding in-stent attenuation difference, no significant difference was observed between the 2 kernels (151 ± 76 vs 158 ± 92 CT numbers; P = 0.627). Noise was significantly higher in all sharp convolution kernel images but was reduced by 41% and 59% by applying SAFIRE levels 3 and 5, respectively (B46: 16 ± 1, D70: 111 ± 3, Q70₃: 65 ± 2, Q70₅: 46 ± 2 CT numbers; P < 0.001 for all comparisons). A dedicated sharp convolution kernel for PCD CT imaging of coronary stents yields superior qualitative and quantitative image characteristics compared with conventional reconstruction kernels. Resulting higher noise levels in sharp kernel PCD imaging can be partially compensated with iterative image reconstruction techniques.
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
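The following sketch illustrates kernel PCA with a fractional power polynomial Gram matrix, keeping only the components with positive eigenvalues as the paper does. The sign·|·|^d convention used for negative inner products is one common real-valued choice and is an assumption here, as is every parameter value.

```python
# Sketch: kernel PCA with a fractional power polynomial "kernel",
# k(x, y) = sign(x.y) * |x.y|**d with 0 < d < 1 (assumed convention).
import numpy as np

def fpp_gram(X, d=0.8):
    S = X @ X.T
    return np.sign(S) * np.abs(S) ** d

X = np.random.randn(50, 20)            # placeholder feature vectors
K = fpp_gram(X)

# Centre the Gram matrix in feature space
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J

# The Gram matrix need not be PSD, so keep only positive eigenvalues,
# mirroring the paper's recipe for deriving real kernel PCA features.
evals, evecs = np.linalg.eigh(Kc)
keep = evals > 1e-10
features = evecs[:, keep] * np.sqrt(evals[keep])   # kernel PCA coordinates
```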
A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences among the numbers of support vectors and the CPU times of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with a multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
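Schematically, the recommendation pipeline can be pictured as below; the meta-features, the meta-knowledge base and the multi-label learner are invented placeholders, not the paper's actual choices.

```python
# Rough sketch of multi-label kernel recommendation from data-set meta-features
# (all names and data below are hypothetical placeholders).
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier

kernels = ["linear", "rbf", "polynomial", "sigmoid", "laplace"]

# Meta-knowledge base: one row of meta-features per historical data set,
# one binary column per kernel marking whether it performed acceptably.
meta_X = np.random.rand(132, 5)                      # e.g. size, dim, skew, ...
meta_Y = np.random.randint(0, 2, (132, len(kernels)))

recommender = MultiOutputClassifier(RandomForestClassifier(n_estimators=100))
recommender.fit(meta_X, meta_Y)

# Recommend kernels for a new data set from its characteristics
new_features = np.random.rand(1, 5)
flags = recommender.predict(new_features)[0]
recommended = [k for k, f in zip(kernels, flags) if f]
```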
Tavakoli, Javad; Emadi, Teymour; Hashemi, Seyed Mohammad Bagher; Mousavi Khaneghah, Amin; Munekata, Paulo Eduardo Sichetti; Lorenzo, Jose Manuel; Brnčić, Mladen; Barba, Francisco J
2018-05-01
The oxidative stability and chemical composition of Amygdalus reuteri kernel oil (ARKO) were evaluated and compared to those of Amygdalus scoparia kernel oil (ASKO) and extra virgin olive oil (EVOO) during and after heating in an oven (170 °C for 8 h). The oxidative stability analysis was carried out by measuring the changes in conjugated dienes, carbonyl and acid values, as well as the oil/oxidative stability index, and their correlation with the antioxidant compounds (tocopherols, polyphenols, and sterols). Oleic acid was the predominant fatty acid of ARKO (65.5%). The calculated oxidizability values of ARKO, ASKO and EVOO were 3.29, 3.24 and 2.00, and the iodine values were 100.0, 101.4 and 81.9, respectively. Due to the high wax content (4.5% and 3.3%, respectively), the saponification numbers of ARKO and ASKO (96.4 and 99.8, respectively) were lower than that of EVOO (169.7). ARKO had the highest oxidative stability, followed by ASKO and EVOO. Therefore, ARKO can be introduced as a new source of edible oil with high oxidative stability. Copyright © 2018. Published by Elsevier Ltd.
Small-scale modification to the lensing kernel
NASA Astrophysics Data System (ADS)
Hadzhiyska, Boryana; Spergel, David; Dunkley, Joanna
2018-02-01
Calculations of the cosmic microwave background (CMB) lensing power implemented into the standard cosmological codes such as camb and class usually treat the surface of last scatter as an infinitely thin screen. However, since the CMB anisotropies are smoothed out on scales smaller than the diffusion length due to the effect of Silk damping, the photons which carry information about the small-scale density distribution come from slightly earlier times than the standard recombination time. The dominant effect is the scale dependence of the mean redshift associated with the fluctuations during recombination. We find that fluctuations at k = 0.01 Mpc⁻¹ come from a characteristic redshift of z ≈ 1090, while fluctuations at k = 0.3 Mpc⁻¹ come from a characteristic redshift of z ≈ 1130. We then estimate the corrections to the lensing kernel and the related power spectra due to this effect. We conclude that neglecting it would result in a deviation from the true value of the lensing kernel at the half percent level at small CMB scales. For an all-sky, noise-free experiment, this corresponds to a ~0.1σ shift in the observed temperature power spectrum on small scales (2500 ≲ l ≲ 4000).
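For context, a minimal sketch of the standard thin-screen convergence kernel that such codes implement is shown below (a common textbook form, in units with c = 1); the paper's correction effectively makes the source distance scale dependent. All numerical values are rough placeholders.

```python
# Sketch of the thin-screen CMB lensing convergence kernel,
# W(chi) = (3/2) Omega_m H0^2 (1 + z) chi (chi_* - chi) / chi_*  (c = 1).
import numpy as np

omega_m = 0.31
H0 = 0.68 / 2997.92458        # H0 in Mpc^-1 for h = 0.68, with c = 1
chi_star = 14000.0            # comoving distance to last scatter, Mpc (approx.)

def lensing_kernel(chi, z):
    """Thin-screen convergence kernel; z is the redshift at lens distance chi."""
    return 1.5 * omega_m * H0**2 * (1.0 + z) * chi * (chi_star - chi) / chi_star

# Example evaluation on a grid (the chi -> z mapping here is schematic only)
chis = np.linspace(1.0, chi_star - 1.0, 500)
zs = np.linspace(0.0, 1089.0, 500)
W = lensing_kernel(chis, zs)
```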
Heat kernel and Weyl anomaly of Schrödinger invariant theory
NASA Astrophysics Data System (ADS)
Pal, Sridip; Grinstein, Benjamín
2017-12-01
We propose a method inspired by discrete light cone quantization to determine the heat kernel for a Schrödinger field theory (Galilean boost invariant with z = 2 anisotropic scaling symmetry) living in d+1 dimensions, coupled to a curved Newton-Cartan background, starting from the heat kernel of a relativistic conformal field theory (z = 1) living in d+2 dimensions. We use this method to show that the Schrödinger field theory of a complex scalar field cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly A^G_{d+1} for the Schrödinger theory is related to the Weyl anomaly of a free relativistic scalar CFT A^R_{d+2} via A^G_{d+1} = 2πδ(m) A^R_{d+2}, where m is the charge of the scalar field under the particle number symmetry. We provide further evidence of the vanishing anomaly by evaluating Feynman diagrams to all orders in perturbation theory. We present an explicit calculation of the anomaly using a regulated Schrödinger operator, without using the null cone reduction technique. We generalize our method to show that a similar result holds for theories with a single time derivative and with even z > 2.
7 CFR 981.7 - Edible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...
Alchemical and structural distribution based representation for universal quantum machine learning
NASA Astrophysics Data System (ADS)
Faber, Felix A.; Christensen, Anders S.; Huang, Bing; von Lilienfeld, O. Anatole
2018-06-01
We introduce a representation of any atom in any chemical environment for the automatized generation of universal kernel ridge regression-based quantum machine learning (QML) models of electronic properties, trained throughout chemical compound space. The representation is based on Gaussian distribution functions, scaled by power laws and explicitly accounting for structural as well as elemental degrees of freedom. The elemental components help us to lower the QML model's learning curve, and, through interpolation across the periodic table, even enable "alchemical extrapolation" to covalent bonding between elements not part of training. This point is demonstrated for the prediction of covalent binding in single, double, and triple bonds among main-group elements as well as for atomization energies in organic molecules. We present numerical evidence that resulting QML energy models, after training on a few thousand random training instances, reach chemical accuracy for out-of-sample compounds. Compound datasets studied include thousands of structurally and compositionally diverse organic molecules, non-covalently bonded protein side-chains, (H2O)40-clusters, and crystalline solids. Learning curves for QML models also indicate competitive predictive power for various other electronic ground state properties of organic molecules, calculated with hybrid density functional theory, including polarizability, heat-capacity, HOMO-LUMO eigenvalues and gap, zero point vibrational energy, dipole moment, and highest vibrational fundamental frequency.
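The kernel ridge regression layer of such QML models is compact enough to sketch; the Gaussian kernel, the descriptors and the hyperparameters below are placeholders rather than the paper's actual representation.

```python
# Minimal kernel ridge regression sketch in the spirit of QML energy models
# (placeholder descriptors and targets; not the paper's representation).
import numpy as np

def gaussian_kernel(A, B, sigma):
    d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def krr_train(X, y, sigma=10.0, lam=1e-8):
    """Solve (K + lam*I) alpha = y for the regression weights alpha."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def krr_predict(X_new, X, alpha, sigma=10.0):
    return gaussian_kernel(X_new, X, sigma) @ alpha

X = np.random.rand(200, 50)    # per-molecule feature vectors (assumed)
y = np.random.rand(200)        # target property, e.g. atomization energy
alpha = krr_train(X, y)
y_hat = krr_predict(np.random.rand(5, 50), X, alpha)
```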
NASA Astrophysics Data System (ADS)
Wirtz, Tim; Kieburg, Mario; Guhr, Thomas
2017-06-01
The correlated Wishart model provides the standard benchmark when analyzing time series of any kind. Unfortunately, the real case, which is the most relevant one in applications, poses serious challenges for analytical calculations. Often these challenges are due to square root singularities which cannot be handled using common random matrix techniques. We present a new way to tackle this issue. Using supersymmetry, we carry out an analytical study which we support by numerical simulations. For large but finite matrix dimensions, we show that statistical properties of the fully correlated real Wishart model generically approach those of a correlated real Wishart model with doubled matrix dimensions and doubly degenerate empirical eigenvalues. This holds for the local and global spectral statistics. With Monte Carlo simulations we show that this is even approximately true for small matrix dimensions. We explicitly investigate the k-point correlation function as well as the distribution of the largest eigenvalue, for which we find a surprisingly compact formula in the doubly degenerate case. Moreover, we show that on the local scale the k-point correlation function exhibits the sine and the Airy kernel in the bulk and at the soft edges, respectively. We also address the positions and the fluctuations of the possible outliers in the data.
Song, Weiran; Wang, Hui; Maguire, Paul; Nibouche, Omar
2018-06-07
Partial Least Squares Discriminant Analysis (PLS-DA) is one of the most effective multivariate analysis methods for spectral data analysis, which extracts latent variables and uses them to predict responses. In particular, it is an effective method for handling high-dimensional and collinear spectral data. However, PLS-DA does not explicitly address data multimodality, i.e., the within-class multimodal distribution of data. In this paper, we present a novel method termed nearest clusters based PLS-DA (NCPLS-DA) for addressing the multimodality and nonlinearity issues explicitly and improving the performance of PLS-DA on spectral data classification. The new method applies hierarchical clustering to divide samples into clusters and calculates the corresponding centre of every cluster. For a given query point, only the clusters whose centres are nearest to the query point are used for PLS-DA. Such a method provides a simple and effective tool for separating multimodal and nonlinear classes into clusters which are locally linear and unimodal. Experimental results on 17 datasets, including 12 UCI and 5 spectral datasets, show that NCPLS-DA can outperform 4 baseline methods, namely, PLS-DA, kernel PLS-DA, local PLS-DA and k-NN, achieving the highest classification accuracy most of the time. Copyright © 2018 Elsevier B.V. All rights reserved.
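A rough sketch of the nearest-clusters idea, under assumed details (Ward linkage, a binary 0/1 response for PLS-DA), might look like this:

```python
# Sketch, assuming Ward hierarchical clustering and a 0/1 class response;
# cluster counts and component numbers are placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cross_decomposition import PLSRegression

def fit_clusters(X, n_clusters=5):
    labels = fcluster(linkage(X, method="ward"), n_clusters, criterion="maxclust")
    centres = np.array([X[labels == c].mean(0) for c in range(1, n_clusters + 1)])
    return labels, centres

def predict_ncplsda(x_query, X, y, labels, centres, n_nearest=2, n_comp=2):
    # Use only samples from the clusters whose centres are nearest to the query
    order = np.argsort(((centres - x_query) ** 2).sum(1))[:n_nearest]
    mask = np.isin(labels, order + 1)          # fcluster labels start at 1
    pls = PLSRegression(n_components=n_comp).fit(X[mask], y[mask])
    return int(pls.predict(x_query[None, :])[0, 0] > 0.5)

X = np.random.rand(300, 40)                       # spectra (placeholder)
y = np.random.randint(0, 2, 300).astype(float)    # class coded as 0/1 response
labels, centres = fit_clusters(X)
pred = predict_ncplsda(X[0], X, y, labels, centres)
```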
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J
The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
Novel procedure for characterizing nonlinear systems with memory: 2017 update
NASA Astrophysics Data System (ADS)
Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.
2017-05-01
The present article discusses novel improvements in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3]. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order and alleviate the Curse of Dimensionality (COD) in order to realize practical nonlinear solutions of scientific and engineering interest.
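As a conceptual illustration (not the NWV algorithm itself), a second-order Volterra kernel can be identified by ordinary least squares as below; the toy channel and memory length are invented, and the regressor count already hints at the dimensionality growth the authors combat at third order.

```python
# Toy second-order Volterra identification by least squares: fit a modeled
# response y(t) to the measured z(t) from the excitation x(t).
import numpy as np

def volterra_regressors(x, memory):
    """Columns [1, x(t-i), x(t-i)x(t-j)] for a second-order model."""
    n = len(x)
    cols = [np.ones(n)]
    lagged = [np.concatenate([np.zeros(i), x[: n - i]]) for i in range(memory)]
    cols += lagged                           # linear (h1) terms
    for i in range(memory):                  # quadratic (h2) terms
        for j in range(i, memory):
            cols.append(lagged[i] * lagged[j])
    return np.column_stack(cols)

x = np.random.randn(2000)                       # excitation (transmitted)
z = 0.8 * x + 0.3 * np.roll(x, 1) * x - 0.1     # toy nonlinear channel + bias
Phi = volterra_regressors(x, memory=4)
h, *_ = np.linalg.lstsq(Phi, z, rcond=None)     # best-fit kernel coefficients
```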
NASA Astrophysics Data System (ADS)
Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.
2018-03-01
The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.
Bandlimited computerized improvements in characterization of nonlinear systems with memory
NASA Astrophysics Data System (ADS)
Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.
2016-05-01
The present article discusses some inroads in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra 'NWV' algorithm, is named for its principal contributors [1], [2], [3] over many years of developmental research. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms on the system are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order in order to combat and reasonably alleviate the curse of dimensionality.
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor); Ahumada, Albert J. (Inventor)
2014-01-01
A method of measuring motion blur is disclosed, comprising obtaining a moving edge temporal profile r₁(k) of an image of a high-contrast moving edge, calculating the masked local contrast m₁(k) for r₁(k) and the masked local contrast m₂(k) for an ideal step edge waveform r₂(k) with the same amplitude as r₁(k), and calculating the measure of motion blur Psi as a difference function. The masked local contrasts are calculated using a set of convolution kernels scaled to simulate the performance of the human visual system, and Psi is measured in units of just-noticeable differences.
Employing OpenCL to Accelerate Ab Initio Calculations on Graphics Processing Units.
Kussmann, Jörg; Ochsenfeld, Christian
2017-06-13
We present an extension of our graphics processing units (GPU)-accelerated quantum chemistry package to employ OpenCL compute kernels, which can be executed on a wide range of computing devices like CPUs, Intel Xeon Phi, and AMD GPUs. Here, we focus on the use of AMD GPUs and discuss differences as compared to CUDA-based calculations on NVIDIA GPUs. First illustrative timings are presented for hybrid density functional theory calculations using serial as well as parallel compute environments. The results show that AMD GPUs are as fast or faster than comparable NVIDIA GPUs and provide a viable alternative for quantum chemical applications.
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel for the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed that the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel for datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.
7 CFR 810.2202 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
7 CFR 51.1415 - Inedible kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...
Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.
Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit
2018-02-13
Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Unconventional protein sources: apricot seed kernels.
Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M
1981-09-01
Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels due to low food consumption because of their bitterness; there was, however, no loss in weight. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels. The Net Protein Ratio values for the last two kernels were nearly equal.
An introduction to kernel-based learning algorithms.
Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B
2001-01-01
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
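The shared idea behind these methods, that algorithms written in terms of inner products can swap in a kernel, is easy to demonstrate with off-the-shelf tools; the toy data below are invented for illustration.

```python
# Toy demonstration: the same RBF kernel powers both an SVM and kernel PCA.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

X = np.random.randn(200, 2)
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # circle: not linearly separable

svm = SVC(kernel="rbf", gamma=1.0).fit(X, y)
embedding = KernelPCA(n_components=2, kernel="rbf", gamma=1.0).fit_transform(X)
print(svm.score(X, y))   # near 1.0: the kernel makes the circle separable
```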
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
Design of CT reconstruction kernel specifically for clinical lung imaging
NASA Astrophysics Data System (ADS)
Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.
2005-04-01
In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new recon kernel. This new kernel consisted of the combination of a low-pass and a high-pass kernel to produce a new reconstruction kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
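The stated construction, blending a low-pass and a high-pass kernel into one response, can be sketched abstractly as below; the filter shapes and the blending weight are invented for illustration, since the clinical kernels themselves are proprietary.

```python
# Abstract sketch of blending two reconstruction filters in frequency space
# (all shapes and the weight alpha are hypothetical placeholders).
import numpy as np

freqs = np.linspace(0.0, 1.0, 256)             # normalized spatial frequency
ramp = freqs                                    # ideal ramp filter (FBP core)
soft = ramp * np.cos(np.pi * freqs / 2.0)       # low-pass apodized ("smooth")
sharp = ramp * (1.0 + 0.8 * freqs**2)           # high-frequency boosted ("sharp")

alpha = 0.5                                     # blending weight (tunable)
hybrid = alpha * soft + (1.0 - alpha) * sharp   # a "Hybrid"-style response
```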
Rashno, Abdolreza; Nazari, Behzad; Koozekanani, Dara D.; Drayna, Paul M.; Sadri, Saeed; Rabbani, Hossein
2017-01-01
A fully automated method based on graph shortest path, graph cut and neutrosophic (NS) sets is presented for fluid segmentation in OCT volumes for exudative age-related macular degeneration (EAMD) subjects. The proposed method includes three main steps: 1) the inner limiting membrane (ILM) and the retinal pigment epithelium (RPE) layers are segmented using proposed methods based on graph shortest path in the NS domain, and a flattened RPE boundary is calculated such that all three types of fluid regions, intra-retinal, sub-retinal and sub-RPE, are located above it; 2) seed points for fluid (object) and tissue (background) are initialized for graph cut by the proposed automated method; 3) a new cost function is proposed in kernel space, and is minimized with max-flow/min-cut algorithms, leading to a binary segmentation. Important properties of the proposed steps are proven and the quantitative performance of each step is analyzed separately. The proposed method is evaluated using a publicly available dataset referred to as Optima and a local dataset from the UMN clinic. For fluid segmentation in 2D individual slices, the proposed method outperforms the previously proposed methods by 18% and 21% with respect to the dice coefficient and sensitivity, respectively, on the Optima dataset, and by 16%, 11% and 12% with respect to the dice coefficient, sensitivity and precision, respectively, on the local UMN dataset. Finally, for 3D fluid volume segmentation, the proposed method achieves a true positive rate (TPR) and false positive rate (FPR) of 90% and 0.74%, respectively, with a correlation of 95% between automated and expert manual segmentations using linear regression analysis. PMID:29059257
Digital PET compliance to EARL accreditation specifications.
Koopman, Daniëlle; Groot Koerkamp, Maureen; Jager, Pieter L; Arkies, Hester; Knollema, Siert; Slump, Cornelis H; Sanches, Pedro G; van Dalen, Jorn A
2017-12-01
Our aim was to evaluate if a recently introduced TOF PET system with digital photon counting technology (Philips Healthcare), potentially providing an improved image quality over analogue systems, can fulfil EANM Research Ltd (EARL) accreditation specifications for tumour imaging with FDG-PET/CT. We have performed a phantom study on a digital TOF PET system using a NEMA NU2-2001 image quality phantom with six fillable spheres. Phantom preparation and PET/CT acquisition were performed according to the European Association of Nuclear Medicine (EANM) guidelines. We made list-mode ordered-subsets expectation maximization (OSEM) TOF PET reconstructions, with default settings, three voxel sizes (4 × 4 × 4 mm³, 2 × 2 × 2 mm³ and 1 × 1 × 1 mm³) and with/without point spread function (PSF) modelling. On each PET dataset, mean and maximum activity concentration recovery coefficients (RCmean and RCmax) were calculated for all phantom spheres and compared to EARL accreditation specifications. The RCs of the 4 × 4 × 4 mm³ voxel dataset without PSF modelling proved closest to EARL specifications. Next, we added a Gaussian post-smoothing filter with varying kernel widths of 1-7 mm. EARL specifications were fulfilled when using kernel widths of 2 to 4 mm. TOF PET using digital photon counting technology fulfils EARL accreditation specifications for FDG-PET/CT tumour imaging when using an OSEM reconstruction with 4 × 4 × 4 mm³ voxels, no PSF modelling and including a Gaussian post-smoothing filter of 2 to 4 mm.
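Assuming the quoted kernel width is a Gaussian FWHM in millimetres, the post-smoothing step might be reproduced as follows (volume and values are placeholders):

```python
# Post-reconstruction Gaussian smoothing: convert a FWHM in mm to a sigma in
# voxels and filter the PET volume (assumes FWHM is what "kernel width" means).
import numpy as np
from scipy.ndimage import gaussian_filter

voxel_mm = 4.0                       # 4 x 4 x 4 mm voxels (best case above)
fwhm_mm = 3.0                        # within the compliant 2-4 mm range
sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm

pet = np.random.rand(64, 64, 64)     # placeholder reconstructed volume
pet_smoothed = gaussian_filter(pet, sigma=sigma_vox)
```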
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because the kernel matrix grows ever larger as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
The intrinsic matter bispectrum in ΛCDM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tram, Thomas; Crittenden, Robert; Koyama, Kazuya
2016-05-01
We present a fully relativistic calculation of the matter bispectrum at second order in cosmological perturbation theory assuming a Gaussian primordial curvature perturbation. For the first time we perform a full numerical integration of the bispectrum for both baryons and cold dark matter using the second-order Einstein-Boltzmann code, SONG. We review previous analytical results and provide an improved analytic approximation for the second-order kernel in Poisson gauge which incorporates Newtonian nonlinear evolution, relativistic initial conditions, the effect of radiation at early times and the cosmological constant at late times. Our improved kernel provides a percent-level fit to the full numerical result at late times for most configurations, including both equilateral shapes and the squeezed limit. We show that baryon acoustic oscillations leave an imprint in the matter bispectrum, making a significant impact on squeezed shapes.
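For orientation, the familiar Newtonian tree-level kernel F2, which the paper's relativistic Poisson-gauge kernel generalizes, is simple to write down:

```python
# The standard Newtonian second-order density kernel F2 (tree level).
import numpy as np

def F2(k1, k2, mu):
    """mu is the cosine of the angle between wavevectors k1 and k2."""
    return 5.0 / 7.0 + 0.5 * mu * (k1 / k2 + k2 / k1) + (2.0 / 7.0) * mu**2

# Equilateral configuration: equal magnitudes, 120-degree angle (mu = -1/2)
print(F2(0.1, 0.1, -0.5))   # 5/7 - 1/2 + 1/14 = 0.2857...
```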
Online polarimetry of the Nuclotron internal deuteron and proton beams
NASA Astrophysics Data System (ADS)
Isupov, A. Yu
2017-12-01
The spin studies at the Nuclotron require fast and precise determination of the deuteron and proton beam polarization. For these purposes, a new powerful VME-based data acquisition (DAQ) system has been designed for the Deuteron Spin Structure setup placed at the Nuclotron Internal Target Station. The DAQ system is built using the netgraph-based data acquisition and processing framework ngdp. The software dealing with the VME hardware is a set of netgraph nodes in the form of loadable kernel modules, and thus works in the operating system kernel context. The implementation-specific nodes and the user-context utilities are described. The online representation of events by ROOT classes allows us to generalize the code for histogram filling and polarization calculations. The DAQ system was successfully used during the 53rd and 54th Nuclotron runs, and its suitability for online polarimetry is demonstrated.
An efficient parallel algorithm for matrix-vector multiplication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, B.; Leland, R.; Plimpton, S.
The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
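A rough mpi4py sketch of a 1-D row-block version of this computation is shown below; it conveys the gather-then-multiply structure but not the hypercube-specific communication scheme that gives the quoted O(n/√p + log(p)) cost. Run under an MPI launcher, e.g. mpirun.

```python
# Row-block parallel y = A x sketch with mpi4py (1-D decomposition only).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
p, rank = comm.Get_size(), comm.Get_rank()

n = 512                               # assume p divides n, for brevity
rows = n // p
A_local = np.random.rand(rows, n)     # this rank's block of matrix rows
x_local = np.random.rand(rows)        # this rank's slice of the input vector
x = np.empty(n)

comm.Allgather(x_local, x)            # every rank assembles the full x
y_local = A_local @ x                 # local slice of the result y = A x
```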
Graph embedding and extensions: a general framework for dimensionality reduction.
Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen
2007-01-01
Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions.
Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül
2015-01-01
In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identification of the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10 in increments of 4 or 5 points at a time based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimations were evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that smaller number of sampling points resulted in loss of information in regard to spatial correlation structure in DO. The minimum representative sampling points for PDR was 35. Efficacy of the sampling location selection method was tested against the networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.
Mueck, F G; Michael, L; Deak, Z; Scherr, M K; Maxien, D; Geyer, L L; Reiser, M; Wirth, S
2013-07-01
To compare the image quality in dose-reduced 64-row CT of the chest at different levels of adaptive statistical iterative reconstruction (ASIR) to full-dose baseline examinations reconstructed solely with filtered back projection (FBP) in a realistic upgrade scenario. A waiver of consent was granted by the institutional review board (IRB). The noise index (NI) relates to the standard deviation of Hounsfield units in a water phantom. Baseline exams of the chest (NI = 29; LightSpeed VCT XT, GE Healthcare) were intra-individually compared to follow-up studies on a CT with ASIR after system upgrade (NI = 45; Discovery HD750, GE Healthcare), n = 46. Images were calculated in slice and volume mode with ASIR levels of 0-100% in the standard and lung kernel. Three radiologists independently compared the image quality to the corresponding full-dose baseline examinations (-2: diagnostically inferior, -1: inferior, 0: equal, +1: superior, +2: diagnostically superior). Statistical analysis used Wilcoxon's test, the Mann-Whitney U test and the intraclass correlation coefficient (ICC). The mean CTDIvol decreased by 53% from the FBP baseline to 8.0 ± 2.3 mGy for ASIR follow-ups; p < 0.001. The ICC was 0.70. Regarding the standard kernel, the image quality in dose-reduced studies was comparable to the baseline at ASIR 70% in volume mode (-0.07 ± 0.29, p = 0.29). Concerning the lung kernel, every ASIR level outperformed the baseline image quality (p < 0.001), with ASIR 30% rated best (slice: 0.70 ± 0.6, volume: 0.74 ± 0.61). The vendors' recommendation of 50% ASIR is fair. In detail, ASIR 70% in volume mode for the standard kernel and ASIR 30% for the lung kernel performed best, allowing for a dose reduction of approximately 50%. © Georg Thieme Verlag KG Stuttgart · New York.
Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.
Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke
2015-04-01
Lung dynamic MRI (dMRI) has emerged to be an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring could adversely affect the image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty continuous 2D coronal dMRI frames using a steady state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, low-rank or sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequences using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian matrix filtered images. Accuracy of automated extraction was calculated using manual segmentation of the blood vessels as the ground truth. In the pilot study, LDDL based on the blurring kernel estimated from the sparse component led to performance superior to the other ways of kernel estimation. LDDL consistently improved image contrast and fine feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth. Image blurring in dMRI images can be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.
Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E
2015-01-07
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners - the HRRT and Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage in performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeter diameter and the background was observed in LOR-PDF reconstruction than in other methods. PMID:25490063
Alaska/Yukon Geoid Improvement by a Data-Driven Stokes's Kernel Modification Approach
NASA Astrophysics Data System (ADS)
Li, Xiaopeng; Roman, Daniel R.
2015-04-01
Geoid modeling over Alaska (USA) and Yukon (Canada) is a trans-national problem that faces great challenges, primarily due to inhomogeneous surface gravity data (Saleh et al., 2013) and dynamic geology (Freymueller et al., 2008), as well as the region's complex geological rheology. A previous study (Roman and Li, 2014) used updated satellite models (Bruinsma et al., 2013) and newly acquired aerogravity data from the GRAV-D project (Smith, 2007) to capture gravity field changes in the target areas, primarily at middle-to-long wavelengths. In CONUS, the geoid model was largely improved. However, the precision of the resulting geoid model in Alaska was still at the decimeter level: 19 cm at the 32 tide bench marks and 24 cm at the 202 GPS/leveling bench marks, giving a total of 23.8 cm over all of these calibrated surface control points after the datum bias was removed. Conventional kernel modification methods in this area (Li and Wang, 2011) had limited effect on improving the precision of the geoid models. To compensate for the geoid misfits, a new Stokes's kernel modification method based on a data-driven technique is presented in this study. First, the method was tested on simulated data sets (Fig. 1), where the geoid errors were reduced by two orders of magnitude (Fig. 2). For the real data sets, some iteration steps are required to overcome the rank-deficiency problem caused by the limited control data, which are irregularly distributed in the target area. For instance, after 3 iterations, the standard deviation dropped by about 2.7 cm (Fig. 3). Modification at other critical degrees can further minimize the geoid model misfits caused either by the gravity error or by the remaining datum error in the control points.
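The data-driven idea can be caricatured as a small least-squares problem: choose degree-wise modification coefficients so that the per-degree contributions best explain the observed misfits at the control points. Everything in this sketch is an assumption for illustration: the design matrix A of per-degree geoid contributions, the misfit vector r, and the ridge damping standing in for the authors' iterative handling of rank deficiency.

```python
import numpy as np

def fit_modification(A, r, damping=1e-3):
    """Least-squares modification coefficients t minimizing ||r - A t||,
    with ridge damping against rank deficiency.
    A: (n_points, n_degrees) per-degree geoid contributions (assumed).
    r: (n_points,) misfits at GPS/leveling control points (assumed)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + damping * np.eye(n), A.T @ r)

# hypothetical usage:
# t = fit_modification(A, r)
# residual = r - A @ t  # misfits remaining after kernel modification
```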
Liu, Shelley H; Bobb, Jennifer F; Lee, Kyu Ha; Gennings, Chris; Claus Henn, Birgit; Bellinger, David; Austin, Christine; Schnaas, Lourdes; Tellez-Rojo, Martha M; Hu, Howard; Wright, Robert O; Arora, Manish; Coull, Brent A
2018-07-01
The impact of neurotoxic chemical mixtures on children's health is a critical public health concern. It is well known that during early life, toxic exposures may impact cognitive function during critical time intervals of increased vulnerability, known as windows of susceptibility. Knowledge of time windows of susceptibility can help inform treatment and prevention strategies, as chemical mixtures may affect a developmental process that is operating at a specific life phase. There are several statistical challenges in estimating the health effects of time-varying exposures to multi-pollutant mixtures, such as multi-collinearity among the exposures both within and across time points, and complex exposure-response relationships. To address these concerns, we develop a flexible statistical method, called lagged kernel machine regression (LKMR). LKMR identifies critical exposure windows of chemical mixtures and accounts for complex non-linear and non-additive effects of the mixture at any given exposure window. Specifically, LKMR estimates how the effects of a mixture of exposures change with the exposure time window, using a Bayesian formulation of a grouped, fused lasso penalty within a kernel machine regression (KMR) framework. A simulation study demonstrates the performance of LKMR under realistic exposure-response scenarios and shows large gains over approaches that consider each time window separately, particularly when serial correlation among the time-varying exposures is high. Furthermore, LKMR shows gains over another approach that inputs all time-specific chemical concentrations together into a single KMR. We apply LKMR to estimate associations between neurodevelopment and metal mixtures in Early Life Exposures in Mexico and Neurotoxicology, a prospective cohort study of child health in Mexico City.
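For orientation, a single-window kernel machine regression can be written in a few lines; the sketch below fits h by Gaussian kernel ridge regression at one exposure window. LKMR's distinctive ingredient, the Bayesian grouped fused lasso that ties the window-specific fits together, is not reproduced here.

```python
import numpy as np

def gaussian_kernel(X, Y, rho=1.0):
    """Gaussian kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / rho)

def kmr_fit(X, y, lam=1.0, rho=1.0):
    """Kernel ridge estimate of h at the training points:
    h = K (K + lam I)^{-1} y."""
    K = gaussian_kernel(X, X, rho)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K @ alpha

# hypothetical usage: exposures for one time window (200 subjects x 4 metals)
# X = np.random.randn(200, 4)
# y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * np.random.randn(200)
# h_hat = kmr_fit(X, y)
```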
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the acquired images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and the dark channel, and the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves the quality evaluation indexes.
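One ingredient that is easy to isolate is the dark channel used as a sparsity prior during kernel estimation. The sketch below computes it under assumed conventions (an RGB image scaled to [0, 1], a hypothetical 15-pixel patch); the full L0-regularized alternating kernel estimation and the Wiener-filter recovery are beyond this snippet.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel minimum over
    the color channels, then over a local square patch."""
    dc = img.min(axis=2)                   # min over channels
    return minimum_filter(dc, size=patch)  # min over patch neighborhood
```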
NASA Astrophysics Data System (ADS)
Askari, Omid; Beretta, Gian Paolo; Eisazadeh-Far, Kian; Metghalchi, Hameed
2016-07-01
Thermodynamic properties of hydrocarbon/air plasma mixtures at ultra-high temperatures must be calculated precisely because of their important influence on flame kernel formation and propagation in combusting flows and spark discharge applications. A new algorithm based on the complete chemical equilibrium assumption is developed to calculate the ultra-high temperature plasma composition and thermodynamic properties, including enthalpy, entropy, Gibbs free energy, specific heat at constant pressure, specific heat ratio, speed of sound, mean molar mass, and degree of ionization. The method is applied to compute the thermodynamic properties of H2/air and CH4/air plasma mixtures for different temperatures (1000-100,000 K), different pressures (10⁻⁶-100 atm), and different fuel/air equivalence ratios within the flammability limit. In calculating the individual thermodynamic properties of the atomic species needed to compute the complete equilibrium composition, the Debye-Hückel cutoff criterion has been used to terminate the series expression of the electronic partition function, so as to capture the pressure-induced lowering of the ionization potential and the strong dependence of the atomic species' thermodynamic properties on the electronic partition function and on the number of energy levels taken into account. Partition functions have been calculated using tabulated data for the available atomic energy levels. The Rydberg and Ritz extrapolation and interpolation laws have been used for energy levels that are not observed. The calculated plasma properties are then presented as functions of temperature, pressure, and equivalence ratio, in terms of a new set of thermodynamically self-consistent correlations that are shown to provide very accurate fits suitable for efficient use in CFD simulations. Comparisons with existing data for air plasma show excellent agreement.
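The cutoff idea can be illustrated directly: sum the electronic partition function only over levels lying below the ionization limit after the Debye-Hückel lowering is applied. A minimal sketch follows; the toy energy levels in the usage line are placeholders, not tabulated atomic data.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def partition_function(g, E, T, ionization_eV, lowering_eV):
    """Electronic partition function sum(g_i * exp(-E_i / kT)) over the
    levels below the lowered ionization limit; g and E are degeneracies
    and level energies (eV)."""
    g = np.asarray(g, dtype=float)
    E = np.asarray(E, dtype=float)
    keep = E < (ionization_eV - lowering_eV)  # cutoff criterion
    return np.sum(g[keep] * np.exp(-E[keep] / (K_B * T)))

# hypothetical usage for a 3-level toy atom at 20,000 K:
# Z = partition_function([2, 6, 10], [0.0, 10.2, 12.1], 2.0e4, 13.6, 0.3)
```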
Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima Blume)
NASA Astrophysics Data System (ADS)
Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.
2016-08-01
Anthraquinones (AQS) represent a group of secondary metabolites in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by an enzymatic browning reaction in Chinese chestnut kernels. To find out whether a non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while samples from both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of the AQS in the browned kernels.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
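For readers unfamiliar with the kernel method, the sketch below builds an MR-derived kernel matrix K so that the PET image is represented as x = K @ alpha. The kNN sparsification and the Gaussian width are conventional choices assumed here; the spectral temporal basis functions of the paper would factor the time dimension analogously.

```python
import numpy as np

def kernel_matrix(features, sigma=1.0, n_neighbors=8):
    """Row-normalized Gaussian kernel between MR-derived feature vectors
    (e.g. local patches), restricted to the k nearest neighbors per row.
    features: (n_voxels, n_features)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # zero all but the n_neighbors largest entries per row (kNN sparsification)
    idx = np.argsort(K, axis=1)[:, :-n_neighbors]
    np.put_along_axis(K, idx, 0.0, axis=1)
    return K / K.sum(axis=1, keepdims=True)
```

The reconstruction then iterates ML-EM on the coefficients alpha rather than on the image directly, and the final image is K @ alpha, with kinetic fitting applied afterwards.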
Green's functions for dislocations in bonded strips and related crack problems
NASA Technical Reports Server (NTRS)
Ballarini, R.; Luo, H. A.
1990-01-01
Green's functions are derived for the plane elastostatics problem of a dislocation in a bimaterial strip. Using these fundamental solutions as kernels, various problems involving cracks in a bimaterial strip are analyzed using singular integral equations. For each problem considered, stress intensity factors are calculated for several combinations of the parameters which describe loading, geometry and material mismatch.
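Problems of this type typically reduce to a Cauchy singular integral equation of the first kind, which the Gauss-Chebyshev method discretizes cleanly. The sketch below solves such an equation for an illustrative regular kernel k(x, t) and right-hand side f(x); the actual bimaterial-strip kernels of the paper are far more involved.

```python
import numpy as np

def solve_sie(kernel, rhs, n=64):
    """Solve (1/pi) * int_{-1}^{1} [phi(t)/(t - x) + k(x, t) phi(t)] dt = f(x)
    with phi(t) = g(t)/sqrt(1 - t^2), by Gauss-Chebyshev quadrature.
    Returns the nodes t and the regular part g(t)."""
    i = np.arange(1, n + 1)
    t = np.cos(np.pi * (2 * i - 1) / (2 * n))   # integration nodes
    x = np.cos(np.pi * np.arange(1, n) / n)     # collocation points (interlaced)
    A = (1.0 / n) * (1.0 / (t[None, :] - x[:, None])
                     + kernel(x[:, None], t[None, :]))
    # closure condition sum g(t_i) = 0 (single-valued displacements)
    A = np.vstack([A, np.ones((1, n)) / n])
    b = np.append(rhs(x), 0.0)
    return t, np.linalg.solve(A, b)

# hypothetical usage with a zero regular kernel and f(x) = x:
# t, g = solve_sie(lambda x, t: 0.0 * x * t, lambda x: x)
```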