Sample records for point kernel gamma

  1. Point kernel calculations of skyshine exposure rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roseberry, M.L.; Shultis, J.K.

    1982-02-01

    A simple point kernel model is presented for the calculation of skyshine exposure rates arising from the atmospheric reflection of gamma radiation produced by a vertically collimated or a shielded point source. This model is shown to be in good agreement with benchmark experimental data from a ⁶⁰Co source for distances out to 700 m.
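
    The point-kernel estimate that underlies models like this one can be sketched as follows. This is a minimal illustration of the generic point-kernel dose-rate expression (inverse-square spreading, exponential attenuation, buildup), not the authors' skyshine model; the function name and units are assumptions.

```python
import math

def point_kernel_rate(source_strength, mu, r, buildup=1.0):
    """Generic point-kernel response at distance r from an isotropic
    point source.

    source_strength : photons emitted per second (illustrative units)
    mu              : linear attenuation coefficient of the medium (1/m)
    r               : source-to-detector distance (m)
    buildup         : dimensionless buildup factor B(mu*r)
    """
    # Inverse-square geometric spreading times exponential attenuation,
    # scaled by the buildup factor for scattered radiation.
    return buildup * source_strength * math.exp(-mu * r) / (4.0 * math.pi * r * r)
```

    A skyshine model adds an atmospheric-reflection term on top of this kernel; converting the uncollided flux to an exposure rate requires an energy-dependent response factor not shown here.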

  2. GRAYSKY-A new gamma-ray skyshine code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witts, D.J.; Twardowski, T.; Watmough, M.H.

    1993-01-01

    This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors.

  3. Total Ambient Dose Equivalent Buildup Factor Determination for NBS04 Concrete.

    PubMed

    Duckic, Paulina; Hayes, Robert B

    2018-06-01

    Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation in a shielding material. The accuracy of the point kernel method is strongly affected by how well the analyzed parameters correspond to the experimental configuration, a correspondence this work attempts to simplify. The point kernel method has not found widespread practical use for neutron shielding calculations due to the complex neutron transport behavior through shielding materials (i.e. the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as the non-linear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma ray transmission factors. The neutron and secondary gamma ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties. In this work, a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content was conducted. The analysis showed a significant impact of varying water content in concrete on both neutron transmission factors and total buildup factors. Finally, support vector regression, a machine learning technique, was employed to build a model from the calculated data for computing buildup factors. The developed model can predict most of the data within 20% relative error.
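
    The relationship between a transmission factor and a buildup factor can be sketched as follows. This assumes the simplified convention that the buildup factor is the ratio of total transmitted dose to the uncollided exp(-μt) component; it is an illustration only, not the paper's MCNP6-based procedure.

```python
import math

def buildup_factor(total_transmission, mu_t):
    """Buildup factor B from a total transmission factor, assuming the
    uncollided component is exp(-mu*t) for optical thickness mu_t.
    B = 1 means no scattered contribution reaches the detector."""
    return total_transmission / math.exp(-mu_t)
```
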

  4. Kernel analysis in TeV gamma-ray selection

    NASA Astrophysics Data System (ADS)

    Moriarty, P.; Samuelson, F. W.

    2000-06-01

    We discuss the use of kernel analysis as a technique for selecting gamma-ray candidates in Atmospheric Cherenkov astronomy. The method is applied to observations of the Crab Nebula and Markarian 501 recorded with the Whipple 10 m Atmospheric Cherenkov imaging system, and the results are compared with the standard Supercuts analysis. Since kernel analysis is computationally intensive, we examine approaches to reducing the computational load. Extension of the technique to estimate the energy of the gamma-ray primary is considered.

  5. Improved response functions for gamma-ray skyshine analyses

    NASA Astrophysics Data System (ADS)

    Shultis, J. K.; Faw, R. E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
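
    The fitting step described here can be sketched as follows. The three-parameter form R(x) = A·xᵃ·exp(-bx) is an assumption made for illustration (the paper's actual formula may differ); taking logarithms makes the fit a linear least-squares problem.

```python
import numpy as np

def fit_lbrf(x, R):
    """Fit the assumed form R(x) = A * x**a * exp(-b*x) by linear least
    squares on ln R: ln R = ln A + a*ln x - b*x."""
    M = np.column_stack([np.ones_like(x), np.log(x), -x])
    coef, *_ = np.linalg.lstsq(M, np.log(R), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]   # A, a, b

def lbrf(x, A, a, b):
    """Evaluate the assumed line-beam response function approximation."""
    return A * x**a * np.exp(-b * x)
```

    In practice one would fit a separate parameter set for each source energy and beam angle, then interpolate, which is how tabulated response-function fits are typically used.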

  6. Improved response functions for gamma-ray skyshine analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This reevaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results compared to previous calculations and benchmark data.

  7. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
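
    Biasing the distance-to-collision kernel can be sketched as follows. This is a minimal one-dimensional illustration of the exponential transform (the paper's contribution, biasing the anisotropic scattering kernel as well, is not shown); the function name and the stretching parameter p are ours.

```python
import math
import random

def biased_free_flight(mu, p, rng):
    """Sample a distance-to-collision from the biased kernel
    mu* exp(-mu* s) with mu* = mu*(1 - p), 0 <= p < 1, and return
    (distance, weight). The weight restores the analog kernel
    mu exp(-mu s), so weighted tallies stay unbiased while particles
    are pushed deeper into the shield."""
    mu_star = mu * (1.0 - p)                      # stretched mean free path
    s = -math.log(1.0 - rng.random()) / mu_star   # inverse-CDF sampling
    w = (mu / mu_star) * math.exp(-(mu - mu_star) * s)
    return s, w
```

    With p > 0 the sampled flights are longer on average, which is what makes deep-penetration tallies converge faster, at the cost of weight fluctuation.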

  8. Assessing opportunities for physical activity in the built environment of children: interrelation between kernel density and neighborhood scale.

    PubMed

    Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris

    2015-12-22

    Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity and kernel approaches based on fixed bandwidths, cross-validated bandwidths including isotropic and anisotropic kernel functions and considering adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on moderate-to-vigorous physical activity (MVPA) of 400 2- to 9.9-year old children who participated in the IDEFICS study. Models were stratified by sex and age groups, i.e. pre-school children (2 to <6 years) and school children (6-9.9 years), and were adjusted for age, body mass index (BMI), education and safety concerns of parents, season and valid weartime of accelerometers. Association between intensity measures and MVPA strongly differed by network-distance, with stronger effects found for larger network-distances. Simple intensity revealed smaller effect estimates and smaller goodness-of-fit compared to kernel approaches. 
    The smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found a strong variation in the association between the built environment and PA of children depending on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial scales and kernel intensity methods might reduce methodological limitations in assessing opportunities for PA in the built environment.
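
    The contrast between the two intensity measures can be sketched as follows. This shows the simple count-within-buffer measure next to a fixed-bandwidth isotropic Gaussian kernel intensity (the study's cross-validated and adaptive bandwidths, and network rather than Euclidean distances, are not reproduced here).

```python
import math

def simple_intensity(points, center, radius):
    """Simple intensity: raw count of point features within a buffer."""
    return sum(1 for p in points if math.dist(p, center) <= radius)

def kernel_intensity(points, center, bandwidth):
    """Fixed-bandwidth isotropic Gaussian-kernel intensity: nearby
    features contribute more, and the hard buffer edge disappears."""
    norm = 2.0 * math.pi * bandwidth ** 2
    return sum(math.exp(-0.5 * (math.dist(p, center) / bandwidth) ** 2)
               for p in points) / norm
```
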

  9. Non-destructive in-situ method and apparatus for determining radionuclide depth in media

    DOEpatents

    Xu, X. George; Naessens, Edward P.

    2003-01-01

    A non-destructive method and apparatus based on in-situ gamma spectroscopy is used to determine the depth of radiological contamination in media such as concrete. An algorithm, the Gamma Penetration Depth Unfolding Algorithm (GPDUA), uses point kernel techniques to predict the depth of contamination based on uncollided peak information from the in-situ gamma spectroscopy. The invention is better, faster, safer, and cheaper than current decontamination and decommissioning practices, which are slow, rough, and unsafe. The invention uses a priori knowledge of the contaminant source distribution. The applicable radiological contaminants of interest are any isotopes that emit two or more gamma rays per disintegration or isotopes that emit a single gamma ray but have gamma-emitting progeny in secular equilibrium with the parent (e.g., ⁶⁰Co, ²³⁵U, and ¹³⁷Cs to name a few). The predicted depths from the GPDUA algorithm using Monte Carlo N-Particle Transport Code (MCNP) simulations and laboratory experiments using ⁶⁰Co have consistently produced predicted depths within 20% of the actual or known depth.
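
    The core idea of depth unfolding from two gamma lines can be sketched as follows. Because the two lines of one nuclide attenuate differently, their measured peak ratio encodes the overburden depth. This is a simplified illustration of the ratio principle, not the patented GPDUA itself; the function and parameter names are ours.

```python
import math

def contamination_depth(ratio_measured, ratio_source, mu1, mu2):
    """Depth d of a buried source from the ratio of two uncollided
    gamma peaks, using ratio_measured = ratio_source * exp(-(mu1-mu2)*d).

    ratio_source : emission-rate ratio of the two lines (known a priori)
    mu1, mu2     : attenuation coefficients of the medium at the two
                   energies (1/cm); mu1 != mu2
    """
    return math.log(ratio_source / ratio_measured) / (mu1 - mu2)
```
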

  10. Symmetry preserving truncations of the gap and Bethe-Salpeter equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binosi, Daniele; Chang, Lei; Papavassiliou, Joannis

    2016-05-01

    Ward-Green-Takahashi (WGT) identities play a crucial role in hadron physics, e.g. imposing stringent relationships between the kernels of the one- and two-body problems, which must be preserved in any veracious treatment of mesons as bound states. In this connection, one may view the dressed gluon-quark vertex, Gamma(alpha)(mu), as fundamental. We use a novel representation of Gamma(alpha)(mu), in terms of the gluon-quark scattering matrix, to develop a method capable of elucidating the unique quark-antiquark Bethe-Salpeter kernel, K, that is symmetry consistent with a given quark gap equation. A strength of the scheme is its ability to expose and capitalize on graphic symmetries within the kernels. This is displayed in an analysis that reveals the origin of H-diagrams in K, which are two-particle-irreducible contributions, generated as two-loop diagrams involving the three-gluon vertex, that cannot be absorbed as a dressing of Gamma(alpha)(mu) in a Bethe-Salpeter kernel nor expressed as a member of the class of crossed-box diagrams. Thus, there are no general circumstances under which the WGT identities essential for a valid description of mesons can be preserved by a Bethe-Salpeter kernel obtained simply by dressing both gluon-quark vertices in a ladderlike truncation; and, moreover, adding any number of similarly dressed crossed-box diagrams cannot improve the situation.

  11. Support vector machines for prediction and analysis of beta and gamma-turns in proteins.

    PubMed

    Pham, Tho Hoan; Satou, Kenji; Ho, Tu Bao

    2005-04-01

    Tight turns have long been recognized as one of the three important features of proteins, together with the alpha-helix and beta-sheet. Tight turns play an important role in globular proteins from both the structural and functional points of view. More than 90% of tight turns are beta-turns and most of the rest are gamma-turns. Analysis and prediction of beta-turns and gamma-turns is very useful for the design of new molecules such as drugs, pesticides, and antigens. In this paper we investigated two aspects of applying the support vector machine (SVM), a promising machine learning method for bioinformatics, to prediction and analysis of beta-turns and gamma-turns. First, we developed two SVM-based methods, called BTSVM and GTSVM, which predict beta-turns and gamma-turns in a protein from its sequence. When compared with other methods, BTSVM has a superior performance and GTSVM is competitive. Second, we used SVMs with a linear kernel to estimate the support of amino acids for the formation of beta-turns and gamma-turns depending on their position in a protein. Our analysis results are more comprehensive and easier to use than previous results in designing turns in proteins.

  12. Release of RANKERN 16A

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Murphy, Christophe; Dobson, Geoff

    2017-09-01

    RANKERN 16 is the latest version of the point-kernel gamma radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS Software Service. RANKERN is well established in the UK shielding community for radiation shielding and dosimetry assessments. Many important developments have been made available to users in this latest release of RANKERN. The existing general 3D geometry capability has been extended to include import of CAD files in the IGES format providing efficient full CAD modelling capability without geometric approximation. Import of tetrahedral mesh and polygon surface formats has also been provided. An efficient voxel geometry type has been added suitable for representing CT data. There have been numerous input syntax enhancements and an extended actinide gamma source library. This paper describes some of the new features and compares the performance of the new geometry capabilities.

  13. Analysis of nonlocal neural fields for both general and gamma-distributed connectivities

    NASA Astrophysics Data System (ADS)

    Hutt, Axel; Atay, Fatihcan M.

    2005-04-01

    This work studies the stability of equilibria in spatially extended neuronal ensembles. We first derive the model equation from statistical properties of the neuron population. The obtained integro-differential equation includes synaptic and space-dependent transmission delay for both general and gamma-distributed synaptic connectivities. The latter connectivity type reveals infinite, finite, and vanishing self-connectivities. The work derives conditions for stationary and nonstationary instabilities for both kernel types. In addition, a nonlinear analysis for general kernels yields the order parameter equation of the Turing instability. To compare the results to findings for partial differential equations (PDEs), two typical PDE-types are derived from the examined model equation, namely the general reaction-diffusion equation and the Swift-Hohenberg equation. Hence, the discussed integro-differential equation generalizes these PDEs. In the case of the gamma-distributed kernels, the stability conditions are formulated in terms of the mean excitatory and inhibitory interaction ranges. As a novel finding, we obtain Turing instabilities in fields with local inhibition-lateral excitation, while wave instabilities occur in fields with local excitation and lateral inhibition. Numerical simulations support the analytical results.

  14. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M

    Purpose: To implement a back-projection algorithm for 2D dose reconstructions for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to calculate the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. Gamma index evaluation between the 2D reconstructed dose and that calculated with a treatment planning system (TPS) was done. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has a radial symmetry and was calculated with a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µAC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method could be improved by modifying the algorithm to compare lower isodose curves.
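
    The gamma-index evaluation used here can be sketched in one dimension as follows. For each reference point, the metric combines normalized dose difference and distance-to-agreement and takes the minimum over the evaluated distribution; a point passes if γ ≤ 1. This is a bare sketch with global 3%/3 mm criteria; clinical tools interpolate the evaluated dose and work in 2D/3D.

```python
import math

def gamma_index_1d(dose_ref, dose_eval, coords, dd=0.03, dta=3.0):
    """Global 1D gamma index: dd is the fractional dose criterion
    (relative to the reference maximum), dta the distance criterion
    in the same units as coords (e.g. mm)."""
    d_max = max(dose_ref)
    gammas = []
    for x_r, d_r in zip(coords, dose_ref):
        best = min(((x_e - x_r) / dta) ** 2 +
                   ((d_e - d_r) / (dd * d_max)) ** 2
                   for x_e, d_e in zip(coords, dose_eval))
        gammas.append(math.sqrt(best))
    return gammas
```

    The pass rate reported in such abstracts is simply the fraction of points with γ ≤ 1.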

  15. Oil point and mechanical behaviour of oil palm kernels in linear compression

    NASA Astrophysics Data System (ADS)

    Kabutey, Abraham; Herak, David; Choteborsky, Rostislav; Mizera, Čestmír; Sigalingging, Riswanti; Akangbe, Olaosebikan Layi

    2017-07-01

    The study described the oil point and mechanical properties of roasted and unroasted bulk oil palm kernels under compression loading. The information available in the literature is very limited. A universal compression testing machine and a vessel of 60 mm diameter with a plunger were used, applying a maximum force of 100 kN and speeds ranging from 5 to 25 mm min⁻¹. The initial pressing height of the bulk kernels was 40 mm. The oil point was determined by a litmus test for each deformation level of 5, 10, 15, 20, and 25 mm at the minimum speed of 5 mm min⁻¹. The measured parameters were the deformation, deformation energy, oil yield, oil point strain and oil point pressure. Clearly, the roasted bulk kernels required less deformation energy than the unroasted kernels for recovering the kernel oil. However, neither batch of kernels was permanently deformed. The average oil point strain was determined at 0.57. The study is an essential contribution to pursuing innovative methods for processing palm kernel oil in rural areas of developing countries.

  16. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our works to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
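
    The Nyström construction being sampled for can be sketched as follows: given a landmark subset, the full kernel matrix is approximated as K ≈ C W⁺ Cᵀ. The paper's contribution is choosing the landmarks by kernel k-means; in this sketch any subset works, and the RBF kernel and parameter names are our assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, landmarks, gamma=1.0):
    """Nystrom approximation K ~= C @ pinv(W) @ C.T, where C holds
    kernel values between all points and the landmarks, and W is the
    landmark-landmark kernel block."""
    C = rbf(X, landmarks, gamma)
    W = rbf(landmarks, landmarks, gamma)
    return C @ np.linalg.pinv(W) @ C.T
```

    When the landmark set is the whole data set the approximation is exact; good sampling (e.g. kernel k-means centers) keeps the Frobenius error small with far fewer landmarks.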

  17. Deletion mutagenesis identifies a haploinsufficient role for gamma-zein in opaque-2 endosperm modification

    USDA-ARS?s Scientific Manuscript database

    Quality Protein Maize (QPM) is a hard kernel variant of the high-lysine mutant, opaque-2. Using gamma irradiation, we created opaque QPM variants to identify opaque-2 modifier genes and to investigate deletion mutagenesis combined with Illumina sequencing as a maize functional genomics tool. A K0326...

  18. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
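
    The regression idea for out-of-sample extension can be sketched as follows: regress the columns of the learned nonparametric kernel matrix on a base kernel over the training points, then evaluate the fitted map at a new point. This is a simplified stand-in for the paper's RKHS regression; the base RBF kernel and the parameters gamma and lam are assumptions.

```python
import numpy as np

def extend_kernel(K_learned, X, x_new, gamma=0.5, lam=1e-10):
    """Approximate the learned-kernel values between x_new and the
    training points X by ridge regression of K_learned's columns on a
    base RBF kernel (lam is a small regularizer)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K_base = rbf(X, X)
    # Alpha maps base-kernel rows to learned-kernel rows: K_base @ Alpha ~= K_learned
    Alpha = np.linalg.solve(K_base + lam * np.eye(len(X)), K_learned)
    return (rbf(x_new[None, :], X) @ Alpha).ravel()
```
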

  19. Experimental pencil beam kernels derivation for 3D dose calculation in flattening filter free modulated fields

    NASA Astrophysics Data System (ADS)

    Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier

    2016-01-01

    This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work a formalism was developed for ideal flat fluences exiting the linac’s head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis where any spatially varying fluence can be used such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated to the kernel derivation with this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at the depths of 5, 10, 15, and 20cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50mm diameter circular field, collimated with a lead block. The 3D kernel for a FFF beam was obtained by deconvolution using the Hankel transform. A correction on the low dose part of the kernel was performed to reproduce accurately the experimental output factors. Error uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically in different treatment localizations were irradiated at four measurement depths (total of fifty-four film measurements). Comparison through the gamma-index to their corresponding calculated absolute dose distributions showed a number of passing points (3%, 3mm) mostly above 99%. This new procedure is more reliable and robust than the previous one. 
Its ability to perform accurate independent dose calculations was demonstrated.

  20. Approximation of the breast height diameter distribution of two-cohort stands by mixture models III Kernel density estimators vs mixture models

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...
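
    The two-component mixture density used for such two-cohort dbh distributions can be sketched as follows, for the Weibull case. The parameter values in the test are illustrative only, not fitted values from the study.

```python
import math

def weibull_pdf(x, shape, scale):
    """Density of the two-parameter Weibull distribution (x > 0)."""
    return (shape / scale) * (x / scale) ** (shape - 1) * math.exp(-(x / scale) ** shape)

def dbh_mixture_pdf(x, w, comp1, comp2):
    """Two-component Weibull mixture w*f1 + (1-w)*f2; each cohort of a
    two-cohort stand contributes one component."""
    return w * weibull_pdf(x, *comp1) + (1.0 - w) * weibull_pdf(x, *comp2)
```

    Fitting w and the shape/scale pairs is typically done by maximum likelihood (e.g. EM); a kernel density estimator instead smooths the empirical dbh values directly, which is the comparison the paper draws.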

  1. LoCoH: Non-parameteric kernel methods for constructing home ranges and utilization distributions

    USGS Publications Warehouse

    Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from the k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH methods to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
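
    The k-LoCoH construction can be sketched as follows: one local convex hull per root point, built from the root and its k-1 nearest neighbours. This is a bare sketch (the full method sorts hulls and unions them into density isopleths, which is omitted here).

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for seq in (pts, pts[::-1]):          # lower chain, then upper chain
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull.extend(chain[:-1])
    return hull

def k_locoh(points, k):
    """One local hull per root point from the root and its k-1 nearest
    neighbours (Euclidean distance)."""
    return [convex_hull(sorted(points, key=lambda p: math.dist(p, root))[:k])
            for root in points]
```
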

  2. Improvements to the kernel function method of steady, subsonic lifting surface theory

    NASA Technical Reports Server (NTRS)

    Medan, R. T.

    1974-01-01

    The application of a kernel function lifting surface method to three dimensional, thin wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.

  3. The effects of food irradiation on quality of pine nut kernels

    NASA Astrophysics Data System (ADS)

    Gölge, Evren; Ova, Gülden

    2008-03-01

    Pine nuts (Pinus pinae) underwent gamma irradiation with doses of 0.5, 1.0, 3.0, and 5.0 kGy. The changes in chemical, physical and sensory attributes were observed over the following 3 months of storage. The data obtained from the experiments showed that the peroxide values of the pine nut kernels increased proportionally to the dose. On the contrary, the irradiation process had no effect on physical quality such as texture and color, fatty acid composition, or sensory attributes.

  4. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    The Poloidal Diverter Experiment (PDX) facility at Princeton University is the first operating tokamak to require substantial radiation shielding. A calculational model has been developed to estimate the radiation dose in the PDX control room and at the site boundary due to the skyshine effect. An efficient one-dimensional method is used to compute the neutron and capture gamma leakage currents at the top surface of the PDX roof shield. This method employs an S_n calculation in slab geometry and, for the PDX, is superior to spherical models found in the literature. If certain conditions are met, the slab model provides the exact probability of leakage out the top surface of the roof for fusion source neutrons and for capture gamma rays produced in the PDX floor and roof shield. The model also provides the correct neutron and capture gamma leakage current spectra and angular distributions, averaged over the top roof shield surface. For the PDX, this method is nearly as accurate as multidimensional techniques for computing the roof leakage and is much less costly. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab S_n calculation. The capture gamma dose is computed using a simple point-kernel single-scatter method.

  5. Gamma irradiation of peanut kernel to control mold growth and to diminish aflatoxin contamination

    NASA Astrophysics Data System (ADS)

    Y.-Y. Chiou, R.

    1996-09-01

    Peanut kernels inoculated with Aspergillus parasiticus conidia were gamma irradiated at 0, 2.5, 5.0 and 10 kGy using a 60Co source. Levels higher than 2.5 kGy were effective in retarding the outgrowth of A. parasiticus and in reducing the population of natural mold contaminants. However, complete elimination of these molds was not achieved even at the dose of 10 kGy. After 4 wk of incubation of the inoculated kernels under humid conditions, the aflatoxins produced by the surviving A. parasiticus were 69.12, 2.42, 57.36 and 22.28 μg/g, corresponding to the respective irradiation levels. The peroxide content of peanut oils prepared from the irradiated peanuts increased with irradiation dose. After storage, at each irradiation level, the peroxide content of peanuts stored at -14°C was lower than that of peanuts stored at ambient temperature. TBA values and CDHP contents of the oil increased with irradiation dose and changed only slightly after storage. Fatty acid contents of the peanut oil, however, varied within a limited range as affected by irradiation dose and storage temperature. SDS-PAGE protein patterns of the peanuts revealed no noticeable change in protein subunits resulting from irradiation or storage.

  6. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

    Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and off-axis kernels representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were co-generated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity and a simultaneous reduction of >60% in computational overhead. 3D dose for 10^6 voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory-based deliveries. This research was partially supported by Varian.

  7. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but that they must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their Poisson-Wiener counterparts, which depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
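    The cross-correlation estimate of a first-order kernel mentioned above can be illustrated with a binned Bernoulli approximation of a Poisson spike train; the kernel values, spike rate, and record length below are arbitrary choices for this sketch, not parameters from the paper:

```python
import random

random.seed(0)
N = 20000
rate = 0.05                       # per-bin spike probability (assumed)
h_true = [1.0, 0.6, 0.3, 0.1]     # first-order kernel used to synthesize data

# binned Poisson-like input and its noiseless linear response
x = [1 if random.random() < rate else 0 for _ in range(N)]
y = [sum(h_true[k] * x[n - k] for k in range(len(h_true)) if n - k >= 0)
     for n in range(N)]

# first-order kernel estimate: input-output covariance at each lag,
# normalized by the input variance (Bernoulli bins: p*(1-p))
xbar = sum(x) / N
ybar = sum(y) / N
h_est = []
for tau in range(len(h_true)):
    cov = sum((y[n] - ybar) * (x[n - tau] - xbar)
              for n in range(tau, N)) / (N - tau)
    h_est.append(cov / (xbar * (1.0 - xbar)))
```

    For a purely first-order system this covariance estimate recovers the kernel; the paper's point is that for higher orders the cross-correlation (Wiener) route needs the diagonal convention, while the Laguerre-expansion route estimates Volterra kernels more efficiently.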

  8. Proximate Nutritional Evaluation of Gamma Irradiated Black Rice (Oryza sativa L. cv. Cempo ireng)

    NASA Astrophysics Data System (ADS)

    Riyatun; Suharyana; Ramelan, A. H.; Sutarno; Saputra, O. A.; Suryanti, V.

    2018-03-01

    Black rice is a type of pigmented rice with black bran covering the endosperm of the rice kernel. The main objective of the present study was to provide detailed information on the proximate composition of the third generation of gamma irradiated black rice (Oryza sativa L. cv. Cempo ireng). Relative to the control, no significant changes in moisture, lipid, protein, carbohydrate or fiber contents were observed for either gamma irradiated black rice. However, 200-BR had slightly better nutritional value than 300-BR and the control. The mineral content of 200-BR increased significantly, by about 35%, compared with the non-gamma irradiated black rice.

  9. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.

  10. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound that quantifies the effect on the hypothesis of approximating the kernel matrix by multilevel circulant matrices, and we further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with the exact kernel matrix. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
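    A one-level toy version of the computational idea in this record: a circulant surrogate for a stationary kernel matrix is diagonalized by the discrete Fourier transform, so its entire spectrum comes from a single transform of the first column (O(n log n) with an FFT). The grid, bandwidth, and kernel choice below are illustrative assumptions:

```python
import cmath
import math

def gaussian_kernel(d, sigma=1.0):
    """Stationary Gaussian kernel as a function of distance."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

n, dx = 8, 0.5
# circulant surrogate for the kernel matrix of n evenly spaced 1D points:
# first column built from the wrapped (circular) grid distance
c = [gaussian_kernel(dx * min(j, n - j)) for j in range(n)]

def dft(v):
    """Naive discrete Fourier transform (an FFT would be used in practice)."""
    m = len(v)
    return [sum(v[j] * cmath.exp(-2j * math.pi * j * k / m) for j in range(m))
            for k in range(m)]

# eigenvalues of a circulant matrix are the DFT of its first column, so any
# spectrum-based kernel-selection criterion becomes cheap to evaluate
eig = dft(c)
```

    The symmetric first column makes the eigenvalues real; the paper's multilevel construction extends the same trick to multidimensional data.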

  11. Single kernel ionomic profiles are highly heritable indicators of genetic and environmental influences on elemental accumulation in maize grain (Zea mays)

    USDA-ARS?s Scientific Manuscript database

    The ionome, or elemental profile, of a maize kernel represents at least two distinct ideas. First, the collection of elements within the kernel are food, feed and feedstocks for people, animals and industrial processes. Second, the ionome of the kernel represents a developmental end point that can s...

  12. Application of the matrix exponential kernel

    NASA Technical Reports Server (NTRS)

    Rohach, A. F.

    1972-01-01

    A point matrix kernel for radiation transport, developed by the transmission matrix method, has been used to develop buildup factors and energy spectra through slab layers of different materials for a point isotropic source. Combinations of lead-water slabs were chosen for examples because of the extreme differences in shielding properties of these two materials.
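    The transmission-matrix idea behind this record can be caricatured with a two-component slab model in which the uncollided and scattered fluxes propagate according to a matrix exponential. The coefficients and the single-scatter bookkeeping are invented for illustration and are far cruder than the method of the record:

```python
import math

def mat_exp_2x2(A, terms=30):
    """2x2 matrix exponential by plain Taylor series (adequate for small norms)."""
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(2)) / k
                 for j in range(2)] for i in range(2)]
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

mu_t, mu_s = 0.2, 0.15      # total and scattering coefficients (1/cm), assumed
x = 5.0                     # slab thickness (cm), assumed
# uncollided flux decays with mu_t; the scattered component is fed by
# scattering and decays only with the absorption part (mu_t - mu_s)
A = [[-mu_t, 0.0],
     [mu_s, -(mu_t - mu_s)]]
T = mat_exp_2x2([[a * x for a in row] for row in A])
phi_u, phi_s = T[0][0], T[1][0]     # response to a unit uncollided source
buildup = (phi_u + phi_s) / phi_u   # crude slab buildup factor
```

    The buildup factor falls out as the ratio of total to uncollided transmitted flux, which is how slab buildup factors are conventionally defined.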

  13. Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain

    ERIC Educational Resources Information Center

    Hannagan, Thomas; Grainger, Jonathan

    2012-01-01

    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…

  14. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.

  15. Alternative Derivations for the Poisson Integral Formula

    ERIC Educational Resources Information Center

    Chen, J. T.; Wu, C. S.

    2006-01-01

    Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…

  16. Viscoelastic Timoshenko Beams with Occasionally Constant Relaxation Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tatar, Nasser-eddine, E-mail: tatarn@kfupm.edu.sa

    2012-08-15

    For a prescribed, arbitrary desired decay rate, suitable viscoelastic materials are determined through their relaxation functions. It is shown that if we wish to have a decay of order γ(t), then the kernels should be of the same order; that is, their product with this function should be summable.

  17. A 3D Ginibre Point Field

    NASA Astrophysics Data System (ADS)

    Kargin, Vladislav

    2018-06-01

    We introduce a family of three-dimensional random point fields using the concept of the quaternion determinant. The kernel of each field is an n-dimensional orthogonal projection on a linear space of quaternionic polynomials. We find explicit formulas for the basis of the orthogonal quaternion polynomials and for the kernel of the projection. As the number of particles n → ∞, we calculate the scaling limits of the point field in the bulk and at the center of coordinates. We compare our construction with the previously introduced Fermi-sphere point field process.

  18. Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2017-10-01

    FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an rf field, the hot plasma dielectric response is limited to a distance of a few particle Larmor radii from the magnetic field line passing through the test point. This localization of the hot plasma dielectric response results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present initial results of modeling of rf waves using the FullWave code, including calculation of the nonlocal conductivity kernel in 2D tokamak geometry; interpolation of the conductivity kernel from test points to the adaptive cloud of computational points; and results of self-consistent simulations of 2D rf fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.

  19. Development of radiation indicators to distinguish between irradiated and non-irradiated herbal medicines using HPLC and GC-MS.

    PubMed

    Kim, Min Jung; Ki, Hyeon A; Kim, Won Young; Pal, Sukdeb; Kim, Byeong Keun; Kang, Woo Suk; Song, Joon Myong

    2010-09-01

    The effects of high dose γ-irradiation on six herbal medicines were investigated using gas chromatography-mass spectrometry (GC/MS) and high-performance liquid chromatography (HPLC). Herbal medicines were irradiated at 0-50 kGy with a (60)Co irradiator. HPLC was used to quantify changes in major components, including glycyrrhizin, cinnamic acid, poncirin, hesperidin, berberine, and amygdalin, in licorice, cinnamon bark, Poncirus immature fruit, citrus unshiu peel, coptis rhizome, and apricot kernel, respectively. No significant differences were found between gamma-irradiated and non-irradiated samples with regard to the amounts of glycyrrhizin, berberine, and amygdalin. However, the contents of cinnamic acid, poncirin, and hesperidin were increased after irradiation. Volatile compounds were analyzed by GC/MS. The relative proportion of ketone in licorice was diminished after irradiation. The relative amount of hydrocarbons in irradiated cinnamon bark and apricot kernel was higher than that in non-irradiated samples. Therefore, ketone in licorice and hydrocarbons in cinnamon bark and apricot kernel can be considered radiolytic markers. Three unsaturated hydrocarbons, i.e., 1,7,10-hexadecatriene, 6,9-heptadecadiene, and 8-heptadecene, were detected only in apricot kernels irradiated at 25 and 50 kGy. These three hydrocarbons could be used as radiolytic markers to distinguish between irradiated (>25 kGy) and non-irradiated apricot kernels.

  20. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  1. Kernel-Based Sensor Fusion With Application to Audio-Visual Voice Activity Detection

    NASA Astrophysics Data System (ADS)

    Dov, David; Talmon, Ronen; Cohen, Israel

    2016-12-01

    In this paper, we address the problem of multiple-view data fusion in the presence of noise and interferences. Recent studies have approached this problem using kernel methods, relying in particular on a product of kernels constructed separately for each view. From a graph theory point of view, we analyze this fusion approach in a discrete setting. More specifically, based on a statistical model for the connectivity between data points, we propose an algorithm for the selection of the kernel bandwidth, a parameter which, as we show, has important implications for the robustness of this fusion approach to interferences. Then, we consider the fusion of audio-visual speech signals measured by a single microphone and by a video camera pointed at the face of the speaker. Specifically, we address the task of voice activity detection, i.e., the detection of speech and non-speech segments, in the presence of structured interferences such as keyboard taps and office noise. We propose an algorithm for voice activity detection based on the audio-visual signal. Simulation results show that the proposed algorithm outperforms competing fusion and voice activity detection approaches. In addition, we demonstrate that a proper selection of the kernel bandwidth indeed leads to improved performance.
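    The product-of-kernels fusion this record builds on can be sketched in a few lines; the two "views" and the bandwidth below are toy assumptions standing in for the audio and video features:

```python
import math

def gaussian_affinity(points, bandwidth):
    """Pairwise Gaussian kernel (affinity) matrix for one view of the data."""
    n = len(points)
    return [[math.exp(-(points[i] - points[j]) ** 2 / (2.0 * bandwidth ** 2))
             for j in range(n)] for i in range(n)]

# two noisy views of the same 1D latent structure: two well-separated groups
# (the data values and the bandwidth of 0.5 are illustrative choices)
latent = [0.0, 0.1, 0.2, 3.0, 3.1, 3.2]
view_a = [v + 0.01 for v in latent]
view_b = [v - 0.02 for v in latent]

Ka = gaussian_affinity(view_a, 0.5)
Kb = gaussian_affinity(view_b, 0.5)

# fused kernel: the entrywise product keeps only connections that are strong
# in both views, which is what makes the fusion robust to view-specific noise
K = [[Ka[i][j] * Kb[i][j] for j in range(len(latent))]
     for i in range(len(latent))]
```

    The bandwidth plays exactly the role analyzed in the paper: too small and genuine neighbors disconnect, too large and interference-induced connections survive the product.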

  2. Absorbed dose kernel and self-shielding calculations for a novel radiopaque glass microsphere for transarterial radioembolization.

    PubMed

    Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair

    2018-02-01

    Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres, including contributions of both short- and long-lived contaminant radionuclides, while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time post-removal from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions: on the order of 0.4-2.8% at a radial distance of 5.43 mm as the microsphere diameter increases from 10 to 50 μm, during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the microspheres based on weighted activities. The shapes of the absorbed dose kernels are dominated at short times post-activation by the contributions of 70Ga and 72Ga. Following decay of the short-lived contaminants, the absorbed dose kernel is effectively that of 90Y. After approximately 1000 h post-activation, the contributions of 85Sr and 89Sr become increasingly dominant, though the absorbed dose rate around the beads drops by roughly four orders of magnitude. The introduction of high atomic number elements for the purpose of increasing radiopacity necessarily leads to the production of radionuclides other than 90Y in the microspheres. Most of the radionuclides in this study are short-lived and are likely not of any significant concern for this therapeutic agent. The presence of small quantities of longer lived radionuclides will change the shape of the absorbed dose kernel around a microsphere at long time points post-administration, when activity levels are significantly reduced. © 2017 American Association of Physicists in Medicine.
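    The activity-weighted overall kernel described in this record can be mimicked as follows. The per-nuclide kernel values and initial relative activities are placeholders (only the half-lives are standard nuclear data), so this is a shape illustration, not dosimetry:

```python
import math

# per-nuclide dose-kernel values at one fixed radial distance, normalized per
# decay, and relative activities at the end of activation -- hypothetical
# numbers chosen only to reproduce the qualitative behaviour in the abstract
kernels = {"Y90": 1.0, "Ga72": 1.8, "Sr89": 0.7}
half_life_h = {"Y90": 64.1, "Ga72": 14.1, "Sr89": 50.5 * 24.0}
activity_0 = {"Y90": 1.0, "Ga72": 0.05, "Sr89": 1e-4}

def weighted_kernel(t_h):
    """Activity-weighted overall kernel value t_h hours post-activation."""
    act = {n: activity_0[n] * math.exp(-math.log(2.0) * t_h / half_life_h[n])
           for n in kernels}
    total = sum(act.values())
    return sum(kernels[n] * act[n] for n in kernels) / total
```

    With these placeholder weights the short-lived 72Ga pulls the kernel up at early times, 90Y dominates in the therapeutic window, and the trace 89Sr takes over at very long times, mirroring the time evolution reported in the record.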

  3. Noise kernels of stochastic gravity in conformally-flat spacetimes

    NASA Astrophysics Data System (ADS)

    Cho, H. T.; Hu, B. L.

    2015-03-01

    The central object in the theory of semiclassical stochastic gravity is the noise kernel, which is the symmetric two point correlation function of the stress-energy tensor. Using the corresponding Wightman functions in Minkowski, Einstein and open Einstein spaces, we construct the noise kernels of a conformally coupled scalar field in these spacetimes. From them we show that the noise kernels in conformally-flat spacetimes, including the Friedmann-Robertson-Walker universes, can be obtained in closed analytic forms by using a combination of conformal and coordinate transformations.

  4. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images.

    PubMed

    Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P

    2017-01-01

    Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
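    The major/minor axis measurement from contour points described above amounts to a principal-components (covariance eigenvalue) computation. A sketch on a synthetic elliptical contour (the contour is simulated, not image-derived, and the Bayesian tip detection of the record is omitted):

```python
import math

# synthetic elliptical "kernel contour" with assumed semi-axes 3.0 and 1.0,
# sampled at 100 uniformly spaced parameter values
pts = [(3.0 * math.cos(t), 1.0 * math.sin(t))
       for t in (2.0 * math.pi * i / 100.0 for i in range(100))]

n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
sxx = sum((p[0] - mx) ** 2 for p in pts) / n
syy = sum((p[1] - my) ** 2 for p in pts) / n
sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n

# principal components of the contour: eigenvalues of the 2x2 covariance
tr, det = sxx + syy, sxx * syy - sxy * sxy
disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc

# for uniformly sampled ellipse points, var(a*cos t) = a^2 / 2, so the full
# axis length is recovered as 2*sqrt(2*lambda)
major = 2.0 * math.sqrt(2.0 * lam1)
minor = 2.0 * math.sqrt(2.0 * lam2)
```

    On this ideal contour the computation recovers the full axis lengths 6.0 and 2.0 exactly; real kernel contours would add noise and the tip-finding step.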

  5. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
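    The constrained mean-square-optimal idea can be shown in 1D: synthesize a scene, blur it with a known PSF, add noise, and solve the normal equations for the best length-3 restoration kernel. The scene statistics, PSF, and noise level are all assumptions of this sketch, and circular indexing replaces the record's full imaging-system model:

```python
import random

random.seed(1)
N = 4000
scene = [random.gauss(0.0, 1.0) for _ in range(N)]
psf = [0.25, 0.5, 0.25]                       # known blur kernel (assumed)
observed = [sum(psf[k] * scene[(n - k + 1) % N] for k in range(3))
            + random.gauss(0.0, 0.05) for n in range(N)]

def ccorr(a, b, lag):
    """Circular correlation (1/N) * sum a[n] * b[n - lag]."""
    return sum(a[n] * b[(n - lag) % N] for n in range(N)) / N

# normal equations R w = p for the length-3 kernel minimizing the empirical
# mean-square restoration error (R: observed autocorrelation, p: scene-observed
# cross-correlation)
R = [[ccorr(observed, observed, i - j) for j in range(3)] for i in range(3)]
p = [ccorr(scene, observed, i - 1) for i in range(3)]

# solve the 3x3 system by Gauss-Jordan elimination with partial pivoting
M = [R[i][:] + [p[i]] for i in range(3)]
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(3):
        if r != col:
            f = M[r][col] / M[col][col]
            M[r] = [M[r][c] - f * M[col][c] for c in range(4)]
w = [M[i][3] / M[i][i] for i in range(3)]

restored = [sum(w[i] * observed[(n - i + 1) % N] for i in range(3))
            for n in range(N)]
mse_obs = sum((o - s) ** 2 for o, s in zip(observed, scene)) / N
mse_res = sum((r - s) ** 2 for r, s in zip(restored, scene)) / N
```

    Because the identity kernel is in the feasible set, the solved 3-tap kernel can never do worse than no restoration, which is the 1D analogue of the small-kernel-versus-unconstrained-Wiener comparison in the record.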

  6. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    NASA Astrophysics Data System (ADS)

    Nigg, D. W.; Wheeler, F. J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.


  8. Richardson-Lucy deblurring for the star scene under a thinning motion path

    NASA Astrophysics Data System (ADS)

    Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining

    2015-05-01

    This paper focuses on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Because accurate estimation of the point spread function (PSF) is critical, a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, we present how the blurred star image can be corrected to reconstruct the clear scene using a thinned-motion-path blur model that describes the camera's path. This thinned-path blur kernel model is more effective at modeling the motion blur introduced by the camera's ego motion than conventional blind, kernel-based PSF parameterization. To reconstruct the image, an improved thinning algorithm is first used to obtain the star point trajectory and thereby extract the blur kernel of the motion-blurred star image. We then detail how the motion blur model can be incorporated into the Richardson-Lucy (RL) deblurring algorithm and demonstrate its overall effectiveness. In addition, compared with a conventionally estimated blur kernel, experimental results show that the proposed thinning-based motion blur kernel is of lower complexity, higher efficiency and better accuracy, which contributes to better restoration of the motion-blurred star images.
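    The Richardson-Lucy step used in this record is easy to demonstrate in 1D with a known motion-path kernel; the uniform 5-sample blur and the single star-like source are assumptions of the sketch, not data from the paper:

```python
def convolve(signal, kern):
    """Clamped-boundary 1D convolution with a centred kernel."""
    n, m = len(signal), len(kern)
    half = m // 2
    return [sum(kern[k] * signal[min(max(i + k - half, 0), n - 1)]
                for k in range(m)) for i in range(n)]

kern = [0.2] * 5                          # uniform motion-path blur (assumed)
truth = [0.0] * 20 + [5.0] + [0.0] * 20   # a single star-like point source
blurred = convolve(truth, kern)

# Richardson-Lucy iterations: multiply the estimate by the back-projected
# ratio of the observed image to the re-blurred estimate
estimate = [1.0] * len(truth)
for _ in range(200):
    reblurred = convolve(estimate, kern)
    ratio = [b / max(e, 1e-12) for b, e in zip(blurred, reblurred)]
    correction = convolve(ratio, kern[::-1])
    estimate = [e * c for e, c in zip(estimate, correction)]
```

    The multiplicative update keeps the estimate non-negative and progressively re-concentrates the flux of the blurred plateau back toward the point source, which is why an accurate path-derived kernel matters so much for star images.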

  9. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integral of the gradient of the wire kernel must be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  10. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed good agreement, with on average 90.8% and 90.5% of pixels passing a (2%, 2 mm) global gamma analysis, respectively, with a low-dose threshold of 10%. The maximum and overall uncertainty of the model depend on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
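    The (2%, 2 mm) global gamma criterion used in this record and several others above can be sketched directly; this follows one common convention (dose tolerance as a fraction of the reference maximum, search over a finite window), and the grids below are toy inputs:

```python
import math

def gamma_pass_rate(ref, eva, spacing, dose_tol, dist_tol, threshold_frac=0.1):
    """Fraction of reference points with gamma index <= 1.

    ref, eva  -- 2D dose grids (reference and evaluated distributions)
    spacing   -- pixel spacing in mm; dose_tol as a fraction of max(ref);
    dist_tol  -- distance-to-agreement tolerance in mm
    """
    dmax = max(max(row) for row in ref)
    dd = dose_tol * dmax
    ny, nx = len(ref), len(ref[0])
    search = int(math.ceil(2.0 * dist_tol / spacing))   # search window, pixels
    passed = total = 0
    for i in range(ny):
        for j in range(nx):
            if ref[i][j] < threshold_frac * dmax:       # low-dose threshold
                continue
            total += 1
            best = float("inf")
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dist2 = (di * di + dj * dj) * spacing * spacing
                        diff = eva[ii][jj] - ref[i][j]
                        best = min(best, diff * diff / (dd * dd)
                                   + dist2 / (dist_tol * dist_tol))
            if best <= 1.0:
                passed += 1
    return passed / total if total else 1.0

flat = [[1.0 for _ in range(6)] for _ in range(6)]
hot = [[1.5 for _ in range(6)] for _ in range(6)]
same_rate = gamma_pass_rate(flat, flat, 1.0, 0.03, 3.0)   # identical grids
off_rate = gamma_pass_rate(flat, hot, 1.0, 0.03, 3.0)     # 50% overdose
```

    An identical pair passes everywhere and a uniform 50% overdose fails everywhere, since no spatial shift can rescue a global dose error on a flat field.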

  11. Calculating the Responses of Self-Powered Radiation Detectors.

    NASA Astrophysics Data System (ADS)

    Thornton, D. A.

Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self-Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement, explanations have been sought and presented. Two major limitations of analytic models have been identified: neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model based on the Explicit Representation of Radiation Sources and Transport (ERRST) is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field as well as the internal charge deposition effects of the transport of photons and electrons have been calculated with coupled Monte Carlo calculations of photon and electron transport.
The predicted response of a SPD is evaluated as the sum of contributions from individual response mechanisms.

  12. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere, with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel shares features with the theoretical damping kernel but not with the source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
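The quoted numbers are internally consistent: with the deep-water dispersion relation ω² = gk and the solar surface gravity g ≈ 274 m/s², a 3 mHz f mode has a horizontal wavelength of about 5 Mm, as the abstract states.

```python
import math

g_sun = 274.0               # solar surface gravity, m/s^2
nu = 3.0e-3                 # cyclic frequency, Hz (power maximum near 3 mHz)
omega = 2.0 * math.pi * nu  # angular frequency, rad/s
k = omega ** 2 / g_sun      # deep-water dispersion: omega^2 = g * k
lam = 2.0 * math.pi / k     # horizontal wavelength, m (~4.8e6 m, i.e. ~5 Mm)
```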

  13. Exploring microwave resonant multi-point ignition using high-speed schlieren imaging

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Zhang, Guixin; Xie, Hong; Deng, Lei; Wang, Zhi

    2018-03-01

    Microwave plasma offers a potential method to achieve rapid combustion in a high-speed combustor. In this paper, microwave resonant multi-point ignition and its control method have been studied via high-speed schlieren imaging. The experiment was conducted with the microwave resonant ignition system and the schlieren optical system. The microwave pulse in 2.45 GHz with 2 ms width and 3 kW peak power was employed as an ignition energy source to produce initial flame kernels in the combustion chamber. A reflective schlieren method was designed to illustrate the flame development process with a high-speed camera. The bottom of the combustion chamber was made of a quartz glass coated with indium tin oxide, which ensures sufficient microwave reflection and light penetration. Ignition experiments were conducted at 2 bars of stoichiometric methane-air mixtures. Schlieren images show that flame kernels were generated at more than one location simultaneously and flame propagated with different speeds in different flame kernels. Ignition kernels were discussed in three types according to their appearances. Pressure curves and combustion duration also show that multi-point ignition plays a significant role in accelerating combustion.

  14. Exploring microwave resonant multi-point ignition using high-speed schlieren imaging.

    PubMed

    Liu, Cheng; Zhang, Guixin; Xie, Hong; Deng, Lei; Wang, Zhi

    2018-03-01

    Microwave plasma offers a potential method to achieve rapid combustion in a high-speed combustor. In this paper, microwave resonant multi-point ignition and its control method have been studied via high-speed schlieren imaging. The experiment was conducted with the microwave resonant ignition system and the schlieren optical system. The microwave pulse in 2.45 GHz with 2 ms width and 3 kW peak power was employed as an ignition energy source to produce initial flame kernels in the combustion chamber. A reflective schlieren method was designed to illustrate the flame development process with a high-speed camera. The bottom of the combustion chamber was made of a quartz glass coated with indium tin oxide, which ensures sufficient microwave reflection and light penetration. Ignition experiments were conducted at 2 bars of stoichiometric methane-air mixtures. Schlieren images show that flame kernels were generated at more than one location simultaneously and flame propagated with different speeds in different flame kernels. Ignition kernels were discussed in three types according to their appearances. Pressure curves and combustion duration also show that multi-point ignition plays a significant role in accelerating combustion.

  15. Fission Product Inventory and Burnup Evaluation of the AGR-2 Irradiation by Gamma Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harp, Jason Michael; Stempien, John Dennis; Demkowicz, Paul Andrew

Gamma spectrometry has been used to evaluate the burnup and fission product inventory of different components from the US Advanced Gas Reactor Fuel Development and Qualification Program's second TRISO-coated particle fuel irradiation test (AGR-2). TRISO fuel in this irradiation included both uranium carbide/uranium oxide (UCO) kernels and uranium oxide (UO2) kernels. Four of the 6 capsules contained fuel from the US Advanced Gas Reactor program, and only those capsules will be discussed in this work. The inventories of gamma-emitting fission products from the fuel compacts, graphite compact holders, graphite spacers and test capsule shell were evaluated. These data were used to measure the fractional release of fission products such as Cs-137, Cs-134, Eu-154, Ce-144, and Ag-110m from the compacts. The fraction of Ag-110m retained in the compacts ranged from 1.8% to full retention. Additionally, the activities of the radioactive cesium isotopes (Cs-134 and Cs-137) have been used to evaluate the burnup of all US TRISO fuel compacts in the irradiation. The experimental burnup evaluations compare favorably with burnups predicted from physics simulations. Predicted burnups for UCO compacts range from 7.26 to 13.15% fission per initial metal atom (FIMA) and 9.01 to 10.69% FIMA for UO2 compacts. Measured burnup ranged from 7.3 to 13.1% FIMA for UCO compacts and 8.5 to 10.6% FIMA for UO2 compacts. Results from gamma emission computed tomography performed on compacts and graphite holders that reveal the distribution of different fission products in a component will also be discussed. Gamma tomography of graphite holders was also used to locate the position of TRISO fuel particles suspected of having silicon carbide layer failures that lead to in-pile cesium release.

  16. Fission Product Inventory and Burnup Evaluation of the AGR-2 Irradiation by Gamma Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harp, Jason M.; Demkowicz, Paul A.; Stempien, John D.

Gamma spectrometry has been used to evaluate the burnup and fission product inventory of different components from the US Advanced Gas Reactor Fuel Development and Qualification Program's second TRISO-coated particle fuel irradiation test (AGR-2). TRISO fuel in this irradiation included both uranium carbide/uranium oxide (UCO) kernels and uranium oxide (UO2) kernels. Four of the 6 capsules contained fuel from the US Advanced Gas Reactor program, and only those capsules will be discussed in this work. The inventories of gamma-emitting fission products from the fuel compacts, graphite compact holders, graphite spacers and test capsule shell were evaluated. These data were used to measure the fractional release of fission products such as Cs-137, Cs-134, Eu-154, Ce-144, and Ag-110m from the compacts. The fraction of Ag-110m retained in the compacts ranged from 1.8% to full retention. Additionally, the activities of the radioactive cesium isotopes (Cs-134 and Cs-137) have been used to evaluate the burnup of all US TRISO fuel compacts in the irradiation. The experimental burnup evaluations compare favorably with burnups predicted from physics simulations. Predicted burnups for UCO compacts range from 7.26 to 13.15% fission per initial metal atom (FIMA) and 9.01 to 10.69% FIMA for UO2 compacts. Measured burnup ranged from 7.3 to 13.1% FIMA for UCO compacts and 8.5 to 10.6% FIMA for UO2 compacts. Results from gamma emission computed tomography performed on compacts and graphite holders that reveal the distribution of different fission products in a component will also be discussed. Gamma tomography of graphite holders was also used to locate the position of TRISO fuel particles suspected of having silicon carbide layer failures that lead to in-pile cesium release.

  17. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    NASA Astrophysics Data System (ADS)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from GP depend on the covariance kernel function selected to explain the data variance. To overcome the limitations of a fixed kernel choice, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly-inherited Alzheimer's disease, and used it to predict the time to clinical onset of subjects carrying the genetic mutation.
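The kind of energy function described — penalizing model complexity via BIC while rewarding explained variance — can be sketched generically. The combination below, including its weights and the function name, is an illustrative assumption, not the paper's actual CKL scoring rule.

```python
import numpy as np

def model_energy(y, y_pred, n_params, w_bic=1.0, w_ev=1.0):
    """Score a fitted model: lower energy is better.

    Combines the Bayesian Information Criterion with the explained-variance
    score; the weights w_bic and w_ev are illustrative placeholders, not the
    values used by the CKL variant described above.
    """
    n = len(y)
    rss = ((y - y_pred) ** 2).sum()                # residual sum of squares
    bic = n * np.log(rss / n) + n_params * np.log(n)
    ev = 1.0 - rss / ((y - y.mean()) ** 2).sum()   # explained-variance score
    return w_bic * bic - w_ev * ev
```

Candidate kernel compositions would each be fitted, scored this way, and the lowest-energy composition retained.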

  18. The flare kernel in the impulsive phase

    NASA Technical Reports Server (NTRS)

    Dejager, C.

    1986-01-01

The impulsive phase of a flare is characterized by impulsive bursts of X-ray and microwave radiation, related to impulsive footpoint heating up to 50 or 60 MK, by upward gas velocities (150 to 400 km/sec) and by a gradual increase of the flare's thermal energy content. These phenomena, as well as non-thermal effects, are all related to the impulsive energy injection into the flare. The available observations are also quantitatively consistent with a model in which energy is injected into the flare by beams of energetic electrons, causing ablation of chromospheric gas, followed by convective rise of gas. Thus, a hole is burned into the chromosphere; at the end of the impulsive phase of an average flare the lower part of that hole is situated about 1800 km above the photosphere. H alpha and other optical and UV line emission is radiated by a thin layer (approx. 20 km) at the bottom of the flare kernel. The upward rising and outward streaming gas cools down by conduction in about 45 s. The non-thermal effects in the initial phase are due to curtailing of the energy distribution function by escape of energetic electrons. The single flux tube model of a flare does not fit these observations; instead we propose the spaghetti-bundle model. Microwave and gamma-ray observations suggest the occurrence of dense flare knots of approx. 800 km diameter and of high temperature. Future observations should concentrate on locating the microwave/gamma-ray sources, and on determining the kernel's fine structure and the related multi-loop structure of the flaring area.

  19. On the Floating Point Performance of the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1997-01-01

The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels, however, was lower than memory bandwidth limitations alone would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
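The latency argument can be caricatured with a simple model: if every cache line pays a fixed miss latency before its bytes stream at the raw transfer rate, the delivered bandwidth falls below the raw figure. The function and all numbers below are illustrative assumptions, not the paper's measured i860 values.

```python
def effective_bandwidth(line_bytes, raw_bw, miss_latency):
    """Delivered bandwidth when every cache line pays a fixed miss latency.

    line_bytes  : bytes per cache line
    raw_bw      : streaming transfer rate once a line starts (bytes/s)
    miss_latency: stall before each line begins arriving (s)
    """
    t_line = miss_latency + line_bytes / raw_bw  # total time to obtain one line
    return line_bytes / t_line

# Illustrative: 32-byte lines at 160 MB/s raw with a 0.3 us miss penalty
# deliver only 64 MB/s -- latency, not raw bandwidth, limits the kernel.
```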

  20. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S

Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10^6 to 3 × 10^7); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10^6 was established as the lower range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run, to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10^6 have too large of an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.
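The scaling reported above is the ideal linear case: run time grows with particle histories and shrinks with process count. A minimal sketch (the function name and per-million-history rate are assumptions chosen to reproduce the quoted 30 h and 0.30 h figures):

```python
def run_hours(histories, hours_per_1e6_single, processes):
    """Ideal (linear) Monte Carlo scaling: time proportional to histories,
    inversely proportional to the number of parallel processes."""
    return histories / 1e6 * hours_per_1e6_single / processes

# From the abstract: 3e6 histories take 30 h on one processor, so the
# 100-process run takes 0.30 h under ideal linear scaling.
t_single = run_hours(3e6, 10.0, 1)    # 30.0 hours
t_100 = run_hours(3e6, 10.0, 100)     # 0.30 hours
```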

  1. Biochemical studies of some non-conventional sources of proteins. Part 7. Effect of detoxification treatments on the nutritional quality of apricot kernels.

    PubMed

    el-Adawy, T A; Rahma, E H; el-Badawey, A A; Gomaa, M A; Lásztity, R; Sarkadi, L

    1994-01-01

Detoxification of apricot kernels by soaking in distilled water and ammonium hydroxide for 30 h at 47 degrees C decreased the total protein, non-protein nitrogen, total ash, glucose, sucrose, minerals, non-essential amino acids, polar amino acids, acidic amino acids, aromatic amino acids, antinutritional factors, hydrocyanic acid, tannins and phytic acid. On the other hand, removal of toxic and bitter compounds from apricot kernels increased the relative content of crude fibre, starch, and total essential amino acids. Higher in-vitro protein digestibility and biological value were also observed. Generally, the detoxified apricot kernels were nutritionally well balanced. Utilization and incorporation of detoxified apricot kernel flours in food products is completely safe from the toxicity point of view.

  2. NARMER-1: a photon point-kernel code with build-up factors

    NASA Astrophysics Data System (ADS)

    Visonneau, Thierry; Pangault, Laurence; Malouch, Fadhel; Malvagi, Fausto; Dolci, Florence

    2017-09-01

This paper presents an overview of NARMER-1, the new generation of photon point-kernel code developed by the Reactor Studies and Applied Mathematics Unit (SERMA) at the CEA Saclay Center. After a short introduction giving some historical background and the current context of development of the code, the paper sets out the principles implemented in the calculation and the physical quantities computed, and surveys the generic features: programming language, computer platforms, geometry package, sources description, etc. Moreover, specific and recent features are also detailed: exclusion sphere, tetrahedral meshes, parallel operations. Some points about verification and validation are then presented. Finally, we present some tools that can help the user with operations like visualization and pre-treatment.
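The point-kernel method such codes implement combines geometric attenuation, exponential attenuation in the shield, and a build-up factor that accounts for scattered photons. A minimal sketch follows; the Taylor-form build-up coefficients here are illustrative placeholders, not NARMER-1's tabulated values.

```python
import math

def point_kernel_flux(S, mu, r, A=10.0, a1=-0.10, a2=0.03):
    """Photon flux at distance r from an isotropic point source (point-kernel method).

    S  : source strength (photons/s)
    mu : linear attenuation coefficient of the shield (1/cm)
    r  : source-to-detector distance (cm)
    The build-up factor uses the Taylor form
        B(mu*r) = A*exp(-a1*mu*r) + (1 - A)*exp(-a2*mu*r),
    with illustrative coefficients; B(0) = 1 recovers the uncollided flux.
    """
    mfp = mu * r  # shield thickness in mean free paths
    B = A * math.exp(-a1 * mfp) + (1.0 - A) * math.exp(-a2 * mfp)
    return S * B * math.exp(-mfp) / (4.0 * math.pi * r ** 2)
```

With `mu = 0` (no shield) this reduces to the bare inverse-square flux S/(4πr²); with a shield, the build-up factor B > 1 raises the result above the uncollided estimate.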

  3. Fatty acid, triacylglycerol, phytosterol, and tocopherol variations in kernel oil of Malatya apricots from Turkey.

    PubMed

    Turan, Semra; Topcu, Ali; Karabulut, Ihsan; Vural, Halil; Hayaloglu, Ali Adnan

    2007-12-26

The fatty acid, sn-2 fatty acid, triacylglycerol (TAG), tocopherol, and phytosterol compositions of kernel oils obtained from nine apricot varieties grown in the Malatya region of Turkey were determined (P < 0.05). The names of the apricot varieties were Alyanak (ALY), Cataloglu (CAT), Cöloglu (COL), Hacihaliloglu (HAC), Hacikiz (HKI), Hasanbey (HSB), Kabaasi (KAB), Soganci (SOG), and Tokaloglu (TOK). The total oil contents of apricot kernels ranged from 40.23 to 53.19%. Oleic acid contributed 70.83% to the total fatty acids, followed by linoleic (21.96%), palmitic (4.92%), and stearic (1.21%) acids. The sn-2 position is mainly occupied by oleic acid (63.54%), linoleic acid (35.0%), and palmitic acid (0.96%). Eight TAG species were identified: LLL, OLL, PLL, OOL+POL, OOO+POO, and SOO (where P, palmitoyl; S, stearoyl; O, oleoyl; and L, linoleoyl), among which OOO+POO contributed most, at 48.64% of the total, followed by OOL+POL at 32.63% and OLL at 14.33%. Four tocopherol and six phytosterol isomers were identified and quantified; among these, gamma-tocopherol (475.11 mg/kg of oil) and beta-sitosterol (273.67 mg/100 g of oil) were predominant. Principal component analysis (PCA) was applied to the data on lipid components of apricot kernel oil in order to explore the distribution of the apricot varieties according to their kernels' lipid components. PCA separated some varieties, including ALY, COL, KAB, CAT, SOG, and HSB, in one group and varieties TOK, HAC, and HKI in another group based on the lipid components of their kernel oils. Thus, in the present study, PCA was found to be a powerful tool for classification of the samples.

  4. Investigation of redshift- and duration-dependent clustering of gamma-ray bursts

    DOE PAGES

    Ukwatta, T. N.; Woźniak, P. R.

    2015-11-05

Gamma-ray bursts (GRBs) are detectable out to very large distances and as such are potentially powerful cosmological probes. Historically, the angular distribution of GRBs provided important information about their origin and physical properties. As a general population, GRBs are distributed isotropically across the sky. However, there are published reports that, once binned by duration or redshift, GRBs display significant clustering. We have studied the redshift- and duration-dependent clustering of GRBs using proximity measures and kernel density estimation. Utilizing bursts detected by the Burst and Transient Source Experiment, the Fermi Gamma-ray Burst Monitor, and the Swift Burst Alert Telescope, we found marginal evidence for clustering in very short duration GRBs lasting less than 100 ms. Beyond that, our analysis provides little evidence for significant redshift-dependent clustering of GRBs.
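Kernel density estimation, one of the two tools named above, can be sketched in a few lines. This is a generic 1D Gaussian KDE (suitable, e.g., for the familiar short/long bimodality of log-durations), not the authors' specific sky-clustering analysis.

```python
import numpy as np

def gaussian_kde(samples, grid, h):
    """Gaussian kernel density estimate evaluated on `grid` with bandwidth h."""
    d = (grid[:, None] - samples[None, :]) / h
    # sum one normalised Gaussian bump per sample
    return np.exp(-0.5 * d ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2.0 * np.pi))
```

Clustering then shows up as modes of the estimated density that are unlikely under an unclustered reference model.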

  5. The spatial resolution of a rotating gamma camera tomographic facility.

    PubMed

    Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R

    1983-12-01

An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between K_N/2 and K_N, where K_N is the Nyquist frequency, does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.
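The effect of such filters can be explored by constructing windowed ramp filters in frequency space: a Hann window rolls the response off toward the Nyquist frequency K_N, suppressing the K_N/2 to K_N band that the experiments found unimportant. This is a generic sketch, not the paper's four specific filter functions.

```python
import numpy as np

def ramp_filter(n, cutoff=1.0, window="hann"):
    """Frequency response of a back-projection convolution filter.

    n      : number of frequency samples from 0 up to the Nyquist frequency K_N
    cutoff : fraction of K_N above which the response is zeroed
    window : "hann" applies a cosine roll-off; anything else leaves the
             ideal ramp (Ram-Lak) response.
    """
    k = np.linspace(0.0, 1.0, n)  # frequency in units of K_N
    H = k.copy()                  # ideal ramp |k|
    if window == "hann":
        H *= 0.5 * (1.0 + np.cos(np.pi * k / cutoff))  # -> 0 at k = cutoff
    H[k > cutoff] = 0.0
    return k, H
```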

  6. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    PubMed

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  7. Flexibly imposing periodicity in kernel independent FMM: A multipole-to-local operator approach

    NASA Astrophysics Data System (ADS)

    Yan, Wen; Shelley, Michael

    2018-02-01

An important but missing component in the application of the kernel independent fast multipole method (KIFMM) is the capability for flexibly and efficiently imposing singly, doubly, and triply periodic boundary conditions. In most popular packages such periodicities are imposed with the hierarchical repetition of periodic boxes, which may give an incorrect answer due to the conditional convergence of some kernel sums. Here we present an efficient method to properly impose periodic boundary conditions using a near-far splitting scheme. The near-field contribution is directly calculated with the KIFMM method, while the far-field contribution is calculated with a multipole-to-local (M2L) operator which is independent of the source and target point distribution. The M2L operator is constructed with the far-field portion of the kernel function to generate the far-field contribution with the downward equivalent source points in KIFMM. This method guarantees that the sum of the near-field and far-field contributions converges pointwise to results satisfying periodicity and compatibility conditions. The computational cost of the far-field calculation observes the same O(N) complexity as the FMM and is designed to be small by reusing the data computed by KIFMM for the near-field. The far-field calculations require no additional control parameters and observe the same theoretical error bound as KIFMM. We present accuracy and timing test results for the Laplace kernel in singly periodic domains and the Stokes velocity kernel in doubly and triply periodic domains.

  8. Kernel-PCA data integration with enhanced interpretability

    PubMed Central

    2014-01-01

Background Nowadays, combining the different sources of information to improve the biological knowledge available is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: firstly the right kernel is chosen for each data set; secondly the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
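The kernel PCA step underlying this approach can be sketched compactly: build a Gram matrix, double-center it in feature space, and project onto the leading eigenvectors. A minimal RBF sketch (no out-of-sample extension or multi-source kernel combination, which the paper adds on top):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel (minimal single-dataset sketch)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                        # RBF Gram matrix
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                 # double-centering in feature space
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]    # keep the leading eigenpairs
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.clip(vals, 0.0, None))  # projected samples
```

Data integration then amounts to choosing one kernel per data source and combining the Gram matrices (e.g. a weighted sum) before this eigendecomposition.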

  9. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

We present a novel smoothing approach to non-parametric regression curve fitting. It is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. Our aim is to apply the methodology to smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived from the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  10. Mutual information estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view of sparse datasets. The irregular sampling of many of these time series, however, makes it necessary either to perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series, and contrast our results with the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction.
It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
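A Gaussian-kernel lagged correlation for irregular sampling, in the spirit of the earlier algorithm the authors mention, can be sketched as follows. This is an illustrative reimplementation of the idea, not the authors' exact estimator: every sample pair contributes, weighted by how close its time separation is to the requested lag.

```python
import numpy as np

def kernel_corr(tx, x, ty, y, lag=0.0, h=1.0):
    """Gaussian-kernel cross-correlation at a given lag for irregular sampling.

    Each pair (x_i, y_j) is weighted by exp(-((t_yj - t_xi - lag)^2) / (2 h^2)),
    so no interpolation onto a regular grid is needed.
    """
    x = (x - x.mean()) / x.std()        # standardize both series
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None] - lag
    w = np.exp(-0.5 * (dt / h) ** 2)    # Gaussian weights on time separations
    return (w * np.outer(x, y)).sum() / w.sum()
```

The bandwidth h plays the role the interpolation step would otherwise play: small h uses only near-coincident pairs, large h smooths over wider time offsets.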

  11. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
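The Cimmino feasibility step used above — finding nonnegative source strengths whose summed kernel contributions reach the prescribed fluence at each detector point — can be sketched as a simultaneous-projection iteration. This is a minimal sketch under assumptions: the `kernel` matrix, equal row weights 1/m, and the nonnegativity clamp are illustrative choices, not the authors' implementation.

```python
def cimmino_weights(kernel, prescribed, iterations=500, relax=1.0):
    """Cimmino-style simultaneous projection: seek nonnegative source
    strengths w such that sum_i kernel[j][i] * w[i] approaches the
    prescribed fluence at every detector point j.

    kernel[j][i] -- fluence at detector j per unit strength of source i
                    (hypothetical values, stands in for the light-fluence
                    kernel computed from optical properties)."""
    m, n = len(kernel), len(kernel[0])
    w = [0.0] * n
    for _ in range(iterations):
        update = [0.0] * n
        for j in range(m):
            row = kernel[j]
            norm2 = sum(a * a for a in row)
            if norm2 == 0.0:
                continue
            # projection onto the hyperplane of detector j, weighted 1/m
            residual = prescribed[j] - sum(a * wi for a, wi in zip(row, w))
            for i in range(n):
                update[i] += (residual / norm2) * row[i] / m
        # relaxed step plus nonnegativity clamp on source strengths
        w = [max(0.0, wi + relax * ui) for wi, ui in zip(w, update)]
    return w
```

Because the projections are averaged rather than applied sequentially, the iteration is easily parallelized over detectors, which is one reason Cimmino-type methods suit planning-time constraints.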

  12. Bioactive compounds in cashew nut (Anacardium occidentale L.) kernels: effect of different shelling methods.

    PubMed

    Trox, Jennifer; Vadivel, Vellingiri; Vetter, Walter; Stuetz, Wolfgang; Scherbaum, Veronika; Gola, Ute; Nohr, Donatus; Biesalski, Hans Konrad

    2010-05-12

    In the present study, the effects of various conventional shelling methods (oil-bath roasting, direct steam roasting, drying, and open pan roasting) as well as a novel "Flores" hand-cracking method on the levels of bioactive compounds of cashew nut kernels were investigated. The raw cashew nut kernels were found to possess appreciable levels of certain bioactive compounds such as beta-carotene (9.57 microg/100 g of DM), lutein (30.29 microg/100 g of DM), zeaxanthin (0.56 microg/100 g of DM), alpha-tocopherol (0.29 mg/100 g of DM), gamma-tocopherol (1.10 mg/100 g of DM), thiamin (1.08 mg/100 g of DM), stearic acid (4.96 g/100 g of DM), oleic acid (21.87 g/100 g of DM), and linoleic acid (5.55 g/100 g of DM). All of the conventional shelling methods including oil-bath roasting, steam roasting, drying, and open pan roasting revealed a significant reduction, whereas the Flores hand-cracking method exhibited similar levels of carotenoids, thiamin, and unsaturated fatty acids in cashew nuts when compared to raw unprocessed samples.

  13. Implementation of kernels on the Maestro processor

    NASA Astrophysics Data System (ADS)

    Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.

Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 using 49 tiles.

  14. Design and application of process control charting methodologies to gamma irradiation practices

    NASA Astrophysics Data System (ADS)

    Saylor, M. C.; Connaghan, J. P.; Yeadon, S. C.; Herring, C. M.; Jordan, T. M.

    2002-12-01

The relationship between the contract irradiation facility and the customer has historically been based upon a "PASS/FAIL" approach, with little or no quality metrics used to gauge the control of the irradiation process. Application of process control charts, designed in coordination with mathematical simulation of routine radiation processing, can provide a basis for understanding irradiation events. By using tools that simulate the physical rules associated with the irradiation process, end-users can explore process-related boundaries and the effects of process changes. Consequently, the relationship between contractor and customer can evolve based on the derived knowledge. The resulting level of mutual understanding of the irradiation process and its control benefits both the customer and the contract operation, and provides necessary assurances to regulators. In this article we examine the complementary nature of theoretical (point kernel) and experimental (dosimetric) process evaluation, and the resulting by-product of improved understanding, communication and control generated through the implementation of effective process control charting strategies.

  15. Skin dose from radionuclide contamination on clothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, D.C.; Hussein, E.M.A.; Yuen, P.S.

    1997-06-01

Skin dose due to radionuclide contamination on clothing is calculated by Monte Carlo simulation of electron and photon radiation transport. Contamination due to a hot particle on selected clothing geometries of a cotton garment is simulated. The effect of backscattering in the surrounding air is taken into account. For each combination of source-clothing geometry, the dose distribution function in the skin, including the dose at tissue depths of 7 mg cm{sup -2} and 1,000 mg cm{sup -2}, is calculated by simulating monoenergetic photon and electron sources. Skin dose due to contamination by a radionuclide is then determined by proper weighting of the monoenergetic dose distribution functions. The results are compared with the VARSKIN point-kernel code for some radionuclides, indicating that the latter code tends to underestimate the dose for gamma and high-energy beta sources while it overestimates skin dose for low-energy beta sources. 13 refs., 4 figs., 2 tabs.

  16. Blending of palm oil, palm stearin and palm kernel oil in the preparation of table and pastry margarine.

    PubMed

    Norlida, H M; Md Ali, A R; Muhadhir, I

    1996-01-01

Palm oil (PO; iodine value = 52), palm stearin (POs1; i.v. = 32 and POs2; i.v. = 40) and palm kernel oil (PKO; i.v. = 17) were blended in ternary systems. The blends were then studied for their physical properties such as melting point (m.p.), solid fat content (SFC), and cooling curve. Results showed that palm stearin increased the blends' melting point while palm kernel oil reduced it. To produce table margarine with a melting point below 40 degrees C, POs1 should be added at a level of < or = 16%, while POs2 at a level of < or = 20%. At 10 degrees C, a eutectic interaction occurs between PO and PKO, which reaches its maximum at about a 60:40 blending ratio. Within the eutectic region, to maintain the SFC at 10 degrees C at < or = 50%, POs1 may be added at a level of < or = 7%, while POs2 at a level of < or = 12%. The addition of palm stearin increased the blends' solidification Tmin and Tmax values, while PKO reduced them. Blends which contained a high amount of palm stearin showed melting points and cooling curves quite similar to those of pastry margarine.

  17. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

The mixed convolution kernel alters its properties regionally according to the depicted organ structure, especially for the lung. We therefore compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  18. Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.

    PubMed

    Heuel-Fabianek, Burkhard; Hille, Ralf

    2005-01-01

    During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an interim storage building on-site, which has a concrete shielding at the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at some locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrate the need for a more precise definition of the source. Both the measured and the modelled dose rates verified the fact that the legal limits (<1 mSv a(-1)) are met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations and additionally for benchmarking of computer codes if the discussed critical aspects with respect to the source can be addressed adequately.

  19. Dose Calculations for [131I] Meta-Iodobenzylguanidine-Induced Bystander Effects

    PubMed Central

    Gow, M. D.; Seymour, C. B.; Boyd, M.; Mairs, R. J.; Prestiwch, W. V.; Mothersill, C. E.

    2014-01-01

Targeted radiotherapy is a potentially useful treatment for some cancers and may be potentiated by bystander effects. However, without estimation of absorbed dose, it is difficult to compare the effects with conventional external radiation treatment. Methods: Using the Vynckier-Wambersie dose point kernel, a model for dose rate evaluation was created, allowing for calculation of absorbed dose values to two cell lines transfected with the noradrenaline transporter (NAT) gene and treated with [131I]MIBG. Results: The mean doses required to decrease the surviving fractions of UVW/NAT and EJ138/NAT cells, which received medium from [131I]MIBG-treated cells, to 25-30% were 1.6 and 1.7 Gy respectively. The maximum mean dose rates achieved during [131I]MIBG treatment were 0.09-0.75 Gy/h for UVW/NAT and 0.07-0.78 Gy/h for EJ138/NAT. These were significantly lower than the external beam gamma radiation dose rate of 15 Gy/h. In the case of control lines which were incapable of [131I]MIBG uptake, the mean absorbed doses following radiopharmaceutical treatment were 0.03-0.23 Gy for UVW and 0.03-0.32 Gy for EJ138. Conclusion: [131I]MIBG treatment for ICCM production elicited a bystander dose-response profile similar to that generated by external beam gamma irradiation but with significantly greater cell death. PMID:24659931

  20. 40 CFR 180.438 - Lambda-cyhalothrin and an isomer gamma-cyhalothrin; tolerances for residues.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Corn, pop, grain 0.05 Corn, pop, stover 1.0 Corn, sweet, forage 6.0 Corn, sweet, kernel plus cob with..., seed 1.0 Cattle, fat 3.0 Cattle, meat 0.2 Cattle, meat byproducts 0.2 Corn, field, flour 0.15 Corn, field, forage 6.0 Corn, field, grain 0.05 Corn, field, stover 1.0 Corn, pop, grain 0.05 Corn, pop, grain...

  1. 40 CFR 180.438 - Lambda-cyhalothrin and an isomer gamma-cyhalothrin; tolerances for residues.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Corn, pop, grain 0.05 Corn, pop, stover 1.0 Corn, sweet, forage 6.0 Corn, sweet, kernel plus cob with..., seed 1.0 Cattle, fat 3.0 Cattle, meat 0.2 Cattle, meat byproducts 0.2 Corn, field, flour 0.15 Corn, field, forage 6.0 Corn, field, grain 0.05 Corn, field, stover 1.0 Corn, pop, grain 0.05 Corn, pop, grain...

  2. 40 CFR 180.438 - Lambda-cyhalothrin and an isomer gamma-cyhalothrin; tolerances for residues.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Corn, pop, grain 0.05 Corn, pop, stover 1.0 Corn, sweet, forage 6.0 Corn, sweet, kernel plus cob with..., seed 1.0 Cattle, fat 3.0 Cattle, meat 0.2 Cattle, meat byproducts 0.2 Corn, field, flour 0.15 Corn, field, forage 6.0 Corn, field, grain 0.05 Corn, field, stover 1.0 Corn, pop, grain 0.05 Corn, pop, grain...

  3. Mathematical inference in one point microrheology

    NASA Astrophysics Data System (ADS)

    Hohenegger, Christel; McKinley, Scott

    2016-11-01

Pioneered by the work of Mason and Weitz, one point passive microrheology has been successfully applied to obtaining estimates of the loss and storage modulus of viscoelastic fluids when the mean-square displacement obeys a local power law. Using numerical simulations of a fluctuating viscoelastic fluid model, we study the problem of recovering the mechanical parameters of the fluid's memory kernel from statistics such as mean-square displacements and increment autocorrelation functions. Seeking a better understanding of the influence of the assumptions made in the inversion process, we mathematically quantify the uncertainty in traditional one point microrheology for simulated data and demonstrate that a large family of memory kernels yields the same statistical signature. We consider simulated data obtained both from a full viscoelastic fluid simulation of the unsteady Stokes equations with fluctuations and from a Generalized Langevin Equation for the particle's motion described by the same memory kernel. Drawing on the theory of inverse problems, we propose an alternative method that can be used to recover information about the loss and storage modulus, and discuss its limitations and uncertainties. NSF-DMS 1412998.

  4. Scanning Apollo Flight Films and Reconstructing CSM Trajectories

    NASA Astrophysics Data System (ADS)

    Speyerer, E.; Robinson, M. S.; Grunsfeld, J. M.; Locke, S. D.; White, M.

    2006-12-01

Over thirty years ago, the astronauts of the Apollo program made the journey from the Earth to the Moon and back. To record their historic voyages and collect scientific observations, many thousands of photographs were acquired with handheld and automated cameras. After returning to Earth, these films were developed and stored at the film archive at Johnson Space Center (JSC), where they still reside. Due to the historical significance of the original flight films, typically only duplicate (2nd or 3rd generation) film products are studied and used to make prints. To allow full access to the original flight films for both researchers and the general public, JSC and Arizona State University are scanning and creating an online digital archive. A Leica photogrammetric scanner is being used to ensure geometric and radiometric fidelity. Scanning resolution will preserve the grain of the film. Color frames are being scanned and archived as 48 bit pixels to ensure capture of the full dynamic range of the film (16 bit for BW). The raw scans will consist of 70 Terabytes of data (10,000 BW Hasselblad, 10,000 color Hasselblad, 10,000 metric frames, 4,500 pan frames, and 620 35mm frames; counts are estimates). All the scanned films will be made available for download through a searchable database. Special tools are being developed to locate images based on various search parameters. To geolocate metric and panoramic frames acquired during Apollos 15-17, prototype SPICE kernels are being generated from existing photographic support data by entering state vectors and timestamps from multiple points throughout each orbit into the NAIF toolkit to create a type 9 Spacecraft and Planet Ephemeris Kernel (SPK), a nadir-pointing C-matrix Kernel (CK), and a Spacecraft Clock Kernel (SCLK). These SPICE kernels, in addition to the Instrument Kernel (IK) and Frames Kernel (FK) that are also under development, will be archived along with the scanned images.
From the generated kernels, several IDL programs have been designed to display orbital tracks, produce footprint plots, and create image projections. Using the output from these SPICE-based programs enables accurate geolocating of SIM bay photography as well as providing potential data for lunar gravitational studies.

  5. Fruit position within the canopy affects kernel lipid composition of hazelnuts.

    PubMed

    Pannico, Antonio; Cirillo, Chiara; Giaccone, Matteo; Scognamiglio, Pasquale; Romano, Raffaele; Caporaso, Nicola; Sacchi, Raffaele; Basile, Boris

    2017-11-01

    The aim of this research was to study the variability in kernel composition within the canopy of hazelnut trees. Kernel fresh and dry weight increased linearly with fruit height above the ground. Fat content decreased, while protein and ash content increased, from the bottom to the top layers of the canopy. The level of unsaturation of fatty acids decreased from the bottom to the top of the canopy. Thus, the kernels located in the bottom layers of the canopy appear to be more interesting from a nutritional point of view, but their lipids may be more exposed to oxidation. The content of different phytosterols increased progressively from bottom to top canopy layers. Most of these effects correlated with the pattern in light distribution inside the canopy. The results of this study indicate that fruit position within the canopy is an important factor in determining hazelnut kernel growth and composition. © 2017 Society of Chemical Industry. © 2017 Society of Chemical Industry.

  6. Many Molecular Properties from One Kernel in Chemical Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole

We introduce property-independent kernels for machine learning modeling of arbitrarily many molecular properties. The kernels encode molecular structures for training sets of varying size, as well as similarity measures sufficiently diffuse in chemical space to sample over all training molecules. Provided the corresponding molecular reference properties, they enable the instantaneous generation of ML models which can systematically be improved through the addition of more data. This idea is exemplified for single-kernel-based modeling of internal energy, enthalpy, free energy, heat capacity, polarizability, electronic spread, zero-point vibrational energy, energies of frontier orbitals, HOMO-LUMO gap, and the highest fundamental vibrational wavenumber. Models of these properties are trained and tested using 112 thousand organic molecules of similar size. The resulting models are discussed, as well as the kernels' use for generating and using other property models.

  7. Numerical method for solving the nonlinear four-point boundary value problems

    NASA Astrophysics Data System (ADS)

    Lin, Yingzhen; Lin, Jinnan

    2010-12-01

In this paper, a new reproducing kernel space is skillfully constructed in order to solve a class of nonlinear four-point boundary value problems. The exact solution of the linear problem can be expressed in the form of a series, and the approximate solution of the nonlinear problem is given by an iterative formula. Compared with known investigations, the advantages of our method are that the representation of the exact solution is obtained in a new reproducing kernel Hilbert space and the accuracy of the numerical computation is higher. Meanwhile, we present the convergence theorem, complexity analysis and error estimation. The performance of the new method is illustrated with several numerical examples.

  8. A dose assessment method for arbitrary geometries with virtual reality in the nuclear facilities decommissioning

    NASA Astrophysics Data System (ADS)

    Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu

    2018-03-01

During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which results in frequent changes in the structure and produces many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body and fine modeling of the surface, all in combination with the collision detection of virtual reality technology. Then point kernels are generated by sampling within the approximate model, and once the material and radiometric attributes are inputted, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The effectiveness and accuracy of the proposed method were verified by simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code and experimental measurements.
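The core of the method described above — summing point kernels with an attenuation term and a Geometric-Progression (GP) buildup factor — can be sketched as follows. This is a minimal sketch under assumptions: the GP coefficients passed in are placeholders rather than tabulated material/energy data, and `point_kernel_dose` returns an uncollided-flux-times-buildup quantity, not a calibrated dose rate (no flux-to-dose conversion factor is applied).

```python
import math

def gp_buildup(b, c, a, xk, d, mfp):
    """Geometric-progression (GP) buildup-factor fitting function.
    b, c, a, xk, d are material- and energy-dependent fitting
    coefficients; mfp is the shield thickness in mean free paths."""
    k = (c * mfp**a
         + d * (math.tanh(mfp / xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0)))
    if abs(k - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * mfp
    return 1.0 + (b - 1.0) * (k**mfp - 1.0) / (k - 1.0)

def point_kernel_dose(points, mu, detector):
    """Sum over sampled point kernels: inverse-square geometry times
    exponential attenuation times buildup.

    points   -- list of (strength, (x, y, z)) tuples from sampling the
                approximate model
    mu       -- linear attenuation coefficient (1/cm)
    detector -- (x, y, z) position of the dose point"""
    total = 0.0
    for strength, pos in points:
        r = math.dist(pos, detector)
        mfp = mu * r
        # placeholder GP coefficients, not tabulated values
        B = gp_buildup(1.5, 1.2, -0.05, 14.0, 0.02, mfp)
        total += strength * B * math.exp(-mfp) / (4.0 * math.pi * r * r)
    return total
```

In a real implementation, each sampled kernel would also carry the material path lengths found by ray tracing, and the GP coefficients would be interpolated from tabulated data for the shield material and photon energy.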

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskar,; Kumari, Neeti; Goyal, Neena, E-mail: neenacdri@yahoo.com

Highlights: The study presents the cloning and characterization of the TCP1{gamma} gene from L. donovani. TCP1{gamma} is a subunit of T-complex protein-1 (TCP1), a chaperonin class of protein. LdTCP1{gamma} exhibited differential expression in different stages of promastigotes. LdTCP1{gamma} co-localized with actin, a cytoskeleton protein. The data suggest that this gene may have a role in differentiation/biogenesis. This is the first report on this chaperonin in Leishmania. -- Abstract: The T-complex protein-1 (TCP1) complex, a chaperonin class of protein, ubiquitous in all genera of life, is involved in the intracellular assembly and folding of various proteins. The gamma subunit of the TCP1 complex (TCP1{gamma}) plays a pivotal role in the folding and assembly of cytoskeleton protein(s), either individually or complexed with other subunits. Here, we report for the first time the cloning, characterization and expression of the TCP1{gamma} of Leishmania donovani (LdTCP1{gamma}), the causative agent of Indian kala-azar. Primary sequence analysis of LdTCP1{gamma} revealed the presence of all the characteristic features of TCP1{gamma}. However, leishmanial TCP1{gamma} represents a distinct kinetoplastid group, clustered in a separate branch of the phylogenetic tree. LdTCP1{gamma} exhibited differential expression in different stages of promastigotes. The non-dividing stationary-phase promastigotes exhibited 2.5-fold less expression of LdTCP1{gamma} as compared to rapidly dividing log-phase parasites. The sub-cellular distribution of LdTCP1{gamma} was studied in log-phase promastigotes by employing indirect immunofluorescence microscopy. The protein was present not only in the cytoplasm but was also localized in the nucleus, peri-nuclear region, flagella, flagellar pocket and apical region.
Co-localization of LdTCP1{gamma} with actin suggests that this gene may have a role in maintaining the structural dynamics of the parasite cytoskeleton.

  10. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards

    2013-01-01

Kernel methods have difficulty scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large kernel matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
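For contrast with the O(n log n) approximation, exact l-fold cross-validation for kernel ridge regression — with its O(n^3) linear solve per fold, the cost the circulant approximation avoids — can be sketched as below. The RBF kernel choice, the toy Gaussian-elimination solver, and all names are illustrative, not the paper's implementation.

```python
import math

def rbf_kernel(X, Z, gamma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel between point sets X and Z."""
    return [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))
             for z in Z] for x in X]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lfold_cv_error(X, y, lam=1e-3, folds=5):
    """Exact l-fold CV mean squared error for kernel ridge regression.
    Each fold refits (K + lam*I) alpha = y on the training split, an
    O(n^3) solve -- exactly what the approximate method sidesteps."""
    n = len(X)
    idx = list(range(n))
    err = 0.0
    for f in range(folds):
        test = idx[f::folds]
        train = [i for i in idx if i % folds != f]
        Xtr = [X[i] for i in train]
        K = rbf_kernel(Xtr, Xtr)
        for i in range(len(train)):
            K[i][i] += lam          # ridge regularization on the diagonal
        alpha = solve(K, [y[i] for i in train])
        for t in test:
            k = rbf_kernel([X[t]], Xtr)[0]
            pred = sum(a * kv for a, kv in zip(alpha, k))
            err += (pred - y[t]) ** 2
    return err / n
```

Hyperparameter tuning repeats this whole procedure for every (lam, gamma) candidate, which is what makes the overall tuning cost the dominant problem at scale.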

  11. Gamma scintigraphic evaluation of floating gastroretentive tablets of metformin HCl using a combination of three natural polymers in rabbits

    PubMed Central

    Razavi, Mahboubeh; Karimian, Hamed; Yeong, Chai Hong; Chung, Lip Yong; Nyamathulla, Shaik; Noordin, Mohamed Ibrahim

    2015-01-01

    The present research was aimed at formulating a metformin HCl sustained-release formulation from a combination of polymers, using the wet granulation technique. A total of 16 formulations (F1–F16) were produced using different combinations of the gel-forming polymers: tamarind kernel powder, salep (palmate tubers of Orchis morio), and xanthan. Post-compression studies showed that there were no interactions between the active drug and the polymers. Results of in vitro drug-release studies indicated that the F10 formulation which contained 5 mg of tamarind kernel powder, 33.33 mg of xanthan, and 61.67 mg of salep could sustain a 95% release in 12 hours. The results also showed that F2 had a 55% similarity factor with the commercial formulation (C-ER), and the release kinetics were explained with zero order and Higuchi models. The in vivo study was performed in New Zealand White rabbits by gamma scintigraphy; the F10 formulation was radiolabeled using samarium (III) oxide (153Sm2O3) to trace transit of the tablets in the gastrointestinal tract. The in vivo data supported the retention of F10 formulation in the gastric region for 12 hours. In conclusion, the use of a combination of polymers in this study helped to develop an optimal gastroretentive drug-delivery system with improved bioavailability, swelling, and floating characteristics. PMID:26273196

  12. Electron beam lithographic modeling assisted by artificial intelligence technology

    NASA Astrophysics Data System (ADS)

    Nakayamada, Noriaki; Nishimura, Rieko; Miura, Satoru; Nomura, Haruyuki; Kamikubo, Takashi

    2017-07-01

We propose a new concept of tuning a point-spread function (a "kernel" function) in the modeling of electron beam lithography using a machine learning scheme. Normally in artificial intelligence work, researchers focus on the output of a neural network, such as the success ratio in image recognition or improved production yield, etc. In this work, we put more focus on the weights connecting the nodes in a convolutional neural network, which are naturally the fractions of a point-spread function, and take out those weighted fractions after learning to be utilized as a tuned kernel. Proof-of-concept of the kernel tuning has been demonstrated using the examples of proximity effect correction with a 2-layer network, and charging effect correction with a 3-layer network. This type of new tuning method can give researchers more insight with which to come up with a better model, yet it might be too early to deploy it to production to give better critical dimension (CD) and positional accuracy almost instantly.
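The idea of reading trained convolution weights back out as a tuned point-spread function can be illustrated with a toy 1-D example: fit a single linear convolution layer to exposure/response pairs by gradient descent, then return its learned weights as the kernel. This is a hedged sketch with made-up shapes and a single linear layer, not the authors' lithography model.

```python
def learn_psf(inputs, outputs, width=5, lr=0.1, epochs=1000):
    """Learn a 1-D convolution kernel (point-spread function) from
    input/output signal pairs by plain per-sample gradient descent on
    squared error, then return the learned weights -- a toy stand-in
    for extracting the weights of a trained convolutional layer."""
    w = [0.0] * width
    half = width // 2

    def convolve(x):
        # zero-padded 1-D convolution with the current weights
        n = len(x)
        return [sum(w[k] * x[i + k - half] for k in range(width)
                    if 0 <= i + k - half < n) for i in range(n)]

    for _ in range(epochs):
        for x, y in zip(inputs, outputs):
            pred = convolve(x)
            n = len(x)
            grad = [0.0] * width
            for i in range(n):
                e = pred[i] - y[i]
                for k in range(width):
                    j = i + k - half
                    if 0 <= j < n:
                        grad[k] += e * x[j]
            w = [wk - lr * g / n for wk, g in zip(w, grad)]
    return w
```

Since the layer is linear, the recovered weights are exactly the system's point-spread function when the data were generated by a convolution of the same width; a multi-layer network would instead yield an effective, composed kernel.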

  13. Multiple kernel SVR based on the MRE for remote sensing water depth fusion detection

    NASA Astrophysics Data System (ADS)

    Wang, Jinjin; Ma, Yi; Zhang, Jingyu

    2018-03-01

Remote sensing is an important means of water depth detection in coastal shallow waters and reefs. Support vector regression (SVR) is a machine learning method which is widely used in data regression. In this paper, SVR is applied to remote sensing multispectral bathymetry. Aiming at the problem that the single-kernel SVR method has a large error in shallow-water depth inversion, the mean relative error (MRE) at different water depths is used as a decision fusion factor for the single-kernel SVR methods, and a multi-kernel SVR fusion method based on the MRE is put forward. Taking the North Island of the Xisha Islands in China as the experimental area, comparison experiments with the single-kernel SVR methods and the traditional multi-band bathymetric method are carried out. The results show that: 1) in the range of 0 to 25 meters, the mean absolute error (MAE) of the multi-kernel SVR fusion method is 1.5 m and the MRE is 13.2%; 2) compared to the 4 single-kernel SVR methods, the MRE of the fusion method is reduced by 1.2% (1.9%) to 3.4% (1.8%), and compared to the traditional multi-band method, the MRE is reduced by 1.9%; 3) in the 0-5 m depth section, compared to the single-kernel methods and the multi-band method, the MRE of the fusion method is reduced by 13.5% to 44.4%, and the distribution of points is more concentrated relative to y=x.
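The MRE-based decision fusion can be sketched as inverse-MRE weighting of per-model depth predictions within depth bins: each single-kernel model's MRE is measured per depth interval on calibration data, and predictions are then blended with weights favoring whichever model is most accurate in that interval. This is an illustrative reconstruction (the bin-selection rule and all names are assumptions), not the paper's exact scheme.

```python
def mre(preds, truth):
    """Mean relative error of depth predictions against known depths."""
    return sum(abs(p - t) / abs(t) for p, t in zip(preds, truth)) / len(truth)

def fuse_by_mre(model_preds, calib_preds, calib_truth, bins):
    """Fuse per-model depth predictions with weights inversely
    proportional to each model's MRE in the depth bin a point falls in.

    model_preds -- {name: [predicted depths]} for the points to fuse
    calib_preds -- {name: [predicted depths]} on calibration points
    calib_truth -- known depths of the calibration points
    bins        -- list of (lo, hi) depth intervals"""
    names = list(model_preds)
    # per-bin MRE for each model, from the calibration data
    bin_mre = {name: [] for name in names}
    for lo, hi in bins:
        sel = [i for i, t in enumerate(calib_truth) if lo <= t < hi]
        for name in names:
            p = [calib_preds[name][i] for i in sel]
            t = [calib_truth[i] for i in sel]
            bin_mre[name].append(mre(p, t) if sel else 1.0)
    fused = []
    n_points = len(next(iter(model_preds.values())))
    for i in range(n_points):
        # assumed rule: locate the bin via the mean of the model predictions
        guess = sum(model_preds[m][i] for m in names) / len(names)
        b = next((j for j, (lo, hi) in enumerate(bins) if lo <= guess < hi),
                 len(bins) - 1)
        weights = [1.0 / max(bin_mre[m][b], 1e-6) for m in names]
        total = sum(weights)
        fused.append(sum(wt * model_preds[m][i]
                         for wt, m in zip(weights, names)) / total)
    return fused
```

With models that specialize in different depth ranges, the fused estimate tracks the better model in each range, which mirrors the reported gain in the shallow 0-5 m section.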

  14. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared the accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fits for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
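The LSCV smoothing selection recommended above can be sketched for a 1-D fixed Gaussian kernel: the LSCV score estimates the integrated squared error of the density estimate (up to a constant), and the bandwidth minimizing it over a grid is the LSCV choice. A minimal sketch; the grid-search helper and 1-D restriction are simplifications of the 2-D home-range setting.

```python
import math

def gauss(u, h):
    """Gaussian kernel with bandwidth h evaluated at displacement u."""
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2.0 * math.pi))

def lscv_score(data, h):
    """Least-squares cross-validation score for a 1-D fixed-kernel
    Gaussian KDE: integral of the squared estimate (closed form for
    Gaussian kernels) minus twice the leave-one-out mean density."""
    n = len(data)
    # closed form: integral of f_hat^2 is a double sum of Gaussians
    # with bandwidth h*sqrt(2)
    integral = sum(gauss(x - y, h * math.sqrt(2.0))
                   for x in data for y in data) / n**2
    loo = sum(gauss(x - y, h) for i, x in enumerate(data)
              for j, y in enumerate(data) if i != j) / (n * (n - 1))
    return integral - 2.0 * loo

def lscv_bandwidth(data, grid):
    """Pick the bandwidth on the grid minimizing the LSCV score."""
    return min(grid, key=lambda h: lscv_score(data, h))
```

Severely undersmoothed bandwidths are penalized by the diagonal term of the integral, which grows like 1/(nh); this is the mechanism that lets LSCV reject spiky estimates, although its known high variance is one reason the review above stresses adequate sample sizes.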

  15. Transient and asymptotic behaviour of the binary breakage problem

    NASA Astrophysics Data System (ADS)

    Mantzaris, Nikos V.

    2005-06-01

    The general binary breakage problem with power-law breakage functions and two families of symmetric and asymmetric breakage kernels is studied in this work. A useful transformation leads to an equation that predicts self-similar solutions in its asymptotic limit and offers explicit knowledge of the mean size and particle density at each point in dimensionless time. A novel moving boundary algorithm in the transformed coordinate system is developed, allowing the accurate prediction of the full transient behaviour of the system from the initial condition up to the point where self-similarity is achieved, and beyond if necessary. The numerical algorithm is very rapid and its results are in excellent agreement with known analytical solutions. In the case of the symmetric breakage kernels only unimodal, self-similar number density functions are obtained asymptotically for all parameter values and independent of the initial conditions, while in the case of asymmetric breakage kernels, bimodality appears for high degrees of asymmetry and sharp breakage functions. For symmetric and discrete breakage kernels, self-similarity is not achieved. The solution exhibits sustained oscillations with amplitude that depends on the initial condition and the sharpness of the breakage mechanism, while the period is always fixed and equal to ln 2 with respect to dimensionless time.
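
    The non-self-similar behaviour of discrete symmetric kernels can be illustrated with a toy synchronous halving step: log-sizes stay on a lattice, mass is conserved, and the log mean size drops by exactly ln 2 per event. This is a schematic sketch connecting to the fixed period ln 2, not the paper's moving-boundary algorithm.

```python
import math

def breakage_step(sizes):
    """One synchronous binary breakage event with a discrete symmetric
    kernel: every particle splits into two equal halves."""
    out = []
    for x in sizes:
        out.extend([x / 2.0, x / 2.0])
    return out

sizes = [1.0] * 4       # monodisperse initial condition
mass0 = sum(sizes)
log_mean = []
for _ in range(5):
    sizes = breakage_step(sizes)
    log_mean.append(math.log(sum(sizes) / len(sizes)))
```

    The particle count doubles each event while total mass is conserved, and the log mean size decreases in identical steps of ln 2, so the distribution never relaxes to a self-similar shape.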

  16. Suitability of point kernel dose calculation techniques in brachytherapy treatment planning

    PubMed Central

    Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.

    2010-01-01

    Brachytherapy treatment planning systems (TPS) are necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS should account for the effects of the tissue, applicator, and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point from only the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction, so there is some degree of uncertainty in dose rate estimates under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, BrachyTPS, suited to clinical conditions. BrachyTPS is an interactive point kernel code package developed to perform independent dose rate calculations that take these heterogeneities into account using the two-region buildup factors proposed by Kalos. The primary aim of this study is to validate the developed point kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator, and (iii) the Fletcher-Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around a commercially available shielded vaginal applicator set (Nucletron).
    The percentage deviations of BrachyTPS-computed dose rate values from the MC results are within ±5.5% for the BRIT LDR applicator, vary from 2.6 to 5.1% for the Fletcher Green type LDR applicator, and are up to −4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with previously published results. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement with MC results in the unshielded region (less than 2% deviation) than in the shielded region, where deviations of up to 5% are observed. The present study implies that accurate and fast validation of complicated treatment planning calculations is possible with the point kernel code package. PMID:20589118
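
    A point kernel dose rate calculation of the kind described has the generic form S·B(μr)·e^(−μr)/(4πr²). The sketch below uses a simple linear buildup factor as an illustrative placeholder; the two-region Kalos buildup used by BrachyTPS, and all numbers here, are not reproduced.

```python
import math

def point_kernel_dose_rate(S, mu, r, buildup=lambda mur: 1.0 + mur):
    """Point kernel estimate at distance r (cm) from an isotropic point
    source: S * B(mu*r) * exp(-mu*r) / (4*pi*r^2).  The default linear
    buildup B = 1 + mu*r is illustrative, not a fitted form."""
    mur = mu * r
    return S * buildup(mur) * math.exp(-mur) / (4.0 * math.pi * r * r)
```

    With μr = 6 the linear buildup multiplies the uncollided result by 7, showing why neglecting buildup badly underestimates dose at depth; with μ = 0 the expression reduces to pure inverse-square geometry.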

  17. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriya, S; Sato, M; Tachibana, H

    Purpose: Calculation time is the trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on graphics processing units (GPU). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. For the GPU computation, all of the calculation processes for the points were sent to the GPU and computed with multiple threads. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU relative to that on the CPU and the accuracy of the PDD were compared. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively; the GPU was 4800 times faster than the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in calculation time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points; thus, it would be more practical for the kernel to use a coarse spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.
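
    The TERMA/KERMA convolution step can be illustrated in one dimension: dose is the superposition of TERMA weighted by a normalized deposition kernel. This is a schematic 1-D analogue of the 3-D calculation, not the authors' GPU code; the kernel weights and grid are invented.

```python
def superpose(terma, kernel):
    """1-D analogue of the convolution/superposition dose step:
    dose[i] = sum_j terma[j] * kernel[i - j].
    `kernel` maps integer voxel offsets to energy-deposition fractions."""
    dose = [0.0] * len(terma)
    for j, t in enumerate(terma):
        if t == 0.0:
            continue                      # skip empty voxels
        for off, w in kernel.items():
            i = j + off
            if 0 <= i < len(dose):
                dose[i] += t * w
    return dose

kernel = {-1: 0.25, 0: 0.5, 1: 0.25}      # normalized deposition kernel
terma = [0.0] * 10
terma[5] = 2.0                            # energy released in one voxel
dose = superpose(terma, kernel)
```

    Because the kernel sums to one, the total deposited energy equals the total TERMA, the conservation property a correct convolution step must preserve regardless of CPU or GPU execution.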

  18. Coronary Stent Artifact Reduction with an Edge-Enhancing Reconstruction Kernel - A Prospective Cross-Sectional Study with 256-Slice CT.

    PubMed

    Tan, Stéphanie; Soulez, Gilles; Diez Martinez, Patricia; Larrivée, Sandra; Stevens, Louis-Mathieu; Goussard, Yves; Mansour, Samer; Chartrand-Lefebvre, Carl

    2016-01-01

    Metallic artifacts can result in an artificial thickening of the coronary stent wall which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both orthogonal and circumference methods, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal to noise (SNR) and contrast to noise (CNR) ratios), as well as visually using a 5-point Likert scale. Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, either with the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p<0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, both with the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p<0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p<0.001). In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality scores than the standard kernel.

  19. An Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.

    NASA Astrophysics Data System (ADS)

    Ho, Chi Ming

    1995-01-01

    Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of obtaining a better understanding of how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems, and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Because different energy transfer mechanisms are involved in the formation of a laser spark than in that of an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures depend primarily on the spark characteristics, but the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate.
By comparing the growth rates of the appropriate mixtures, the positive and negative effects of preferential diffusion and flame stretch on the developing flame are clearly demonstrated.
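
    The point blast solution referred to above follows the Taylor-Sedov similarity law R(t) = ξ(Et²/ρ)^(1/5). A small sketch, with the order-one constant ξ set to 1 and arbitrary illustrative inputs:

```python
def sedov_radius(E, rho, t, xi=1.0):
    """Taylor-Sedov point-blast radius R(t) = xi * (E * t**2 / rho)**(1/5).
    E: deposited energy, rho: ambient density, xi: order-one constant
    depending on the adiabatic index (taken as 1.0 for illustration)."""
    return xi * (E * t * t / rho) ** 0.2

r1 = sedov_radius(1.0, 1.2, 1e-6)   # radius at t
r2 = sedov_radius(1.0, 1.2, 2e-6)   # radius at 2t
```

    Doubling the elapsed time grows the blast radius by exactly 2^(2/5), the scaling signature used to identify point-blast behaviour in shadowgraph records.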

  20. Observation of a 3D Magnetic Null Point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, P.; Falco, M.; Guglielmino, S. L.

    2017-03-10

    We describe high-resolution observations of a GOES B-class flare characterized by a circular ribbon at the chromospheric level, corresponding to the network at the photospheric level. We interpret the flare as a consequence of a magnetic reconnection event that occurred at a three-dimensional (3D) coronal null point located above the supergranular cell. The potential field extrapolation of the photospheric magnetic field indicates that the circular chromospheric ribbon is cospatial with the fan footpoints, while the ribbons of the inner and outer spines look like compact kernels. We found new interesting observational aspects that need to be explained by models: (1) a loop corresponding to the outer spine became brighter a few minutes before the onset of the flare; (2) the circular ribbon was formed by several adjacent compact kernels characterized by a size of 1″–2″; (3) the kernels with a stronger intensity emission were located at the outer footpoint of the darker filaments, departing radially from the center of the supergranular cell; (4) these kernels started to brighten sequentially in clockwise direction; and (5) the site of the 3D null point and the shape of the outer spine were detected by RHESSI in the low-energy channel between 6.0 and 12.0 keV. Taking into account all these features and the length scales of the magnetic systems involved in the event, we argue that the low intensity of the flare may be ascribed to the low amount of magnetic flux and to its symmetric configuration.

  1. Introducing etch kernels for efficient pattern sampling and etch bias prediction

    NASA Astrophysics Data System (ADS)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2018-01-01

    Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and less stable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to obtaining a robust etch model. This work proposes to define a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method, applied to contact and line/space layers, shows an improvement in etch bias prediction accuracy over a standard etch model. This work emphasizes the importance of the etch kernel definition for characterizing and predicting complex etch effects.
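
    A curvature-type contour feature can be illustrated with the Menger curvature of three neighbouring contour points, i.e. the inverse radius of their circumscribed circle. This is an illustrative stand-in for a local geometric kernel, not the authors' kernel definition.

```python
import math

def menger_curvature(p, q, r):
    """Curvature (1/R) of the circle through three contour points:
    Menger's formula 4*Area / (|pq| * |qr| * |rp|)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # twice the signed triangle area, taken as absolute value
    area2 = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    denom = dist(p, q) * dist(q, r) * dist(r, p)
    return 2.0 * area2 / denom if denom else 0.0
```

    Evaluated on three points of a unit circle the formula returns exactly 1, and on collinear points it returns 0, so it behaves as a local curvature probe along a sampled resist contour.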

  2. Proteome analysis of the almond kernel (Prunus dulcis).

    PubMed

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, most of them experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are involved mainly in metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); that the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%), and structural molecule activity (11.9%); and that the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  3. Assessment of the microbiological safety of edible roasted nut kernels on retail sale in England, with a focus on Salmonella.

    PubMed

    Little, C L; Jemmott, W; Surman-Lee, S; Hucklesby, L; de Pinnal, E

    2009-04-01

    There is little published information on the prevalence of Salmonella in edible nut kernels. A study in early 2008 of edible roasted nut kernels on retail sale in England was undertaken to assess the microbiological safety of this product. A total of 727 nut kernel samples of different varieties were examined. Overall, Salmonella and Escherichia coli were detected in 0.2% and 0.4% of edible roasted nut kernels, respectively. Of the nut varieties examined, Salmonella Havana was detected in 1 sample (4.0%) of pistachio nuts, indicating a risk to health. The United Kingdom Food Standards Agency was immediately informed, and full investigations were undertaken. Further examination established the contamination to be associated with the pistachio kernels and not the partly opened shells. Salmonella was not detected in the other varieties tested (almonds, Brazils, cashews, hazelnuts, macadamias, peanuts, pecans, pine nuts, and walnuts). E. coli was found at low levels (range of 3.6 to 4/g) in walnuts (1.4%), almonds (1.2%), and Brazils (0.5%). The presence of Salmonella is unacceptable in edible nut kernels. Prevention of microbial contamination in these products lies in the application of good agricultural, manufacturing, and storage practices together with a hazard analysis and critical control points system that encompasses all stages of production, processing, and distribution.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29 W to 42 W.
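
    The reported bandwidth figure can be checked with simple arithmetic: a float32 vector add moves three 4-byte words per element (two reads, one write), and the 76% figure implies the board's peak memory bandwidth. A small sketch of that bookkeeping:

```python
def vector_add_bandwidth_gbs(n_elements, seconds):
    """Effective memory bandwidth (GB/s) of c[i] = a[i] + b[i] on float32
    data: two 4-byte reads plus one 4-byte write per element."""
    bytes_moved = 3 * 4 * n_elements
    return bytes_moved / seconds / 1e9

# Peak bandwidth implied by the report's own numbers (25.8 GB/s ~ 76% of peak)
peak_gbs = 25.8 / 0.76
```

    The implied peak is roughly 34 GB/s; any kernel-level optimization (compute unit duplication, vectorization) can only close the gap to this memory-bound ceiling.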

  5. Hyperspectral imaging for detection of black tip damage in wheat kernels

    NASA Astrophysics Data System (ADS)

    Delwiche, Stephen R.; Yang, I.-Chang; Kim, Moon S.

    2009-05-01

    A feasibility study was conducted on the use of hyperspectral imaging to differentiate sound wheat kernels from those with the fungal condition called black point or black tip. Individual kernels of hard red spring wheat were loaded in indented slots on a blackened machined aluminum plate. Damage conditions, determined by official (USDA) inspection, were either sound (no damage) or damaged by the black tip condition alone. Hyperspectral imaging was separately performed under modes of reflectance from white light illumination and fluorescence from UV light (~380 nm) illumination. By cursory inspection of wavelength images, one fluorescence wavelength (531 nm) was selected for image processing and classification analysis. Results indicated that with this one wavelength alone, classification accuracy can be as high as 95% when kernels are oriented with their dorsal side toward the camera. It is suggested that improvement in classification can be made through the inclusion of multiple wavelength images.
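
    Single-wavelength classification of the kind described reduces to a threshold on the 531 nm fluorescence intensity. A toy sketch with invented, well-separated intensity values (not measured data); the midpoint threshold rule is an assumption:

```python
# Toy 531 nm fluorescence intensities (arbitrary units, invented)
sound   = [0.82, 0.79, 0.85, 0.80, 0.83]
damaged = [0.55, 0.60, 0.52, 0.58, 0.61]

# Threshold halfway between the two class means
threshold = (sum(sound) / len(sound) + sum(damaged) / len(damaged)) / 2.0

def classify(intensity):
    """Label a kernel from its single-band intensity."""
    return "sound" if intensity > threshold else "black_tip"

correct = (sum(classify(v) == "sound" for v in sound)
           + sum(classify(v) == "black_tip" for v in damaged))
accuracy = correct / (len(sound) + len(damaged))
```

    With separable toy values the threshold classifies every kernel correctly; the reported ~95% accuracy reflects the partial overlap of real intensity distributions.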

  6. Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography

    NASA Astrophysics Data System (ADS)

    Menke, W. H.

    2017-12-01

    We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter, and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolution kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
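
    The cross-convolution measure itself is easy to state: for observed components (u_obs, v_obs) and reference components (u_syn, v_syn), the misfit ||u_obs * v_syn − v_obs * u_syn||² vanishes whenever the two pairs agree up to a common source wavelet, because convolution commutes. A minimal sketch with invented traces:

```python
def conv(a, b):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def cross_conv_misfit(u_obs, v_obs, u_syn, v_syn):
    """Cross-convolution measure: squared norm of
    u_obs * v_syn - v_obs * u_syn (source-wavelet independent)."""
    d = [x - y for x, y in zip(conv(u_obs, v_syn), conv(v_obs, u_syn))]
    return sum(x * x for x in d)

u = [0.0, 1.0, 0.5, -0.2]   # toy observed components
v = [0.3, -0.4, 1.0, 0.1]
```

    Scaling both synthetic components by a common factor (a crude stand-in for an unknown source wavelet) leaves the misfit at zero, which is exactly the source-elimination property the title advertises.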

  7. Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions

    NASA Astrophysics Data System (ADS)

    Factor, Samuel M.; Kraus, Adam L.

    2017-01-01

    Direct detection of close-in companions (exoplanets or binary systems) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast. Non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, though the mask discards ˜95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM to a diffraction limited image utilizing the full aperture. Instead of non-redundant closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I have developed my own faint companion detection pipeline which utilizes a Bayesian analysis of kernel-phases. I have used this pipeline to search for new companions in archival images from HST/NICMOS in order to constrain planet and binary formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required by NRM. This technique can easily be applied to archival data as no mask is needed and will thus make the detection of close-in companions cheap and simple as no additional observations are needed. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time.
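
    The principle behind kernel phases can be shown with the simplest case, the three-aperture closure phase: instrumental pupil-plane phase errors cancel in the combination while object phase information survives. Toy numbers, not NICMOS data:

```python
# Object phases on the three baselines (i, j) - invented values
obj_phase = {(0, 1): 0.2, (1, 2): -0.5, (0, 2): 0.4}

def baseline_phase(i, j, pupil):
    """Measured baseline phase = object phase + pupil phase difference."""
    return obj_phase[(i, j)] + pupil[j] - pupil[i]

def closure(pupil):
    """Closure phase over the (0,1), (1,2), (0,2) baseline triangle."""
    return (baseline_phase(0, 1, pupil) + baseline_phase(1, 2, pupil)
            - baseline_phase(0, 2, pupil))

c1 = closure([0.0, 0.0, 0.0])    # perfect optics
c2 = closure([0.0, 0.7, -0.3])   # arbitrary instrumental phase errors
```

    The pupil terms telescope away in the sum, so both calls return the same object closure phase; full kernel-phase analysis generalizes this cancellation to all combinations in the null space of the pupil-to-baseline phase transfer matrix.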

  8. Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions

    NASA Astrophysics Data System (ADS)

    Factor, Samuel

    2016-10-01

    Direct detection of close-in companions (binary systems or exoplanets) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast. While non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, the mask discards 95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM but utilizes the full aperture. Instead of closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I propose to develop my own faint companion detection pipeline which utilizes an MCMC analysis of kernel-phases. I will search for new companions in archival images from NIC1 and ACS/HRC in order to constrain binary and planet formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required by NRM. This technique can easily be applied to archival data as no mask is needed and will thus make the detection of close-in companions cheap and simple as no additional observations are needed. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time.

  9. Calculation of plasma dielectric response in inhomogeneous magnetic field near electron cyclotron resonance

    NASA Astrophysics Data System (ADS)

    Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei

    2014-10-01

    Full-wave 3-D modeling of RF fields in hot magnetized nonuniform plasma requires calculation of the nonlocal conductivity kernel describing the dielectric response of such a plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point, which significantly simplifies numerical solution of the full-wave 3-D problem. Preliminary results of a feasibility analysis of numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along the unperturbed particle orbits. Particle orbits in the nonuniform equilibrium magnetic field are calculated numerically by one of the Runge-Kutta methods. The RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals over the particle's initial velocity and time are then calculated numerically. Different optimization approaches to the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.
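
    The orbit-integration step mentioned above can be sketched with a classical RK4 integrator for a particle gyrating in a magnetic field, checking that the speed (and hence kinetic energy) is conserved over a full gyration. This is a simplified 2-D sketch with a uniform field, not the nonuniform-field orbits of the actual calculation.

```python
import math

def rk4_step(state, dt, omega):
    """One classical RK4 step for 2-D gyromotion: the velocity rotates at
    the cyclotron frequency omega while the position integrates it.
    state = (x, y, vx, vy)."""
    def deriv(s):
        x, y, vx, vy = s
        return (vx, vy, omega * vy, -omega * vx)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

omega = 1.0
state = (1.0, 0.0, 0.0, 1.0)          # start on the gyro-circle
dt = 2 * math.pi / 1000               # 1000 steps per gyroperiod
for _ in range(1000):
    state = rk4_step(state, dt, omega)
```

    After exactly one gyroperiod the particle returns to its starting point to within the RK4 truncation error, the kind of accuracy check needed before trusting orbit integrals of the linearized Vlasov equation.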

  10. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.
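
    The general idea of correcting for a logarithmic singularity can be illustrated by subtracting the singular part analytically before applying an ordinary quadrature rule, using the exact value of the integral of ln x over [0, 1], which is −1. This is a generic singularity-subtraction sketch, not Multhopp's kernel function scheme.

```python
import math

def integrate_with_log_correction(g, n=1000):
    """Approximate the integral of g(x) * ln(x) over [0, 1] by splitting
    off the singularity analytically:
      int (g(x) - g(0)) * ln(x) dx  +  g(0) * int ln(x) dx,
    where int_0^1 ln(x) dx = -1 is handled exactly and the now-regular
    remainder is integrated with the midpoint rule."""
    h = 1.0 / n
    g0 = g(0.0)
    acc = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        acc += (g(x) - g0) * math.log(x) * h
    return acc - g0
```

    For g ≡ 1 the regular remainder is identically zero, so the result is exact; for g(x) = x the scheme converges quickly to the analytic value −1/4, whereas naive quadrature of the raw integrand loses accuracy near the singular endpoint.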

  11. Evaluation of the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels using particle and heavy ion transport code system: PHITS.

    PubMed

    Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko

    2017-10-01

    We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide therapy (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, in both water and compact bone, were in good agreement with those in the literature obtained using other MC codes. PHITS provides reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET

    NASA Astrophysics Data System (ADS)

    Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan

    2013-06-01

    TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing resolution kernel. Furthermore, although the timing offset errors are often pre-corrected, this correction becomes less accurate with time since, especially in older scanners, the timing offsets are often calibrated only during installation, as the procedure is time-consuming. In this study, we investigate and compare the effects on image quality of local mismatch of timing resolution, when a uniform kernel is applied to systems with local variations in timing resolution, and of uncorrected time offset errors. A ring-like phantom was acquired on a Philips Gemini TF scanner and timing histograms were obtained from coincidence events to measure timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations; timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. CRC vs. noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction.
On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction in CRC of up to 15% and an increase in the measured timing resolution kernel of up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate for uncorrected offset errors.
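
    The link between timing errors and spatial errors underlying this discussion is direct: an arrival-time difference Δt places the event at cΔt/2 along the LOR, so an uncorrected 350 ps offset displaces the Gaussian TOF kernel by about 5 cm. A small sketch of that conversion:

```python
import math

C = 299792458.0  # speed of light, m/s

def tof_position_m(delta_t_s):
    """Most likely annihilation point along the LOR, measured from the
    LOR midpoint, given the photon arrival-time difference."""
    return C * delta_t_s / 2.0

def tof_kernel_sigma_m(timing_fwhm_s):
    """Spatial sigma of the Gaussian TOF kernel for a given timing FWHM."""
    fwhm_m = C * timing_fwhm_s / 2.0
    return fwhm_m / (2.0 * math.sqrt(2.0 * math.log(2.0)))

shift = tof_position_m(350e-12)       # displacement from a 350 ps offset
sigma = tof_kernel_sigma_m(500e-12)   # kernel sigma at 500 ps FWHM
```

    At 500 ps FWHM the spatial kernel sigma is about 3.2 cm, so a 350 ps offset (≈5.2 cm shift) exceeds the kernel width itself, consistent with offset errors degrading CRC more than a mismatched kernel width does.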

  13. Temporal Effects on Internal Fluorescence Emissions Associated with Aflatoxin Contamination from Corn Kernel Cross-Sections Inoculated with Toxigenic and Atoxigenic Aspergillus flavus.

    PubMed

    Hruska, Zuzana; Yao, Haibo; Kincaid, Russell; Brown, Robert L; Bhatnagar, Deepak; Cleveland, Thomas E

    2017-01-01

    Non-invasive, easy to use and cost-effective technology offers a valuable alternative for rapid detection of carcinogenic fungal metabolites, namely aflatoxins, in commodities. One relatively recent development in this area is the use of spectral technology. Fluorescence hyperspectral imaging, in particular, offers a potential rapid and non-invasive method for detecting the presence of aflatoxins in maize infected with the toxigenic fungus Aspergillus flavus. Earlier studies have shown that whole maize kernels contaminated with aflatoxins exhibit different spectral signatures from uncontaminated kernels based on the external fluorescence emission of the whole kernels. Here, the effect of time on the internal fluorescence spectral emissions from cross-sections of kernels infected with toxigenic and atoxigenic A. flavus was examined in order to elucidate the interaction between the fluorescence signals emitted by some aflatoxin-contaminated maize kernels and the fungal invasion resulting in the production of aflatoxins. First, the difference in internal fluorescence emissions between cross-sections of kernels incubated in toxigenic and atoxigenic inoculum was assessed. Kernels were inoculated with each strain for 5, 7, and 9 days before cross-sectioning and imaging. There were 270 kernels (540 halves) imaged, including controls. Second, in a different set of kernels (15 kernels/group; 135 total), the germ of each kernel was separated from the endosperm to determine the major areas of aflatoxin accumulation and progression over nine growth days. Kernels were inoculated with toxigenic and atoxigenic fungal strains for 5, 7, and 9 days before the endosperm and germ were separated, followed by fluorescence hyperspectral imaging and chemical aflatoxin determination. 
A marked difference in fluorescence intensity was shown between the toxigenic and atoxigenic strains on day nine post-inoculation, which may be a useful indicator of the location of aflatoxin contamination. This finding suggests that both the fluorescence peak shift and intensity, as well as timing, may be essential in distinguishing toxigenic from atoxigenic fungi based on spectral features. Results also reveal a possible preferential difference in the internal colonization of maize kernels between the toxigenic and atoxigenic strains of A. flavus, suggesting a potential window for differentiating the strains based on fluorescence spectra at specific time points.

  14. Temporal Effects on Internal Fluorescence Emissions Associated with Aflatoxin Contamination from Corn Kernel Cross-Sections Inoculated with Toxigenic and Atoxigenic Aspergillus flavus

    PubMed Central

    Hruska, Zuzana; Yao, Haibo; Kincaid, Russell; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2017-01-01

    Non-invasive, easy-to-use, and cost-effective technology offers a valuable alternative for rapid detection of carcinogenic fungal metabolites, namely aflatoxins, in commodities. One relatively recent development in this area is the use of spectral technology. Fluorescence hyperspectral imaging, in particular, offers a potentially rapid and non-invasive method for detecting the presence of aflatoxins in maize infected with the toxigenic fungus Aspergillus flavus. Earlier studies have shown that whole maize kernels contaminated with aflatoxins exhibit different spectral signatures from uncontaminated kernels based on the external fluorescence emission of the whole kernels. Here, the effect of time on the internal fluorescence spectral emissions from cross-sections of kernels infected with toxigenic and atoxigenic A. flavus was examined in order to elucidate the interaction between the fluorescence signals emitted by some aflatoxin-contaminated maize kernels and the fungal invasion resulting in the production of aflatoxins. First, the difference in internal fluorescence emissions between cross-sections of kernels incubated in toxigenic and atoxigenic inoculum was assessed. Kernels were inoculated with each strain for 5, 7, and 9 days before cross-sectioning and imaging. There were 270 kernels (540 halves) imaged, including controls. Second, in a different set of kernels (15 kernels/group; 135 total), the germ of each kernel was separated from the endosperm to determine the major areas of aflatoxin accumulation and progression over nine growth days. Kernels were inoculated with toxigenic and atoxigenic fungal strains for 5, 7, and 9 days before the endosperm and germ were separated, followed by fluorescence hyperspectral imaging and chemical aflatoxin determination.
A marked difference in fluorescence intensity was shown between the toxigenic and atoxigenic strains on day nine post-inoculation, which may be a useful indicator of the location of aflatoxin contamination. This finding suggests that both the fluorescence peak shift and intensity, as well as timing, may be essential in distinguishing toxigenic from atoxigenic fungi based on spectral features. Results also reveal a possible preferential difference in the internal colonization of maize kernels between the toxigenic and atoxigenic strains of A. flavus, suggesting a potential window for differentiating the strains based on fluorescence spectra at specific time points. PMID:28966606

  15. Generalized and efficient algorithm for computing multipole energies and gradients based on Cartesian tensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Dejun, E-mail: dejun.lin@gmail.com

    2015-09-21

    Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function: one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green’s function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4–16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming.
The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A software library based on this algorithm has been implemented in C++11 and has been released.
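The kernel invariance described in this record can be illustrated in a few lines: the charge-dipole term of an EMP interaction is just a directional derivative of the scalar kernel f(r), and higher multipole orders stack further gradients onto f in the same way. The sketch below is a toy example of my own (not the paper's library), checking the analytic gradient of f(r) = 1/r against central finite differences.

```python
import numpy as np

def f(r_vec):
    """Scalar kernel f(r) = 1/r (the Coulomb Green's function)."""
    return 1.0 / np.linalg.norm(r_vec)

def grad_f(r_vec):
    """Analytic gradient of 1/r: -r_vec / r^3."""
    r = np.linalg.norm(r_vec)
    return -r_vec / r**3

def charge_dipole_energy(q, mu, r_vec):
    """Charge-dipole term as a directional derivative of the kernel:
    U = q * (mu . grad) f(r). The same structure holds for any kernel
    f(r), which is the invariance the paper exploits."""
    return q * (mu @ grad_f(r_vec))

# Verify the analytic gradient against central finite differences.
q, mu = 1.3, np.array([0.2, -0.5, 0.1])
r_vec = np.array([1.0, 2.0, -0.5])
h = 1e-6
num = q * sum(mu[i] * (f(r_vec + h * e) - f(r_vec - h * e)) / (2 * h)
              for i, e in enumerate(np.eye(3)))
assert abs(charge_dipole_energy(q, mu, r_vec) - num) < 1e-8
```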

  16. 3D local feature BKD to extract road information from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang

    2017-08-01

    Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously degraded by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
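A minimal, hypothetical sketch of the two BKD components named above, Gaussian kernel density estimation followed by binarization; the grid size, bandwidth, and function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def binary_kernel_descriptor(points, grid_n=4, sigma=0.3):
    """Toy BKD-like feature: Gaussian kernel density estimated on a regular
    grid spanning the point neighbourhood, then binarized against the mean
    density to give a compact bit string (grid_n**3 bits)."""
    pts = np.asarray(points, float)
    lo, hi = pts.min(0), pts.max(0)
    axes = [np.linspace(lo[d], hi[d], grid_n) for d in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    grid = np.stack([gx, gy, gz], -1).reshape(-1, 3)
    # Gaussian KDE: sum of kernels centred on the neighbourhood points.
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * sigma**2)).sum(1)
    return (density > density.mean()).astype(np.uint8)  # binarization step

rng = np.random.default_rng(0)
desc = binary_kernel_descriptor(rng.normal(size=(50, 3)))
assert desc.shape == (64,)
```

In the paper the resulting descriptors feed a random forest classifier; any binary-feature classifier could stand in here.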

  17. Flood susceptibility mapping using a novel ensemble weights-of-evidence and support vector machine models in GIS

    NASA Astrophysics Data System (ADS)

    Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah

    2014-05-01

    Flood is one of the most devastating natural disasters that occur frequently in Terengganu, Malaysia. Recently, ensemble-based techniques have become extremely popular in flood modeling. In this paper, the weights-of-evidence (WoE) model was utilized first, to assess the impact of the classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). Then, these factors were reclassified using the acquired weights and entered into the support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weaknesses of WoE can be mitigated and the performance of the SVM enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four kernel types of SVM (linear kernel (LN), polynomial kernel (PL), radial basis function kernel (RBF), and sigmoid kernel (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE and SVM method was tested using the area under the curve (AUC), which measured the prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods. The best results were obtained from the RBF kernel when compared with the other kernel types. The success rate and prediction rate for the ensemble WoE and RBF-SVM method were 96.48% and 95.67%, respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.
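The four SVM kernel types compared in this study can be written down directly as Gram-matrix functions. The sketch below is a generic illustration with made-up hyperparameter values, not the study's configuration.

```python
import numpy as np

def gram(X, Y, kind, gamma=0.5, degree=3, coef0=1.0):
    """Gram matrix for the four kernel types: linear, polynomial, RBF,
    and sigmoid. Hyperparameters here are illustrative only."""
    dot = X @ Y.T
    if kind == "linear":
        return dot
    if kind == "poly":
        return (gamma * dot + coef0) ** degree
    if kind == "rbf":
        d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * dot
        return np.exp(-gamma * d2)
    if kind == "sigmoid":
        return np.tanh(gamma * dot + coef0)
    raise ValueError(kind)

X = np.random.default_rng(1).normal(size=(5, 3))
for k in ("linear", "poly", "rbf", "sigmoid"):
    G = gram(X, X, k)
    assert G.shape == (5, 5) and np.allclose(G, G.T)  # symmetric Gram matrix
assert np.allclose(np.diag(gram(X, X, "rbf")), 1.0)   # RBF self-similarity = 1
```

In a WoE+SVM pipeline, the reclassified conditioning-factor values would form the rows of X before training.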

  18. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets, so the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation-based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter, and the objective programming problem can then be converted into an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure from different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
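The kernel target alignment criterion mentioned above has a standard closed form. The following sketch (assumed details: RBF Gram matrix, balanced ±1 labels) shows that a kernel whose geometry matches the class structure scores higher than a degenerate one.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Kernel-target alignment between Gram matrix K and labels y in {-1,+1}:
    A = <K, y y^T>_F / (||K||_F * ||y y^T||_F). Higher alignment means the
    kernel geometry better matches the class structure."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def rbf_gram(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Two separated blobs: a moderate gamma should align far better than a tiny
# one (tiny gamma makes every pair look identical, erasing class structure).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
a_good = kernel_target_alignment(rbf_gram(X, 0.5), y)
a_flat = kernel_target_alignment(rbf_gram(X, 1e-6), y)
assert a_good > a_flat
```

The paper's contribution is the global optimization of this criterion over the Gaussian bandwidth; the sketch only evaluates the criterion itself.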

  19. Spatial patterns of aflatoxin levels in relation to ear-feeding insect damage in pre-harvest corn.

    PubMed

    Ni, Xinzhi; Wilson, Jeffrey P; Buntin, G David; Guo, Baozhu; Krakowsky, Matthew D; Lee, R Dewey; Cottrell, Ted E; Scully, Brian T; Huffaker, Alisa; Schmelz, Eric A

    2011-07-01

    Key impediments to increased corn yield and quality in the southeastern US coastal plain region are damage by ear-feeding insects and aflatoxin contamination caused by infection with Aspergillus flavus. Key ear-feeding insects are the corn earworm, Helicoverpa zea; fall armyworm, Spodoptera frugiperda; maize weevil, Sitophilus zeamais; and brown stink bug, Euschistus servus. In 2006 and 2007, aflatoxin contamination and insect damage were sampled before harvest in three 0.4-hectare corn fields using a grid sampling method. The feeding damage by each of the ear/kernel-feeding insects (i.e., corn earworm/fall armyworm damage on the silk/cob, and discoloration of corn kernels by stink bugs) and the maize weevil population were assessed at each grid point with five ears. The spatial distribution pattern of aflatoxin contamination was also assessed using the corn samples collected at each sampling point. Aflatoxin level was correlated with the number of maize weevils and stink bug-discolored kernels, but not closely correlated with either husk coverage or corn earworm damage. Contour maps of the maize weevil populations, stink bug-damaged kernels, and aflatoxin levels exhibited an aggregated distribution pattern with a strong edge effect on all three parameters. The separation of silk- and cob-feeding insects from kernel-feeding insects, as well as of chewing insects (i.e., the corn earworm and maize weevil) from piercing-sucking insects (i.e., the stink bugs), and their damage in relation to aflatoxin accumulation, is economically important. Both the theoretical and applied ramifications of this study are discussed by proposing a hypothesis on the underlying mechanisms of the aggregated distribution patterns and strong edge effect of insect damage and aflatoxin contamination, and by considering possible management tactics for reducing aflatoxin through proper management of kernel-feeding insects. Future directions for basic and applied research related to aflatoxin contamination are also discussed.

  20. Spatial Patterns of Aflatoxin Levels in Relation to Ear-Feeding Insect Damage in Pre-Harvest Corn

    PubMed Central

    Ni, Xinzhi; Wilson, Jeffrey P.; Buntin, G. David; Guo, Baozhu; Krakowsky, Matthew D.; Lee, R. Dewey; Cottrell, Ted E.; Scully, Brian T.; Huffaker, Alisa; Schmelz, Eric A.

    2011-01-01

    Key impediments to increased corn yield and quality in the southeastern US coastal plain region are damage by ear-feeding insects and aflatoxin contamination caused by infection with Aspergillus flavus. Key ear-feeding insects are the corn earworm, Helicoverpa zea; fall armyworm, Spodoptera frugiperda; maize weevil, Sitophilus zeamais; and brown stink bug, Euschistus servus. In 2006 and 2007, aflatoxin contamination and insect damage were sampled before harvest in three 0.4-hectare corn fields using a grid sampling method. The feeding damage by each of the ear/kernel-feeding insects (i.e., corn earworm/fall armyworm damage on the silk/cob, and discoloration of corn kernels by stink bugs) and the maize weevil population were assessed at each grid point with five ears. The spatial distribution pattern of aflatoxin contamination was also assessed using the corn samples collected at each sampling point. Aflatoxin level was correlated with the number of maize weevils and stink bug-discolored kernels, but not closely correlated with either husk coverage or corn earworm damage. Contour maps of the maize weevil populations, stink bug-damaged kernels, and aflatoxin levels exhibited an aggregated distribution pattern with a strong edge effect on all three parameters. The separation of silk- and cob-feeding insects from kernel-feeding insects, as well as of chewing insects (i.e., the corn earworm and maize weevil) from piercing-sucking insects (i.e., the stink bugs), and their damage in relation to aflatoxin accumulation, is economically important. Both the theoretical and applied ramifications of this study are discussed by proposing a hypothesis on the underlying mechanisms of the aggregated distribution patterns and strong edge effect of insect damage and aflatoxin contamination, and by considering possible management tactics for reducing aflatoxin through proper management of kernel-feeding insects. Future directions for basic and applied research related to aflatoxin contamination are also discussed. 
PMID:22069748

  1. Efficient protein structure search using indexing methods

    PubMed Central

    2013-01-01

    Understanding the functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, so finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the search process. However, computing the similarity of two protein structures is still computationally expensive, so it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points within distance θ of the query point. The results show that both iDistance and iKernel significantly enhance the search speed. In top-k nearest neighbor search, the search time is reduced by 69.6%, 77%, 77.4%, and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the search time is reduced by 80%, 81%, 95.6%, and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively. PMID:23691543
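The reduced-index idea above (coarse top-10×k retrieval on the first few attributes, then exact re-ranking) can be sketched as follows; the dimensionality and the choice of m are illustrative, not the paper's settings.

```python
import numpy as np

def topk_with_reduced_index(db, q, k, m=8):
    """Two-stage search: first find the top-10*k candidates using only the
    first m attributes of each descriptor (a cheap proxy distance), then
    re-rank those candidates with the full vectors."""
    proxy = np.linalg.norm(db[:, :m] - q[:m], axis=1)
    cand = np.argsort(proxy)[: 10 * k]                      # coarse stage
    exact = np.linalg.norm(db[cand] - q, axis=1)            # refine stage
    return cand[np.argsort(exact)[:k]]

rng = np.random.default_rng(3)
db = rng.normal(size=(1000, 121))   # e.g. 121-D vectors such as a 3DZD
q = rng.normal(size=121)
approx = topk_with_reduced_index(db, q, k=5)
exact = np.argsort(np.linalg.norm(db - q, axis=1))[:5]
# The coarse stage is a heuristic: its overlap with the exact answer depends
# on how informative the first m attributes are, so on random data we only
# assert that the output is well formed.
assert approx.shape == (5,) and len(set(approx.tolist())) == 5
```

For 3DZDs the leading attributes carry most of the shape energy, which is why the proxy stage works much better there than on random vectors.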

  2. Efficient protein structure search using indexing methods.

    PubMed

    Kim, Sungchul; Sael, Lee; Yu, Hwanjo

    2013-01-01

    Understanding the functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, so finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the search process. However, computing the similarity of two protein structures is still computationally expensive, so it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points within distance θ of the query point. The results show that both iDistance and iKernel significantly enhance the search speed. In top-k nearest neighbor search, the search time is reduced by 69.6%, 77%, 77.4%, and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the search time is reduced by 80%, 81%, 95.6%, and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively.

  3. Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.

    2006-01-01

    In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the subtriangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost.
Second, we present techniques that permit accurate computation of the tangential components of the gradient; i.e., tangent to the plane containing the source element.

  4. Comparison of GATE/GEANT4 with EGSnrc and MCNP for electron dose calculations at energies between 15 keV and 20 MeV.

    PubMed

    Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V

    2011-02-07

    The GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.
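Agreement in such code intercomparisons is typically quoted as a percentage of the maximum dose, so that differences in the low-dose tails are de-emphasized. A minimal sketch of that metric follows, with made-up dose profiles standing in for the actual GATE/EGSnrc kernels.

```python
import numpy as np

def max_dose_discrepancy(dose_a, dose_b):
    """Largest absolute difference between two dose kernels, expressed as a
    percentage of the maximum dose of the reference kernel."""
    dose_a, dose_b = np.asarray(dose_a, float), np.asarray(dose_b, float)
    return 100.0 * np.abs(dose_a - dose_b).max() / dose_a.max()

# Illustrative (not measured) radial dose profiles from two hypothetical codes.
r = np.linspace(0.05, 1.0, 50)
kernel_a = np.exp(-3 * r) / r**2
kernel_b = kernel_a * (1 + 0.02 * np.sin(8 * r))   # ~2% ripple disagreement
assert max_dose_discrepancy(kernel_a, kernel_b) < 3.0
```

Relative-to-local-dose metrics would weight the tails much more heavily, which is why the normalization convention always needs to be stated.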

  5. Critical environmental and genotypic factors for Fusarium verticillioides infection, fungal growth and fumonisin contamination in maize grown in northwestern Spain.

    PubMed

    Cao, Ana; Santiago, Rogelio; Ramos, Antonio J; Souto, Xosé C; Aguín, Olga; Malvar, Rosa Ana; Butrón, Ana

    2014-05-02

    In northwestern Spain, where the weather is rainy and mild throughout the year, Fusarium verticillioides is the most prevalent fungus in kernels and a significant risk of fumonisin contamination has been identified. In this study, detailed information about the environmental and maize genotypic factors affecting F. verticillioides infection, fungal growth and fumonisin content in maize kernels was obtained in order to establish control points to reduce fumonisin contamination. Evaluations were conducted in a total of 36 environments and factorial regression analyses were performed to determine the contribution of each factor to variability among environments, genotypes, and genotype × environment interactions for F. verticillioides infection, fungal growth and fumonisin content. Flowering and kernel drying were the most critical periods throughout the growing season for F. verticillioides infection and fumonisin contamination. Around flowering, wetter and cooler conditions limited F. verticillioides infection and growth, and high temperatures increased fumonisin contents. During kernel drying, an increase in damaged kernels favored fungal growth, and higher ear damage by corn borers and hard rainfall favored fumonisin accumulation. Later planting dates and especially earlier harvest dates reduced the risk of fumonisin contamination, possibly due to reduced incidence of insects and accumulation of rainfall during the kernel drying period. The use of maize varieties resistant to Sitotroga cerealella, with good husk coverage and non-excessive pericarp thickness, could also be useful to reduce fumonisin contamination of maize kernels. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). The corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess the degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate the particle size distribution and digestibility of kernels cut to varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut into 2, 4, 8, 16, 32, or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities, were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance at all time points, with a maximum DM disappearance of 6.9% at 24 h, and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in two and then dried at 60°C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure.
After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between proportion of kernel fraction passing through the 4.75-mm screen and kernel fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
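One common way to compute GMPS from dry-sieving data is the log-normal (ASABE S319-style) formulation: each sieve's nominal particle size is the geometric mean of its opening and the next larger opening, and GMPS is the mass-weighted geometric mean of those sizes. The sketch below assumes that convention, uses the sieve apertures listed in the record, and invents the retained masses for illustration.

```python
import numpy as np

def geometric_mean_particle_size(openings_mm, mass_g):
    """GMPS from dry-sieving data, log-normal (ASABE S319-style) form.
    openings_mm: sieve openings, largest to smallest; mass_g: mass retained
    on each sieve. The opening above the top sieve is assumed to follow a
    sqrt(2) sieve series (an assumption, not from the study)."""
    openings = np.asarray(openings_mm, float)
    mass = np.asarray(mass_g, float)
    # Nominal size retained on sieve i: sqrt(opening_i * opening_above_i).
    upper = np.r_[openings[0] * 1.41, openings[:-1]]
    d_mid = np.sqrt(openings * upper)
    return float(np.exp((mass * np.log(d_mid)).sum() / mass.sum()))

openings = [9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59]   # mm, from the study
mass = [0.0, 1.2, 8.5, 14.3, 10.1, 6.2, 3.0, 1.1]             # g retained (invented)
gmps = geometric_mean_particle_size(openings, mass)
assert 0.59 < gmps < 9.50
```

The surface-area estimate discussed in the record follows from the same sieve data under an assumed particle shape factor.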

  7. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
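The core KDE-tally idea (one event scores at many tally points through a kernel) can be sketched in one dimension; the Epanechnikov kernel and bandwidth below are illustrative choices, not those of the paper.

```python
import numpy as np

def kde_tally(event_positions, event_weights, tally_points, bandwidth=0.5):
    """KDE-style Monte Carlo tally: every collision event spreads its score
    over nearby tally points through a kernel, instead of landing in a
    single histogram bin. An Epanechnikov kernel is used here."""
    u = (tally_points[:, None] - event_positions[None, :]) / bandwidth
    k = np.where(np.abs(u) < 1, 0.75 * (1 - u**2), 0.0) / bandwidth
    return (k * event_weights[None, :]).mean(axis=1)

rng = np.random.default_rng(4)
events = rng.normal(0.0, 1.0, size=5000)          # collision sites (1-D toy)
weights = np.ones_like(events)
x = np.linspace(-3, 3, 61)
flux = kde_tally(events, weights, x)
# The KDE estimate should roughly recover the underlying normal density.
ref = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
assert np.abs(flux - ref).max() < 0.08
```

A mean-free-path (MFP) KDE replaces the spatial distance in u with optical distance, which is what lets it respect material boundaries.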

  8. Data-based diffraction kernels for surface waves from convolution and correlation processes through active seismic interferometry

    NASA Astrophysics Data System (ADS)

    Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc

    2018-05-01

    We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.
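The correlation side of the convolution/correlation processing above can be illustrated with a one-dimensional toy medium: stacking cross-correlations of random-source recordings at two receivers produces a peak at the inter-receiver travel time. Everything below (geometry, wave speed, source model) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, c = 100.0, 2.0                       # sampling rate (Hz), wave speed (m/s)
xA, xB = 0.0, 6.0                        # receiver positions (m)
n = 1024
stack = np.zeros(2 * n - 1)
for _ in range(100):
    src = rng.uniform(-200.0, -20.0)     # sources on one side of both receivers
    s = rng.normal(size=n)               # random source signature
    dA = int(round(abs(src - xA) / c * fs))   # travel times, in samples
    dB = int(round(abs(src - xB) / c * fs))
    recA, recB = np.roll(s, dA), np.roll(s, dB)   # circular delays (toy model)
    stack += np.correlate(recB, recA, mode="full")
lag = (np.argmax(np.abs(stack)) - (n - 1)) / fs
assert abs(lag - (xB - xA) / c) < 0.05   # peak at the 3 s inter-receiver time
```

With a dense 2-D grid of sources, plotting the stacked correlation amplitude against source position yields the phase-oscillation diffraction-kernel pattern the study extracts from field data.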

  9. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low-resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize the profiles of edges in an LR image, which leads to a poor estimate of the PSF of the lens that captured the LR image. To estimate the PSF precisely, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating the PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
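The 1-D step of such a method can be sketched synthetically: the intensity profile taken perpendicular to a blurred straight line is (up to normalization) the 1-D projection of the 2-D PSF. The image, line, and Gaussian PSF below are synthetic stand-ins, not the paper's pipeline.

```python
import numpy as np

sigma, half = 2.0, 10
x = np.arange(-half, half + 1)
g = np.exp(-x**2 / (2 * sigma**2))
g /= g.sum()                              # true 1-D Gaussian kernel
img = np.zeros((64, 64))
img[:, 32] = 1.0                          # ideal vertical line
# Blur each row with the 1-D kernel (a vertical line only blurs horizontally).
blurred = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
profile = blurred.mean(axis=0)            # averaging rows suppresses noise
kernel_est = profile[32 - half: 32 + half + 1]
kernel_est /= kernel_est.sum()
assert np.abs(kernel_est - g).max() < 1e-6
```

On real images the line position and orientation would come from Canny plus the Hough transform, and profiles from many lines would be combined before the 2-D fit.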

  10. Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions

    NASA Astrophysics Data System (ADS)

    Factor, Samuel M.; Kraus, Adam L.

    2017-06-01

    Direct detection of close-in companions (exoplanets or binary systems) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast near λ/D. Non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, though the mask discards ~95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM to a diffraction limited image utilizing the full aperture. Instead of non-redundant closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I have developed a new, easy to use, faint companion detection pipeline which analyzes kernel-phases utilizing Bayesian model comparison. I demonstrate this pipeline on archival images from HST/NICMOS, searching for new companions in order to constrain binary formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required by NRM. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time. As no mask is needed, this technique can easily be applied to archival data and even target acquisition images (e.g. from JWST), making the detection of close-in companions cheap and simple, as no additional observations are needed.
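    The core linear algebra behind kernel phases — observables that lie in the left null space of the pupil-to-baseline phase-transfer matrix, and are therefore immune to pupil-plane phase aberrations — can be checked on a toy 1-D pupil. The matrix construction below is a generic sketch for illustration, not the pipeline described in the abstract:

    ```python
    import numpy as np

    # Toy redundant "interferometer": a 1-D pupil of 4 sub-apertures.
    # Each baseline (i, j) samples phase(j) - phase(i); A maps pupil-plane
    # phases to baseline (Fourier) phases.
    n = 4
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    A = np.zeros((len(pairs), n))
    for r, (i, j) in enumerate(pairs):
        A[r, i], A[r, j] = -1.0, 1.0

    # Kernel matrix K spans the left null space of A: rows k with k @ A = 0.
    u, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))          # rank is n - 1 (global piston drops out)
    K = u[:, rank:].T                       # (n_baselines - rank) kernel rows

    # Pupil-plane phase errors are annihilated by K, so K @ (measured phases)
    # depends only on the target, not on the aberrations.
    pupil_errors = np.random.default_rng(0).normal(size=n)
    print(np.allclose(K @ (A @ pupil_errors), 0.0))   # True
    ```

    In the real technique A is built from the full-aperture sampling grid and the measured Fourier phases replace `A @ pupil_errors`, but the null-space construction is the same.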

  11. Peroxisome proliferator-activated receptor gamma overexpression suppresses proliferation of human colon cancer cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsukahara, Tamotsu, E-mail: ttamotsu@shinshu-u.ac.jp; Haniu, Hisao

    2012-08-03

    Highlights: • We examined the correlation between PPARγ expression and cell proliferation. • PPARγ overexpression reduces cell viability. • We show the synergistic effect of cell growth inhibition by a PPARγ agonist. -- Abstract: Peroxisome proliferator-activated receptor gamma (PPARγ) plays an important role in the differentiation of intestinal cells and tissues. Our previous reports indicate that PPARγ is expressed at considerable levels in human colon cancer cells. This suggests that PPARγ expression may be an important factor for cell growth regulation in colon cancer. In this study, we investigated PPARγ expression in 4 human colon cancer cell lines, HT-29, LOVO, DLD-1, and Caco-2. Real-time polymerase chain reaction (PCR) and Western blot analysis revealed that the relative levels of PPARγ mRNA and protein in these cells were in the order HT-29 > LOVO > Caco-2 > DLD-1. We also found that PPARγ overexpression promoted cell growth inhibition in the lower-expressing cell lines (Caco-2 and DLD-1), but not in the higher-expressing cells (HT-29 and LOVO). We observed a correlation between the level of PPARγ expression and the cells' sensitivity with respect to proliferation.

  12. TU-D-207B-01: A Prediction Model for Distinguishing Radiation Necrosis From Tumor Progression After Gamma Knife Radiosurgery Based On Radiomics Features From MR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z; MD Anderson Cancer Center, Houston, TX; Ho, A

    Purpose: To develop and validate a prediction model using radiomics features extracted from MR images to distinguish radiation necrosis from tumor progression for brain metastases treated with Gamma knife radiosurgery. Methods: The images used to develop the model were T1 post-contrast MR scans from 71 patients who had had pathologic confirmation of necrosis or progression; 1 lesion was identified per patient (17 necrosis and 54 progression). Radiomics features were extracted from 2 images at 2 time points per patient, both obtained prior to resection. Each lesion was manually contoured on each image, and 282 radiomics features were calculated for each lesion. The correlation for each radiomics feature between two time points was calculated within each group to identify a subset of features with distinct values between two groups. The delta of this subset of radiomics features, characterizing changes from the earlier time to the later one, was included as a covariate to build a prediction model using support vector machines with a cubic polynomial kernel function. The model was evaluated with a 10-fold cross-validation. Results: Forty radiomics features were selected based on consistent correlation values of approximately 0 for the necrosis group and >0.2 for the progression group. In performing the 10-fold cross-validation, we narrowed this number down to 11 delta radiomics features for the model. This 11-delta-feature model showed an overall prediction accuracy of 83.1%, with a true positive rate of 58.8% in predicting necrosis and 90.7% for predicting tumor progression. The area under the curve for the prediction model was 0.79. Conclusion: These delta radiomics features extracted from MR scans showed potential for distinguishing radiation necrosis from tumor progression. This tool may be a useful, noninvasive means of determining the status of an enlarging lesion after radiosurgery, aiding decision-making regarding surgical resection versus conservative medical management.
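    The time-point correlation criterion used for feature selection can be sketched with numpy on mock feature matrices. All numbers below are invented for illustration; only the selection rule (|r| ≈ 0 in one group, r > 0.2 in the other) mirrors the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def per_feature_correlation(t1, t2):
        """Pearson correlation of each feature between two time points,
        computed across the lesions of one group.
        t1, t2: (n_lesions, n_features) arrays."""
        t1c = t1 - t1.mean(axis=0)
        t2c = t2 - t2.mean(axis=0)
        num = (t1c * t2c).sum(axis=0)
        den = np.sqrt((t1c ** 2).sum(axis=0) * (t2c ** 2).sum(axis=0))
        return num / den

    # Mock data: in the "progression" group features persist between scans
    # (correlated); in the "necrosis" group they do not.
    n_feat = 6
    prog_t1 = rng.normal(size=(54, n_feat))
    prog_t2 = prog_t1 + 0.3 * rng.normal(size=(54, n_feat))   # correlated
    necr_t1 = rng.normal(size=(17, n_feat))
    necr_t2 = rng.normal(size=(17, n_feat))                    # uncorrelated

    r_prog = per_feature_correlation(prog_t1, prog_t2)
    r_necr = per_feature_correlation(necr_t1, necr_t2)

    # Keep features that behave differently in the two groups.
    selected = np.where((np.abs(r_necr) < 0.3) & (r_prog > 0.2))[0]
    print(selected)
    ```

    The deltas (later-scan minus earlier-scan values) of the selected features would then feed the SVM classifier.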

  13. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
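    Quantile regression replaces squared error with the check (pinball) loss. A minimal numpy sketch, independent of the RKHS machinery in the paper, shows that minimising this loss recovers the sample quantile:

    ```python
    import numpy as np

    def pinball_loss(y, f, tau):
        """Check (pinball) loss used in quantile regression: tau-weighted
        absolute deviations. Minimising its mean over a constant fit
        yields the tau-th sample quantile."""
        r = y - f
        return np.where(r >= 0, tau * r, (tau - 1) * r)

    # Empirical check: the constant minimising the mean pinball loss is
    # (close to) the sample quantile.
    rng = np.random.default_rng(1)
    y = rng.normal(size=2001)
    grid = np.linspace(-3.0, 3.0, 601)
    losses = [pinball_loss(y, c, 0.9).mean() for c in grid]
    best = grid[int(np.argmin(losses))]
    print(abs(best - np.quantile(y, 0.9)) < 0.05)   # True
    ```

    In the paper's setting the constant `f` is replaced by a kernel expansion whose coefficients are thresholded by the data sparsity constraint.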

  14. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains of up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
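    The Wiener-style construction can be sketched in a few lines of numpy: divide the target PSF by the source PSF in Fourier space, with a regularisation floor to suppress noise amplification where the source transfer function vanishes. The scalar floor below is a simplification of the tunable regularisation mentioned in the abstract; pypher's actual implementation is not reproduced here:

    ```python
    import numpy as np

    def psf_matching_kernel(psf_source, psf_target, reg=1e-4):
        """Homogenisation kernel via regularised Fourier division:
        K = F^-1[ F(target) * conj(F(source)) / (|F(source)|^2 + reg) ].
        A minimal sketch of the idea, not the pypher implementation."""
        fs = np.fft.fft2(np.fft.ifftshift(psf_source))
        ft = np.fft.fft2(np.fft.ifftshift(psf_target))
        fk = ft * np.conj(fs) / (np.abs(fs) ** 2 + reg)
        return np.fft.fftshift(np.fft.ifft2(fk).real)

    def conv2_fft(a, b):
        """Circular 2-D convolution of two centred arrays."""
        return np.fft.fftshift(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(a)) *
                                            np.fft.fft2(np.fft.ifftshift(b))).real)

    # Gaussian PSFs: narrow source, broad target.
    n = 64
    yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

    def gauss(sig):
        g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sig ** 2))
        return g / g.sum()

    k = psf_matching_kernel(gauss(1.5), gauss(3.0))
    # Convolving the source PSF with the kernel should reproduce the target.
    err = np.max(np.abs(conv2_fft(gauss(1.5), k) - gauss(3.0)))
    print(err < 1e-3)   # True
    ```

    For isotropic Gaussians this reduces to the classical kernel-matching result; the point of the full method is that the same division works for arbitrary anisotropic PSFs.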

  15. A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram.

    PubMed

    Wu, Chung Kit; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei

    2016-05-09

    Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents. These traffic accidents cost $500 billion. Drunk drivers are found in 40% of traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. The electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a human's biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, there appears to be no known research or literature on an ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. As such, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of the ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.
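    The idea of weighting feature vectors inside a kernel can be sketched as a per-feature weighted RBF kernel, where a larger weight makes a feature count more in the distance. The feature values and weights below are invented for illustration; the paper's 10 ECG features are not reproduced:

    ```python
    import numpy as np

    def weighted_rbf(x, y, w, gamma=1.0):
        """RBF kernel with per-feature weights:
        k(x, y) = exp(-gamma * sum_i w_i (x_i - y_i)^2).
        Weights and gamma here are illustrative assumptions."""
        d2 = np.sum(w * (np.asarray(x) - np.asarray(y)) ** 2)
        return np.exp(-gamma * d2)

    x = np.array([0.8, 1.2, 0.5])
    y = np.array([0.9, 1.0, 0.5])
    w = np.array([3.0, 1.0, 0.2])      # feature 1 weighted most heavily

    print(weighted_rbf(x, x, w))                              # 1.0 at zero distance
    print(weighted_rbf(x, y, w) == weighted_rbf(y, x, w))     # True (symmetric)
    ```

    Since the weighting is just a rescaling of the input space, the weighted kernel remains positive definite and can be dropped into any standard SVM solver.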

  16. BSD Portals for LINUX 2.0

    NASA Technical Reports Server (NTRS)

    McNab, A. David; woo, Alex (Technical Monitor)

    1999-01-01

    Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open() requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open(2) is performed. The resulting file descriptor is passed back to the kernel, which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open(2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets. This paper describes the implementation of portals for LINUX 2.0.

  17. Accurate interatomic force fields via machine learning with covariant kernels

    NASA Astrophysics Data System (ADS)

    Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro

    2017-06-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
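    The finite-group variant mentioned at the end — summing over a point group instead of integrating over SO(d) — can be checked numerically in 2-D: the resulting matrix-valued kernel satisfies K(Sx, Sy) = S K(x, y) Sᵀ for any symmetry S of the group, which is exactly the covariance property that makes predicted forces rotate with the configuration. The Gaussian base kernel and length scale below are assumptions for illustration:

    ```python
    import numpy as np

    def rot(theta):
        """2-D rotation matrix."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def covariant_kernel(x, y, group, sigma=1.0):
        """Matrix-valued kernel K(x,y) = sum_R R * exp(-|Rx - y|^2 / 2s^2)
        summed over a finite point group (2-D toy version of the
        restricted-summation construction)."""
        K = np.zeros((2, 2))
        for R in group:
            K += R * np.exp(-np.sum((R @ x - y) ** 2) / (2 * sigma ** 2))
        return K

    C4 = [rot(k * np.pi / 2) for k in range(4)]   # cyclic group of order 4
    x = np.array([0.7, 0.2])
    y = np.array([-0.3, 0.5])
    S = rot(np.pi / 2)                             # a symmetry of the group

    # Covariance: K(Sx, Sy) = S K(x, y) S^T.
    print(np.allclose(covariant_kernel(S @ x, S @ y, C4),
                      S @ covariant_kernel(x, y, C4) @ S.T))   # True
    ```

    The check works because conjugating the summation variable by S merely permutes the group elements, so the sum is unchanged up to the outer S ... Sᵀ factors.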

  18. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
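    Technique (iii), recursive least squares without matrix inversion, can be sketched for the LSTD system Aθ = b using the Sherman-Morrison identity: each sample adds a rank-one term to A, and the inverse is updated directly. This is a generic, non-kernelized sketch under invented feature vectors; the paper's sparsified kernel version is not reproduced:

    ```python
    import numpy as np

    def rlstd_update(Pinv, b, phi, phi_next, reward, gamma=0.95):
        """One recursive LSTD step. A is accumulated implicitly as
        A += phi (phi - gamma*phi_next)^T; the Sherman-Morrison identity
        updates Pinv = A^-1 directly, avoiding any matrix inversion."""
        v = phi - gamma * phi_next
        Pu = Pinv @ phi                    # A^-1 u
        vP = v @ Pinv                      # v^T A^-1
        Pinv = Pinv - np.outer(Pu, vP) / (1.0 + vP @ phi)
        return Pinv, b + reward * phi

    rng = np.random.default_rng(2)
    d, gamma = 5, 0.95
    Pinv, b = np.eye(d), np.zeros(d)       # start from A = I (ridge-like term)
    A = np.eye(d)                          # explicit A, kept only to verify

    for _ in range(200):
        phi, phi_next = rng.normal(size=d), rng.normal(size=d)
        A += np.outer(phi, phi - gamma * phi_next)
        Pinv, b = rlstd_update(Pinv, b, phi, phi_next, rng.normal())

    # The recursively maintained inverse matches a direct solve of A x = b.
    print(np.max(np.abs(Pinv @ b - np.linalg.solve(A, b))))
    ```

    Each update costs O(d²) instead of the O(d³) of a fresh solve, which is what makes the online variants practical.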

  19. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.

  20. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  1. Norm overlap between many-body states: Uncorrelated overlap between arbitrary Bogoliubov product states

    NASA Astrophysics Data System (ADS)

    Bally, B.; Duguet, T.

    2018-02-01

    Background: State-of-the-art multi-reference energy density functional calculations require the computation of norm overlaps between different Bogoliubov quasiparticle many-body states. It is only recently that the efficient and unambiguous calculation of such norm kernels has become available in the form of Pfaffians [L. M. Robledo, Phys. Rev. C 79, 021302 (2009), 10.1103/PhysRevC.79.021302]. Recently developed particle-number-restored Bogoliubov coupled-cluster (PNR-BCC) and particle-number-restored Bogoliubov many-body perturbation (PNR-BMBPT) ab initio theories [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] make use of generalized norm kernels incorporating explicit many-body correlations. In PNR-BCC and PNR-BMBPT, the Bogoliubov states involved in the norm kernels differ specifically via a global gauge rotation. Purpose: The goal of this work is threefold. We wish (i) to propose and implement an alternative to the Pfaffian method to compute unambiguously the norm overlap between arbitrary Bogoliubov quasiparticle states, (ii) to extend the first point to explicitly correlated norm kernels, and (iii) to scrutinize the analytical content of the correlated norm kernels employed in PNR-BMBPT. Point (i) constitutes the purpose of the present paper, while points (ii) and (iii) are addressed in a forthcoming paper. Methods: We generalize the method used in another work [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] in such a way that it is applicable to kernels involving arbitrary pairs of Bogoliubov states. The formalism is presently explicated in detail in the case of the uncorrelated overlap between arbitrary Bogoliubov states. The power of the method is numerically illustrated and benchmarked against known results on the basis of toy models of increasing complexity. Results: The norm overlap between arbitrary Bogoliubov product states is obtained as a closed-form expression allowing its computation without any phase ambiguity. The formula is physically intuitive, accurate, and versatile. It applies equally to norm overlaps between Bogoliubov states of even or odd number parity. Numerical applications illustrate these features and provide a transparent representation of the content of the norm overlaps. Conclusions: The complex norm overlap between arbitrary Bogoliubov states is computed, without any phase ambiguity, via elementary linear algebra operations. The method can be used in any configuration mixing of orthogonal and non-orthogonal product states. Furthermore, the closed-form expression extends naturally to correlated overlaps at play in PNR-BCC and PNR-BMBPT. As such, the straight overlap between Bogoliubov states is the zero-order reduction of more involved norm kernels to be studied in a forthcoming paper.

  2. On Pfaffian Random Point Fields

    NASA Astrophysics Data System (ADS)

    Kargin, V.

    2014-02-01

    We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.

  3. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    PubMed

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
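    The density-aggregation idea — a kernel whose width adapts to local point density, so dense regions stay sharp while sparse regions stay smooth — can be sketched with a pilot-density KDE in 1-D. This is a generic adaptive KDE for illustration, not the paper's "super kernel density estimation":

    ```python
    import numpy as np

    def adaptive_kde(points, grid, base_h=0.5):
        """Gaussian KDE with a per-point (adaptive) bandwidth: points in
        dense regions get a narrower kernel, sparse ones a wider one."""
        points = np.asarray(points, float)
        # Pilot (fixed-bandwidth) density at each point -> local bandwidth.
        pilot = np.exp(-(points[:, None] - points[None, :]) ** 2 /
                       (2 * base_h ** 2)).mean(axis=1)
        h = base_h * np.sqrt(pilot.mean() / pilot)
        z = (grid[:, None] - points[None, :]) / h[None, :]
        return (np.exp(-0.5 * z ** 2) / (h * np.sqrt(2 * np.pi))).mean(axis=1)

    rng = np.random.default_rng(3)
    pts = np.concatenate([rng.normal(0.0, 0.3, 200),    # dense cluster
                          rng.normal(4.0, 1.0, 50)])    # sparse cluster
    grid = np.linspace(-3.0, 8.0, 221)
    dens = adaptive_kde(pts, grid)

    step = grid[1] - grid[0]
    print(abs(dens.sum() * step - 1.0) < 0.05)   # True: integrates to ~1
    ```

    In a streaming setting the same estimate would be recomputed (or incrementally updated) per frame, with the morphing step interpolating between consecutive density fields.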

  4. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy is considered highly desirable. Different methods have been developed worldwide, like particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.

  5. Development of FullWave : Hot Plasma RF Simulation Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei

    2017-10-01

    A full-wave simulation tool modeling RF fields in hot inhomogeneous magnetized plasma is being developed. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space without limiting approximations by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of the finite differences for approximation of derivatives on an adaptive cloud of computational points; a model and results of nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of hot plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.

  6. Development of full wave code for modeling RF fields in hot non-uniform plasmas

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.

  7. Voronoi cell patterns: Theoretical model and applications

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Einstein, T. L.

    2011-11-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
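    The 1-D case described above is easy to reproduce numerically: each point's Voronoi cell is bounded by the midpoints to its two neighbours, so the cell-size distribution follows directly from the sorted point positions. A small numpy sketch for a homogeneous point set:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    pts = np.sort(rng.uniform(0.0, 1.0, 10000))

    # 1-D Voronoi cell of each interior point: bounded by the midpoints to
    # its two neighbours, so its size is half the distance between them.
    mids = 0.5 * (pts[1:] + pts[:-1])
    sizes = np.diff(mids)                 # cells of the interior points

    # For a homogeneous (Poisson-like) point set, the mean cell size is the
    # inverse point density.
    print(abs(sizes.mean() - 1.0 / len(pts)) < 1e-3)   # True
    ```

    The histogram of `sizes` (rescaled by the mean) is the 1-D cell-size distribution that the fragmentation model is built to describe.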

  8. Voronoi Cell Patterns: theoretical model and application to submonolayer growth

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Einstein, T. L.

    2012-02-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.

  9. Classification of Microarray Data Using Kernel Fuzzy Inference System

    PubMed Central

    Kumar Rath, Santanu

    2014-01-01

    The DNA microarray classification technique has gained popularity in both research and practice. In real data analysis, such as microarray data, the dataset contains a huge number of insignificant and irrelevant features that tend to obscure useful information. Features with high relevance and high significance to the classes are generally preferred as the selected features, which determine the classification of samples into their respective classes. In this paper, the kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia) using the t-test as a feature selection method. Kernel functions are used to map original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ through a mathematical process called the kernel trick. This paper also presents a comparative study for classification using K-FIS along with a support vector machine (SVM) for different sets of features (genes). Performance parameters available in the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are considered to analyze the efficiency of the classification model. From the proposed approach, it is apparent that the K-FIS model obtains results similar to those of the SVM model, an indication that the proposed approach relies on the kernel function. PMID:27433543

  10. Comparative investigation of vibration and current monitoring for prediction of mechanical and electrical faults in induction motor based on multiclass-support vector machine algorithms

    NASA Astrophysics Data System (ADS)

    Gangsar, Purushottam; Tiwari, Rajiv

    2017-09-01

    This paper presents an investigation of vibration and current monitoring for effective fault prediction in induction motors (IM) by using multiclass support vector machine (MSVM) algorithms. Failures of an IM may occur due to the propagation of a mechanical or electrical fault. Hence, for timely detection of these faults, the vibration as well as current signals were acquired after multiple experiments with varying speeds and external torques from an experimental test rig. Here, a total of ten different fault conditions frequently encountered in IMs (four mechanical fault conditions, five electrical fault conditions and one no-defect condition) have been considered. In the case of the stator winding fault, and the phase unbalance and single phasing fault, different levels of severity were also considered for the prediction. In this study, identification of the mechanical and electrical faults has been performed, individually and collectively. Fault predictions have been performed using the vibration signal alone, the current signal alone and the vibration-current signal concurrently. The one-versus-one MSVM has been trained at various operating conditions of the IM using the radial basis function (RBF) kernel and tested for the same conditions, which gives the result in the form of percentage fault prediction. The prediction performance is investigated for a wide range of the RBF kernel parameter, i.e. gamma, and the best result for one optimal value of gamma is selected for each case. Fault predictions have been performed and investigated for a wide range of operational speeds of the IM as well as external torques on the IM.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz-Cruz, J. L.

    We discuss several aspects of the flavor problem in supersymmetry. First, in order to quantify the SUSY flavor problem, we generate randomly the entries of the sfermion mass matrices and determine which percentage of the points is consistent with current bounds on flavor-violating transitions, for which we take as an illustration the lepton-flavor-violating (LFV) decays li → lj γ. In the first instance we apply the mass-insertion method, and study how this percentage changes as one varies the parameters of the model. It is found that for 10^5 points, about 10% of points pass current LFV bounds on μ → eγ provided the slepton masses are ~10 TeV, while bounds on τ → μγ, eγ are satisfied for almost 100% of points even for slepton masses as low as 360 GeV. Then, we consider an ansatz for sfermion masses that can be diagonalized exactly, and compare the results obtained previously for τ → μγ. Now, we get that 100% of points satisfy the experimental bounds but with slepton masses larger than 460 GeV.

  12. Analysis of the cable equation with non-local and non-singular kernel fractional derivative

    NASA Astrophysics Data System (ADS)

    Karaagac, Berat

    2018-02-01

    Recently a new concept of differentiation was introduced in the literature where the kernel was converted from non-local singular to non-local and non-singular. One of the great advantages of this new kernel is its ability to portray fading memory and also well defined memory of the system under investigation. In this paper the cable equation which is used to develop mathematical models of signal decay in submarine or underwater telegraphic cables will be analysed using the Atangana-Baleanu fractional derivative due to the ability of the new fractional derivative to describe non-local fading memory. The existence and uniqueness of the more generalized model is presented in detail via the fixed point theorem. A new numerical scheme is used to solve the new equation. In addition, stability, convergence and numerical simulations are presented.

  13. An orthogonal oriented quadrature hexagonal image pyramid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1987-01-01

    An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consists of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identical when rotated by 60 or 120 deg, and likewise for the odd kernels. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing √7 larger than that of the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.

  14. Methods for compressible fluid simulation on GPUs using high-order finite differences

    NASA Astrophysics Data System (ADS)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.

  15. Sub-arcminute pointing from a balloonborne platform

    NASA Astrophysics Data System (ADS)

    Craig, William W.; McLean, Ryan; Hailey, Charles J.

    1998-07-01

    We describe the design and performance of the pointing and aspect reconstruction system on the Gamma-Ray Arcminute Telescope Imaging System. The payload consists of a 4 m long gamma-ray telescope capable of producing images of the gamma-ray sky at an angular resolution of 2 arcminutes. The telescope is operated at an altitude of 40 km in azimuth/elevation pointing mode. Using a variety of sensors, including attitude GPS, fiber optic gyroscopes, and star and sun trackers, the system is capable of pointing the gamma-ray payload to within an arcminute from the balloon-borne platform. The system is designed for long-term autonomous operation and performed to specification throughout a recent 36-hour flight from Alice Springs, Australia. A star tracker and pattern recognition software developed for the mission permit aspect reconstruction to better than 10 arcseconds. The narrow-field star tracker system is capable of acquiring and identifying a star field without external input. We present flight data from all sensors and the resultant gamma-ray source localizations.

  16. Structural phase transition in deuterated benzil C₁₄D₁₀O₂: Neutron inelastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goossens, D. J.; Welberry, T. R.; Hagen, M. E.

    2006-04-01

    Neutron inelastic scattering has been used to examine the structural phase transition in deuterated benzil C₁₄D₁₀O₂. The transition in benzil, in which the unit cell goes from a trigonal P3₁21 cell above T_C to a doubled P2₁ cell below T_C, leads to the emergence of a Bragg peak at the M-point of the high-temperature Brillouin zone. It had previously been suggested that the softening of a transverse optic phonon at the Γ-point triggers an instability at the M-point, causing the transition to occur. This suggestion has been investigated by measuring the phonon spectrum at the M-point for a range of temperatures above T_C and the phonon dispersion relation along the Γ-M direction just above T_C. It is found that the transverse acoustic phonon at the M-point is of lower energy than the Γ-point optic mode and softens much faster than the Γ-point optic mode as T approaches T_C from above. This behavior is inconsistent with the view that the Γ-point mode is responsible for triggering the phase transition; rather, the structural phase transition in benzil appears to be driven by a conventional soft TA mode at the M-point.

  17. Calculation of electron Dose Point Kernel in water with GEANT4 for medical application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.

    2009-06-03

    The rapid insertion of new technologies in medical physics in recent years, especially in nuclear medicine, has been accompanied by a great development of faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that contains the tools to simulate problems of particle transport through matter. In this work, GEANT4 was used to calculate the dose point kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between calculated DPKs and reported values is satisfactory.

  18. Towards a Holistic Cortical Thickness Descriptor: Heat Kernel-Based Grey Matter Morphology Signatures.

    PubMed

    Wang, Gang; Wang, Yalin

    2017-02-15

    In this paper, we propose a heat kernel based regional shape descriptor that may be capable of better exploiting volumetric morphological information than other available methods, thereby improving statistical power on brain magnetic resonance imaging (MRI) analysis. The mechanism of our analysis is driven by the graph spectrum and the heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral meshes. In order to capture profound brain grey matter shape changes, we first use the volumetric Laplace-Beltrami operator to determine the point pair correspondence between white-grey matter and CSF-grey matter boundary surfaces by computing the streamlines in a tetrahedral mesh. Secondly, we propose multi-scale grey matter morphology signatures to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the grey matter morphology signatures and generate the internal structure features. With the sparse linear discriminant analysis, we select a concise morphology feature set with improved classification accuracies. In our experiments, the proposed work outperformed the cortical thickness features computed by FreeSurfer software in the classification of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, on publicly available data from the Alzheimer's Disease Neuroimaging Initiative. The multi-scale and physics based volumetric structure feature may bring stronger statistical power than some traditional methods for MRI-based grey matter morphology analysis.
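For intuition, here is a minimal sketch (not the paper's implementation) of the heat kernel object the analysis is built on: from the eigendecomposition of a graph Laplacian L = UΛUᵀ, the heat kernel is H_t = U exp(−tΛ) Uᵀ. A 4-node path graph stands in for the tetrahedral mesh.

```python
# Toy heat kernel from a graph spectrum. The Laplacian below is for a
# 4-node path graph, a stand-in for the paper's tetrahedral meshes.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian

lam, U = np.linalg.eigh(L)            # graph spectrum

def heat_kernel(t):
    # H_t = U diag(exp(-t * lambda)) U^T
    return U @ np.diag(np.exp(-t * lam)) @ U.T

H = heat_kernel(0.5)
# Rows behave like diffusion profiles: each row sums to 1 because the
# Laplacian has a zero eigenvalue with a constant eigenvector.
print(H.sum(axis=1))
```

Multi-scale signatures as in the abstract would evaluate H at several values of t and read off the diffusion profiles between corresponding boundary points.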

  19. Praseodymium-142 glass seeds for the brachytherapy of prostate cancer

    NASA Astrophysics Data System (ADS)

    Jung, Jae Won

    A beta-emitting glass seed was proposed for the brachytherapy treatment of prostate cancer. Criteria for seed design were derived and several beta-emitting nuclides were examined for suitability. 142Pr was selected as the isotope of choice. Seeds 0.08 cm in diameter and 0.9 cm long were manufactured for testing. The seeds were activated in the Texas A&M University research reactor. The activity produced was as expected when considering the metastable state and epithermal neutron flux. The MCNP5 Monte Carlo code was used to calculate the quantitative dosimetric parameters suggested in the American Association of Physicists in Medicine (AAPM) TG-43/60. The Monte Carlo calculation results were compared with those from a dose point kernel code, and the dose profiles agree well with each other. The gamma dose of 142Pr was evaluated; it is 0.3 Gy at 1.0 cm with an initial activity of 5.95 mCi and is insignificant to other organs. Measurements were performed to assess the 2-dimensional axial dose distributions using Gafchromic radiochromic film. The radiochromic film was calibrated using an X-ray machine calibrated against a National Institute of Standards and Technology (NIST) traceable ion chamber. A calibration curve was derived using a least squares fit of a second order polynomial. The measured dose distribution agrees well with results from the Monte Carlo simulation. The dose was 130.8 Gy at 6 mm from the seed center with an initial activity of 5.95 mCi. AAPM TG-43/60 parameters were determined. The reference dose rates at 2 mm and 6 mm were 0.67 and 0.02 cGy/s/mCi, respectively. The geometry function, radial dose function and anisotropy function were generated.

  20. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
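A stripped-down sketch of the idea follows. Assumed details not taken from the paper: Gaussian weights and a bandwidth set by the distance to the k-th nearest hard-data point; the paper's kernels are additionally steered anisotropically along the direction of highest local correlation, which is omitted here.

```python
# Locally adaptive kernel classification of facies from scattered hard data.
# Simplified: isotropic Gaussian kernel, bandwidth = distance to k-th
# nearest neighbor. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(40, 2))        # scattered hard-data locations
facies = (pts[:, 0] > 5).astype(int)          # two facies split at x = 5

def classify(q, k=5):
    d = np.linalg.norm(pts - q, axis=1)
    h = np.sort(d)[k]                         # adaptive local bandwidth
    w = np.exp(-0.5 * (d / h) ** 2)           # Gaussian kernel weights
    # kernel-weighted vote over facies indicators
    return int(np.round(np.sum(w * facies) / np.sum(w)))

print(classify(np.array([8.0, 5.0])))         # deep inside facies 1
print(classify(np.array([1.0, 5.0])))         # deep inside facies 0
```

The steering refinement would replace the scalar bandwidth h with a local covariance matrix estimated from nearby data values.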

  1. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
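The Fourier-space convolution at the heart of fASKS can be sketched as follows. A single stationary Gaussian kernel replaces the paper's adaptive, thickness-dependent kernels, and all numbers are synthetic.

```python
# Toy scatter-kernel-superposition step: scatter is estimated as the
# projection convolved with a (here stationary) scatter kernel, with the
# convolution done in Fourier space as in fASKS.
import numpy as np

n = 64
proj = np.zeros((n, n))
proj[24:40, 24:40] = 1.0                       # toy projection of an object

x = np.fft.fftfreq(n) * n                      # grid centered at index 0
gx, gy = np.meshgrid(x, x)
kernel = np.exp(-(gx**2 + gy**2) / (2 * 8.0**2))
kernel /= kernel.sum()                         # normalized scatter kernel

# circular convolution via FFT: scatter = proj * kernel
scatter = np.real(np.fft.ifft2(np.fft.fft2(proj) * np.fft.fft2(kernel)))
primary = proj - 0.3 * scatter                 # subtract scaled scatter estimate
print(scatter.max(), primary.min())
```

The adaptive variants (ASKS/fASKS) would modulate the kernel amplitude and width with local thickness estimated from the projection itself.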

  2. On the solution of integral equations with a generalized cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    In this paper a certain class of singular integral equations that may arise from the mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement and consequently the kernel has strong singularities of the form (t-x)^{-2} and x^{n-2}(t+x)^{-n} (n ≥ 2, 0 < x, t < b). The complex function theory is used to determine the fundamental function of the problem for the general case and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.

  3. Automatically detect and track infrared small targets with kernel Fukunaga-Koontz transform and Kalman prediction.

    PubMed

    Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan

    2007-11-01

    The Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical properties of natural images, so its detection performance is not satisfactory. PCA has been extended into kernel PCA in order to capture higher-order statistics; however, a kernel FKT (KFKT) had not previously been explicitly proposed, nor its detection performance investigated. For accurately detecting potential small targets in infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Experimental results show that KFKT outperforms FKT and that the proposed framework is competent to automatically detect and track infrared point targets.

  5. O-GlcNAc modification of PPARγ reduces its transcriptional activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Suena; Park, Sang Yoon; Roth, Juergen

    2012-01-27

    Highlights: • We found that PPARγ is modified by O-GlcNAc in 3T3-L1 adipocytes. • Thr54 of PPARγ1 is the major O-GlcNAc site. • The transcriptional activity of PPARγ1 was decreased on treatment with the OGA inhibitor. -- Abstract: The peroxisome proliferator-activated receptor γ (PPARγ), a member of the nuclear receptor superfamily, is a key regulator of adipogenesis and is important for the homeostasis of adipose tissue. The β-O-linked N-acetylglucosamine (O-GlcNAc) modification, a posttranslational modification on various nuclear and cytoplasmic proteins, is involved in the regulation of protein function. Here, we report that PPARγ is modified by O-GlcNAc in 3T3-L1 adipocytes. Mass spectrometric analysis and mutant studies revealed that threonine 54 of the N-terminal AF-1 domain of PPARγ is the major O-GlcNAc site. Transcriptional activity of wild-type PPARγ was decreased 30% by treatment with the specific O-GlcNAcase (OGA) inhibitor, but the T54A mutant of PPARγ did not respond to inhibitor treatment. In 3T3-L1 cells, an increase in O-GlcNAc modification by the OGA inhibitor reduced PPARγ transcriptional activity and terminal adipocyte differentiation. Our results suggest that the O-GlcNAc state of PPARγ influences its transcriptional activity and is involved in adipocyte differentiation.

  6. Pixel-based meshfree modelling of skeletal muscles.

    PubMed

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2016-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transitions. A multiphase, multichannel, level-set-based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods to modeling the human lower leg is demonstrated.

  7. Estimating average growth trajectories in shape-space using kernel smoothing.

    PubMed

    Hutton, Tim J; Buxton, Bernard F; Hammond, Peter; Potts, Henry W W

    2003-06-01

    In this paper, we show how a dense surface point distribution model of the human face can be computed and demonstrate the usefulness of the high-dimensional shape-space for expressing the shape changes associated with growth and aging. We show how average growth trajectories for the human face can be computed in the absence of longitudinal data by using kernel smoothing across a population. A training set of three-dimensional surface scans of 199 male and 201 female subjects between 0 and 50 years of age is used to build the model.
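The cross-sectional kernel smoothing idea can be sketched as follows: with one scan per subject and no longitudinal data, the average trajectory at age t is a kernel-weighted mean over subjects of nearby ages. A single synthetic "shape coordinate" stands in for the dense surface model; the kernel and bandwidth are assumptions, not taken from the paper.

```python
# Nadaraya-Watson kernel smoothing of a shape coordinate across ages,
# estimating an average growth trajectory from cross-sectional data.
import numpy as np

rng = np.random.default_rng(2)
ages = rng.uniform(0, 50, 400)                  # one scan per subject
shape = 0.1 * ages + rng.normal(0, 0.5, 400)    # synthetic shape coordinate

def smoothed_trajectory(t, h=3.0):
    w = np.exp(-0.5 * ((ages - t) / h) ** 2)    # Gaussian kernel in age
    return np.sum(w * shape) / np.sum(w)        # kernel-weighted mean

trend = [smoothed_trajectory(t) for t in (10, 25, 40)]
print(trend)    # increases with age, tracking the underlying 0.1 * t
```

In the paper the same weighting is applied coordinate-wise in the high-dimensional shape-space rather than to a single scalar.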

  8. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^{-m}, m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^{-m}, terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.

  9. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1985-01-01

    In this paper some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^{-m}, m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^{-m}, terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.

  10. On the solution of integral equations with strongly singular kernels

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1987-01-01

    Some useful formulas are developed to evaluate integrals having a singularity of the form (t-x)^{-m}, m ≥ 1. Interpreting the integrals with strong singularities in the Hadamard sense, the results are used to obtain approximate solutions of singular integral equations. A mixed boundary value problem from the theory of elasticity is considered as an example. Particularly for integral equations where the kernel contains, in addition to the dominant term (t-x)^{-m}, terms which become unbounded at the end points, the present technique appears to be extremely effective to obtain rapidly converging numerical results.

  11. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.

  12. Providing the Fire Risk Map in Forest Area Using a Geographically Weighted Regression Model with Gaussian Kernel and MODIS Images, a Case Study: Golestan Province

    NASA Astrophysics Data System (ADS)

    Shah-Heydari pour, A.; Pahlavani, P.; Bigdeli, B.

    2017-09-01

    With the industrialization of cities and the marked increase in pollutants and greenhouse gases, the importance of forests as the natural lungs of the earth in removing these pollutants is felt more than ever. Annually, a large part of forests is destroyed due to the lack of timely action during a fire. Knowledge of areas with a high risk of fire, and equipping these areas by constructing access routes and allocating fire-fighting equipment, can help to prevent the destruction of the forest. In this research, the fire risk of the region was forecast and its risk map was produced from MODIS images by applying a geographically weighted regression model with a Gaussian kernel, and ordinary least squares, over the parameters affecting forest fire: distance from residential areas, distance from the river, distance from the road, height, slope, aspect, soil type, land use, average temperature, wind speed, and rainfall. After evaluation, it was found that the geographically weighted regression model with a Gaussian kernel correctly forecast 93.4% of all fire points, whereas the ordinary least squares method could correctly forecast only 66% of the fire points.
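The geographic weighting behind a Gaussian-kernel GWR fit can be sketched as a distance-weighted least-squares solve at each calibration point. The data, bandwidth, and single covariate below are synthetic stand-ins for the eleven fire-risk parameters listed above.

```python
# One local fit of geographically weighted regression (GWR) with a
# Gaussian spatial kernel: beta(point) = (X' W X)^{-1} X' W y, where the
# weights W decay with distance from the calibration point.
import numpy as np

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(200, 2))                 # observation locations
Xf = np.column_stack([np.ones(200), rng.normal(size=200)])  # [intercept, covariate]
beta_true = np.array([2.0, -1.5])
y = Xf @ beta_true + rng.normal(0, 0.1, 200)

def gwr_local_fit(point, bandwidth=30.0):
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian spatial kernel
    W = np.diag(w)
    # weighted least squares at this calibration point
    return np.linalg.solve(Xf.T @ W @ Xf, Xf.T @ W @ y)

print(gwr_local_fit(np.array([50.0, 50.0])))    # close to [2.0, -1.5]
```

A full GWR map repeats this solve at every grid cell, letting the coefficients vary over space; ordinary least squares corresponds to w ≡ 1.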

  13. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
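The grid-alignment step described above (convolution with a unit impulse, implemented in the Fourier domain, which amounts to a circular shift) can be sketched in one dimension; the fractional-shift piecewise Lagrange filter is omitted.

```python
# Shifting a dose kernel to the nearest grid point by convolving with a
# unit impulse in the Fourier domain (equivalent to a circular shift).
import numpy as np

n = 16
kernel = np.zeros(n)
kernel[0], kernel[1] = 1.0, 0.5     # toy 1-D dose kernel, peak at index 0

shift = 5                           # nearest grid point for this "seed"
impulse = np.zeros(n)
impulse[shift] = 1.0                # unit impulse at the target grid point

shifted = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(impulse)))
print(np.argmax(shifted))           # peak moved from index 0 to index 5
```

Because all seeds share one kernel FFT, the per-seed cost is a pointwise multiply and phase shift, which is what keeps the method's runtime independent of seed count.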

  14. GRIS observations of Al-26 gamma-ray line emission from two points in the Galactic plane

    NASA Technical Reports Server (NTRS)

    Teegarden, B. J.; Barthelmy, S. D.; Gehrels, N.; Tueller, J.; Leventhal, M.

    1991-01-01

    Both of the Gamma-Ray Imaging Spectrometer (GRIS) experiment's observations of the Galactic center region, at l = 0 and 335 deg respectively, detected Al-26 gamma-ray line emission. While these observations are consistent with the assumed high-energy gamma-ray distribution, they are consistent with other distributions as well. The data suggest that the Al-26 emission is distributed over Galactic longitude rather than being confined to a point source. The GRIS data also indicate that the 1809 keV line is broadened.

  15. Rediscovering the Kernels of Truth in the Urban Legends of the Freshman Composition Classroom

    ERIC Educational Resources Information Center

    Lovoy, Thomas

    2004-01-01

    English teachers, as well as teachers within other disciplines, often boil down abstract principles to easily explainable bullet points. Students often pick up and retain these points but fail to grasp the broader contexts that make them relevant. It is therefore sometimes helpful to revisit some of the more common of these "rules of thumb" to…

  16. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    NASA Astrophysics Data System (ADS)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

    Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using a tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested using a dataset containing 9 kernels extracted from real photographs by Adobe, for which the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction, and how the number of projections influences the reconstruction error.
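For reference, the basic ART (Kaczmarz) update that serves as the starting algorithm above looks like this on a toy consistent system; the paper's regularization and conjugate-gradient implementation are omitted.

```python
# Kaczmarz/ART iteration: for each measurement equation a_i . x = b_i,
# project the current estimate onto that hyperplane. Toy 3x2 system.
import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])          # "projection" rows
x_true = np.array([0.7, 0.3])
b = A @ x_true                       # consistent measurements

x = np.zeros(2)
for _ in range(50):                  # sweeps over the rows
    for a_i, b_i in zip(A, b):
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i   # Kaczmarz update
print(x)                             # converges to [0.7, 0.3]
```

In the blur-estimation setting, the rows of A encode line integrals of the PSF along the directions given by object-boundary intensity profiles, and x is the vectorized PSF.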

  17. Discrimination of irradiated MOX fuel from UOX fuel by multivariate statistical analysis of simulated activities of gamma-emitting isotopes

    NASA Astrophysics Data System (ADS)

    Åberg Lindell, M.; Andersson, P.; Grape, S.; Hellesen, C.; Håkansson, A.; Thulin, M.

    2018-03-01

    This paper investigates how concentrations of certain fission products and their related gamma-ray emissions can be used to discriminate between uranium oxide (UOX) and mixed oxide (MOX) type fuel. Discrimination of irradiated MOX fuel from irradiated UOX fuel is important in nuclear facilities and for transport of nuclear fuel, for purposes of both criticality safety and nuclear safeguards. Although facility operators keep records on the identity and properties of each fuel, tools for nuclear safeguards inspectors that enable independent verification of the fuel are critical in the recovery of continuity of knowledge, should it be lost. A discrimination methodology for classification of UOX and MOX fuel, based on passive gamma-ray spectroscopy data and multivariate analysis methods, is presented. Nuclear fuels and their gamma-ray emissions were simulated in the Monte Carlo code Serpent, and the resulting data were used as input to train seven different multivariate classification techniques. The trained classifiers were subsequently implemented and evaluated with respect to their capabilities to correctly predict the classes of unknown fuel items. The best discrimination between UOX and MOX fuel was achieved with non-linear classification techniques, such as the k-nearest-neighbors method and the Gaussian kernel support vector machine. For fuel with cooling times up to 20 years, for which gamma rays from the isotope 134Cs can still be efficiently measured, success rates of 100% were obtained. A sensitivity analysis indicated that these methods were also robust.
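    The k-nearest-neighbors step can be illustrated with a minimal stdlib sketch; the two-dimensional "feature" values and class labels below are made up for illustration and are not real fuel signatures.

```python
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical two-feature "activity ratio" training points, not real data.
train_X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
train_y = ["UOX", "UOX", "MOX", "MOX"]
label = knn_predict(train_X, train_y, (0.85, 0.85), k=3)
```

    A real pipeline would of course train on the simulated Serpent emission data and cross-validate the choice of k.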

  18. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution on water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure to characterize complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
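    A full SVR requires a quadratic-programming solver, but the flavor of Mercer-kernel interpolation can be sketched with the closely related Nadaraya-Watson kernel estimator; the sample coordinates, values, and bandwidth below are hypothetical.

```python
import math

def gaussian_kernel(a, b, sigma=1.0):
    """Mercer (RBF) kernel between two spatio-temporal points a and b."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def kernel_interpolate(samples, query, sigma=1.0):
    """Nadaraya-Watson estimate: kernel-weighted average of observed values."""
    weights = [gaussian_kernel(x, query, sigma) for x, _ in samples]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, samples)) / total

# Hypothetical (position_km, time_days) -> dissolved-oxygen samples.
samples = [((0.0, 0.0), 8.0), ((1.0, 0.0), 7.0), ((0.0, 1.0), 6.0)]
est = kernel_interpolate(samples, (0.0, 0.0), sigma=0.5)
```

    The estimate at a query point is dominated by nearby samples, which is exactly the role the autocorrelation-derived kernel plays in the paper's interpolated maps.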

  19. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution on water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure to characterize complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alimirah, Fatouma; Peng, Xinjian; Yuan, Liang

    Heterodimerization and cross-talk between nuclear hormone receptors often occur. For example, estrogen receptor alpha (ERα) physically binds to peroxisome proliferator-activated receptor gamma (PPARγ) and inhibits its transcriptional activity. The interaction between PPARγ and the vitamin D receptor (VDR), however, is unknown. Here, we elucidate the molecular mechanisms linking PPARγ and VDR signaling, and for the first time we show that PPARγ physically associates with VDR in human breast cancer cells. We found that overexpression of PPARγ decreased 1α,25-dihydroxyvitamin D3 (1,25D3)-mediated transcriptional activity of the vitamin D target gene, CYP24A1, by 49% and the activity of VDRE-luc, a vitamin D responsive reporter, by 75% in T47D human breast cancer cells. Deletion mutation experiments illustrated that helices 1 and 4 of PPARγ's hinge and ligand binding domains, respectively, governed this suppressive function. Additionally, abrogation of PPARγ's AF2 domain attenuated its repressive action on 1,25D3 transactivation, indicating that this domain is integral in inhibiting VDR signaling. PPARγ was also found to compete with VDR for their binding partner retinoid X receptor alpha (RXRα). Overexpression of RXRα blocked PPARγ's suppressive effect on 1,25D3 action, enhancing VDR signaling. In conclusion, these observations uncover molecular mechanisms connecting the PPARγ and VDR pathways. -- Highlights: ► PPARγ's role in 1α,25-dihydroxyvitamin D3 transcriptional activity is examined. ► PPARγ physically binds to VDR and inhibits 1α,25-dihydroxyvitamin D3 action. ► PPARγ's hinge and ligand binding domains are important for this inhibitory effect. ► PPARγ competes with VDR for the availability of their binding partner, RXRα.

  1. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to checkpointing, our approach has only negligible computational overhead, exploiting the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean demonstrate the high potential of this approach, with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate in both finite-difference and finite-element wave propagation codes.
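    The re-quantization idea can be illustrated with a minimal uniform scalar quantizer; the snapshot, bit depth, and error bound below are a toy stand-in for the adaptive scheme described in the abstract.

```python
import math

def quantize(field, bits):
    """Map floats to `bits`-bit integer codes over the field's value range."""
    lo, hi = min(field), max(field)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    codes = [round((v - lo) / step) for v in field]
    return codes, lo, step

def dequantize(codes, lo, step):
    return [lo + c * step for c in codes]

# Toy "wavefield snapshot": one smooth trace, stored at 8 bits per sample
# instead of 64, i.e. an 8x compression before any entropy coding.
field = [math.sin(0.01 * i) for i in range(1000)]
codes, lo, step = quantize(field, bits=8)
recon = dequantize(codes, lo, step)
max_err = max(abs(a - b) for a, b in zip(field, recon))
```

    Uniform quantization bounds the reconstruction error by half a quantization step; the paper's much larger factors of 500-1000 come from combining re-quantized residuals with temporal/spatial re-interpolation and shadow-zone skipping.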

  2. Computational techniques in gamma-ray skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model is presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data, and a more accurate buildup approximation. The resulting code, SILOGP, computes the response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute the response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.
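    The single-scatter/point-kernel approach rests on the classic attenuation-with-buildup formula; the sketch below uses a linear buildup approximation with made-up coefficients, not the data used by SILOGP or WALLGP.

```python
import math

def point_kernel_dose(S, mu, r, buildup):
    """Point-kernel estimate: S = source strength, mu = linear attenuation
    coefficient (1/m), r = distance (m), buildup = dimensionless buildup
    factor accounting for scattered photons reaching the detector."""
    return S * buildup * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

def buildup_linear(mfp, a=1.0, b=0.5):
    """Linear (Taylor-style) buildup in mean free paths; coefficients are
    illustrative, not tabulated values for any real material."""
    return a + b * mfp

mu, r, S = 0.5, 2.0, 1.0e6
dose = point_kernel_dose(S, mu, r, buildup_linear(mu * r))
uncollided = point_kernel_dose(S, mu, r, 1.0)
```

    Setting the buildup factor to 1 recovers the uncollided flux; the buildup term is exactly what the single-scatter model applies along the path from the scattering point to the detector.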

  3. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space

    PubMed Central

    Li, Kan; Príncipe, José C.

    2018-01-01

    This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS), using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel impacts neither the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing and neuromorphic implementations based on spiking neural networks (SNN), yielding accurate and ultra-low-power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to an HMM using a Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For the spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes. PMID:29666568
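    The LIF front-end that converts audio into spike trains can be sketched as follows; the time constant, threshold, and constant drive are illustrative, not the paper's actual parameters.

```python
def lif_spike_train(signal, dt=1e-3, tau=0.02, threshold=1.0):
    """Leaky integrate-and-fire: integrate the input, leak with time
    constant tau, and emit a spike time (then reset) whenever the
    membrane potential crosses threshold."""
    v, spikes = 0.0, []
    for i, s in enumerate(signal):
        v += dt * (-v / tau + s)
        if v >= threshold:
            spikes.append(i * dt)
            v = 0.0
    return spikes

# A constant drive produces a regular spike train (illustrative input only;
# the paper feeds in band-filtered audio channels).
signal = [80.0] * 1000
spikes = lif_spike_train(signal)
```

    Each channel of such a generator yields one spike train, and the collection of trains is what the point-process kernel then maps into the RKHS.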

  4. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space.

    PubMed

    Li, Kan; Príncipe, José C

    2018-01-01

    This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS), using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel impacts neither the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing and neuromorphic implementations based on spiking neural networks (SNN), yielding accurate and ultra-low-power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to an HMM using a Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For the spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes.

  5. On- and off-axis spectral emission features from laser-produced gas breakdown plasmas

    NASA Astrophysics Data System (ADS)

    Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; Brumfield, B. E.; Phillips, M. C.; Miloshevsky, G.

    2017-06-01

    Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during their early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with a pulse duration of 6 ns are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density, and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space and time resolved spectroscopy is used for evaluating the emission features and for inferring plasma physical conditions at on- and off-axis positions. The structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using the computational fluid dynamics code. 
    The emission from the kernel showed that spectral features from ions, atoms, and molecules are separated in time, with early-time temperatures and densities in excess of 35 000 K and 4 × 10¹⁸/cm³ and the existence of thermal equilibrium. However, the emission from the off-kernel positions of the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-local thermodynamic equilibrium (non-LTE) conditions. Our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.

  6. On- and off-axis spectral emission features from laser-produced gas breakdown plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.

    Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during their early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with a 6 ns pulse duration are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density, and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space and time resolved spectroscopy is used for evaluating the emission features as well as for inferring fundamental plasma parameters at on- and off-axis positions. The structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using a computational fluid dynamics code.
    The emission from the kernel showed that spectral features from ions, atoms, and molecules are separated in time, with early-time temperatures and densities in excess of 35 000 K and 4×10¹⁸/cm³ and the existence of thermal equilibrium. However, the emission from the off-kernel positions of the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-LTE conditions. Finally, our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.

  7. On- and off-axis spectral emission features from laser-produced gas breakdown plasmas

    DOE PAGES

    Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; ...

    2017-06-01

    Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser created kernels expand into a cold ambient with high velocities during their early lifetime followed by confinement of the plasma kernel and eventually collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made on the spectral features of on- and off-axis emission of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with a 6 ns pulse duration are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density, and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space and time resolved spectroscopy is used for evaluating the emission features as well as for inferring fundamental plasma parameters at on- and off-axis positions. The structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using a computational fluid dynamics code.
    The emission from the kernel showed that spectral features from ions, atoms, and molecules are separated in time, with early-time temperatures and densities in excess of 35 000 K and 4×10¹⁸/cm³ and the existence of thermal equilibrium. However, the emission from the off-kernel positions of the breakdown plasmas showed enhanced ultraviolet radiation with the presence of N2 bands and is represented by non-LTE conditions. Our results also highlight that the ultraviolet radiation emitted during the early time of spark evolution is the predominant source of the photo-excitation of the surrounding medium.

  8. Nineteenth International Cosmic Ray Conference. OG Sessions, Volume 1

    NASA Technical Reports Server (NTRS)

    Jones, F. C. (Compiler)

    1985-01-01

    Contributed papers addressing cosmic ray origin and galactic phenomena are compiled. The topic areas covered in this volume include gamma ray bursts, gamma rays from point sources, and diffuse gamma ray emission.

  9. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    DTIC Science & Technology

    2015-06-01

    …5110P and 16 dx360M4 nodes, each with one NVIDIA Kepler K20M/K40M GPU. Each node contained dual Intel Xeon E5-2670 (Sandy Bridge) central processing… kernel and as such does not employ multiple processors. This work makes use of a single processing core and a single NVIDIA Kepler K40 GK110… bandwidth (2 × 16 slot), 7.877 GFloat/s; Kepler K40 peak, 4,290 billion floating-point operations per second (GFLOPs); and 288 GB/s Kepler K40 memory

  10. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been an increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating-point operations. Due to the complexity and expense of floating-point hardware, these algorithms are usually converted to fixed-point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, and hence implementation of floating-point hardware becomes a feasible option. In this research we have implemented a high-performance, autonomous floating-point vector coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing the vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also initiated the design of a software library for various computational kernels, each of which adapts the FPVC's configuration and provides maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decompositions. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  11. Wavelet-based techniques for the gamma-ray sky

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Samuel D.; Fox, Patrick J.; Cholis, Ilias

    2016-07-01

    Here, we demonstrate how the image analysis technique of wavelet decomposition can be applied to the gamma-ray sky to separate emission on different angular scales. New structures on scales that differ from the scales of the conventional astrophysical foreground and background uncertainties can be robustly extracted, allowing a model-independent characterization with no presumption of exact signal morphology. As a test case, we generate mock gamma-ray data to demonstrate our ability to extract extended signals without assuming a fixed spatial template. For some point source luminosity functions, our technique also allows us to differentiate a diffuse signal in gamma rays from dark matter annihilation and extended gamma-ray point source populations in a data-driven way.
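    A single level of the simplest wavelet decomposition (Haar) shows how a signal separates into coarse and detail scales; this 1-D stdlib sketch is only an analogue of the 2-D sky-map analysis in the paper.

```python
# Single-level 1-D Haar transform: split a signal into a coarse (average)
# band and a detail (difference) band, then invert to check the transform
# is lossless.
def haar_step(signal):
    s = 0.5 ** 0.5  # orthonormal scaling factor
    approx = [s * (a + b) for a, b in zip(signal[0::2], signal[1::2])]
    detail = [s * (a - b) for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    s = 0.5 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

signal = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 2.0]
approx, detail = haar_step(signal)
recon = haar_inverse(approx, detail)
```

    Repeating the step on the approximation band yields a hierarchy of scales, which is the multiscale separation the wavelet sky analysis exploits (with spherical wavelets rather than Haar).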

  12. Motion estimation accuracy for visible-light/gamma-ray imaging fusion for portable portal monitoring

    NASA Astrophysics Data System (ADS)

    Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Gee, Timothy F.

    2010-01-01

    The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Portable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest. We have constructed a prototype rapid-deployment gamma-ray imaging portal monitor that uses machine vision and gamma-ray imaging to monitor multiple lanes of traffic. Vehicles are detected and tracked by using point detection and optical flow methods as implemented in the OpenCV software library. Points are clustered together, but imperfections in the detected points and tracks cause errors in the accuracy of the vehicle position estimates. The resulting errors cause a "blurring" effect in the gamma image of the vehicle. To minimize these errors, we have compared a variety of motion estimation techniques, including an estimate using the median of the clustered points, a "best-track" filtering algorithm, and a constant-velocity motion estimation model. The accuracies of these methods are contrasted by quantifying the root-mean-square differences in the times at which the vehicles cross the gamma-ray image pixel boundaries, relative to a manually verified ground-truth measurement.
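    Of the compared estimators, the constant-velocity model is the easiest to sketch: a least-squares line fit of tracked positions against time. The track below is hypothetical, not OpenCV output.

```python
# Constant-velocity motion estimate: least-squares fit x(t) = x0 + v*t
# to noisy tracked feature positions.
def fit_constant_velocity(times, positions):
    n = float(len(times))
    st, sx = sum(times), sum(positions)
    stt = sum(t * t for t in times)
    stx = sum(t * x for t, x in zip(times, positions))
    v = (n * stx - st * sx) / (n * stt - st * st)  # slope = velocity
    x0 = (sx - v * st) / n                         # intercept = start position
    return x0, v

times = [0.0, 0.1, 0.2, 0.3, 0.4]                 # s
positions = [0.02, 1.01, 1.98, 3.03, 3.99]        # m; ~10 m/s plus tracker noise
x0, v = fit_constant_velocity(times, positions)
```

    The fitted (x0, v) then predicts when the vehicle crosses each gamma-image pixel boundary, which is exactly the quantity compared against ground truth above.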

  13. Optimized formulas for the gravitational field of a tesseroid

    NASA Astrophysics Data System (ADS)

    Grombein, Thomas; Seitz, Kurt; Heck, Bernhard

    2013-07-01

    Various tasks in geodesy, geophysics, and related geosciences require precise information on the impact of mass distributions on gravity field-related quantities, such as the gravitational potential and its partial derivatives. Using forward modeling based on Newton's integral, mass distributions are generally decomposed into regular elementary bodies. In classical approaches, prisms or point mass approximations are mostly utilized. Considering the effect of the sphericity of the Earth, alternative mass modeling methods based on tesseroid bodies (spherical prisms) should be taken into account, particularly in regional and global applications. Expressions for the gravitational field of a point mass are relatively simple when formulated in Cartesian coordinates. In the case of integrating over a tesseroid volume bounded by geocentric spherical coordinates, it will be shown that it is also beneficial to represent the integral kernel in terms of Cartesian coordinates. This considerably simplifies the determination of the tesseroid's potential derivatives in comparison with previously published methodologies that make use of integral kernels expressed in spherical coordinates. Based on this idea, optimized formulas for the gravitational potential of a homogeneous tesseroid and its derivatives up to second-order are elaborated in this paper. These new formulas do not suffer from the polar singularity of the spherical coordinate system and can, therefore, be evaluated for any position on the globe. Since integrals over tesseroid volumes cannot be solved analytically, the numerical evaluation is achieved by means of expanding the integral kernel in a Taylor series with fourth-order error in the spatial coordinates of the integration point. As the structure of the Cartesian integral kernel is substantially simplified, Taylor coefficients can be represented in a compact and computationally attractive form. 
Thus, the use of the optimized tesseroid formulas particularly benefits from a significant decrease in computation time by about 45 % compared to previously used algorithms. In order to show the computational efficiency and to validate the mathematical derivations, the new tesseroid formulas are applied to two realistic numerical experiments and are compared to previously published tesseroid methods and the conventional prism approach.
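    The point-mass approximation mentioned above is the simplest elementary body, and its Cartesian form shows why Cartesian kernels are convenient; the mass and coordinates below are illustrative (roughly Earth-like), and G is the usual gravitational constant.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def point_mass_potential(m, p, q):
    """Newtonian potential V = G*m/l at computation point p, mass at q."""
    l = math.dist(p, q)
    return G * m / l

def point_mass_attraction(m, p, q):
    """Gradient of V: attraction vector at p, pointing toward the mass."""
    l = math.dist(p, q)
    return tuple(G * m * (qi - pi) / l ** 3 for pi, qi in zip(p, q))

# Illustrative: Earth's mass concentrated at the origin, evaluated at one
# Earth radius along the z-axis.
V = point_mass_potential(5.972e24, (0.0, 0.0, 6_371_000.0), (0.0, 0.0, 0.0))
g = point_mass_attraction(5.972e24, (0.0, 0.0, 6_371_000.0), (0.0, 0.0, 0.0))
```

    A tesseroid generalizes this by integrating the same Cartesian kernel over a volume bounded by spherical coordinates, which is where the Taylor expansion of the kernel enters.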

  14. SU-F-T-284: The Effect of Linear Accelerator Output Variation On the Quality of Patient Specific Rapid Arc Verification Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, G; Cao, F; Szpala, S

    2016-06-15

    Purpose: The aim of the current study is to investigate the effect of machine output variation on the delivery of RapidArc verification plans. Methods: Three verification plans were generated using the Eclipse™ treatment planning system (V11.031) with a plan normalization value of 100.0%. These plans were delivered on the linear accelerators using the ArcCHECK device, with a machine output of 1.000 cGy/MU at the calibration point. These planned and delivered dose distributions were used as reference plans. Additional plans were created in Eclipse with normalization values ranging from 92.80% to 102% to mimic machine outputs ranging from 1.072 to 0.980 cGy/MU at the calibration point. These plans were compared against the reference plans using gamma indices (3%, 3mm) and (2%, 2mm). Calculated gammas were studied for their dependence on machine output. Plans were considered passed if 90% of the points satisfied the defined gamma criteria. Results: The gamma index (3%, 3mm) was insensitive to output fluctuation within the output tolerance level (2% of calibration), and showed failures when the machine output deviation was ≥3%. Gamma (2%, 2mm) was found to be more sensitive to the output variation than gamma (3%, 3mm), and showed failures when the output deviation was ≥1.7%. The variation of the gamma indices with output variability also showed dependence upon the plan parameters (e.g., MLC movement and gantry rotation). The variation of the percentage of points passing the gamma criteria with output variation followed a non-linear decrease beyond the output tolerance level. Conclusion: Data from the limited plans and output conditions showed that gamma (2%, 2mm) is more sensitive to output fluctuations than gamma (3%, 3mm). Work in progress, including detailed data from a large number of plans and a wide range of output conditions, may be able to establish the quantitative dependence of the gammas on machine output, and hence the effect on the quality of delivered RapidArc plans.
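    The gamma comparison underlying these results can be sketched for a 1-D profile (a simplified, grid-restricted form of the Low et al. gamma index); the dose profiles and grid spacing below are invented for illustration.

```python
import math

def gamma_pass_rate(ref, meas, dx, dose_tol, dist_tol):
    """1-D gamma index on a common grid. ref, meas: dose samples with
    spacing dx (mm); dose_tol: % of max reference dose; dist_tol: mm.
    Returns the percentage of measured points with gamma <= 1."""
    d_max = max(ref)
    passed = 0
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dose_term = ((dm - dr) / (dose_tol / 100.0 * d_max)) ** 2
            dist_term = ((i - j) * dx / dist_tol) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        passed += best <= 1.0
    return 100.0 * passed / len(meas)

ref = [10.0, 50.0, 100.0, 50.0, 10.0]
meas = [10.5, 51.0, 99.0, 53.0, 10.2]  # one point 3% off the reference
rate_33 = gamma_pass_rate(ref, meas, dx=1.0, dose_tol=3.0, dist_tol=3.0)
rate_22 = gamma_pass_rate(ref, meas, dx=1.0, dose_tol=2.0, dist_tol=2.0)
```

    In this toy profile the (3%, 3mm) criterion passes every point while (2%, 2mm) fails the 3%-off point, mirroring the greater output sensitivity of the tighter criterion reported above.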

  15. Leptokurtic portfolio theory

    NASA Astrophysics Data System (ADS)

    Kitt, R.; Kalda, J.

    2006-03-01

    The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called Leptokurtic, because it minimises the effects from the “fat tails” of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risk in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the “noise kernel” — an ellipsoidal cloud of points in the space of asset returns. The size of the ellipsoid is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of the fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of the drawdown risk. As a result the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risk involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.
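
    A 1-D toy version of the noise-kernel split can be sketched as follows; the scalar simplification, the threshold value and the synthetic returns are illustrative assumptions, since the paper works with an ellipsoidal kernel in the multi-asset return space:

```python
import random
import statistics

def split_by_noise_kernel(returns, threshold):
    """Partition scalar return observations by a symmetric 'noise kernel'.

    Returns with |r| <= threshold are treated as Gaussian fluctuations,
    the rest as drawdown/tail events.
    """
    inside = [r for r in returns if abs(r) <= threshold]
    outside = [r for r in returns if abs(r) > threshold]
    return inside, outside

random.seed(1)
# Hypothetical daily returns: a Gaussian core plus a few large tail events.
rets = [random.gauss(0, 0.01) for _ in range(500)] + [-0.08, -0.12, 0.09]
core, tails = split_by_noise_kernel(rets, threshold=0.03)
fluctuation_risk = statistics.pstdev(core)   # risk from the Gaussian core
drawdown_risk = statistics.pstdev(tails)     # risk from the tail events
```

    Minimising the dispersion of the `tails` set, rather than of all returns, is the 1-D analogue of the paper's "minimise variance outside the noise kernel" objective.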

  16. ATP-gamma-S shifts the operating point of outer hair cell transduction towards scala tympani.

    PubMed

    Bobbin, Richard P; Salt, Alec N

    2005-07-01

    ATP receptor agonists and antagonists alter cochlear mechanics as measured by changes in distortion product otoacoustic emissions (DPOAE). Some of the effects on DPOAEs are consistent with the hypothesis that ATP affects mechano-electrical transduction and the operating point of the outer hair cells (OHCs). This hypothesis was tested by monitoring the effect of ATP-gamma-S on the operating point of the OHCs. Guinea pigs anesthetized with urethane and with sectioned middle ear muscles were used. The cochlear microphonic (CM) was recorded differentially (scala vestibuli referenced to scala tympani) across the basal turn before and after perfusion (20 min) of the perilymph compartment with artificial perilymph (AP) and ATP-gamma-S dissolved in AP. The operating point was derived from the CM recorded in response to low-frequency (200 Hz) tones at high levels (106, 112 and 118 dB SPL). The analysis procedure used a Boltzmann function to simulate the CM waveform, and the Boltzmann parameters were adjusted to best-fit the calculated waveform to the CM. Compared to the initial perfusion with AP, ATP-gamma-S (333 microM) enhanced peak clipping of the positive peak of the CM (which occurs during organ of Corti displacements towards scala tympani), in keeping with an ATP-induced displacement of the transducer towards scala tympani. CM waveform analysis quantified the degree of displacement and showed that the changes were consistent with the stimulus being centered on a different region of the transducer curve. The change of operating point meant that the stimulus was applied to a region of the transducer curve where there was greater saturation of the output on excursions towards scala tympani and less saturation towards scala vestibuli. A significant degree of recovery of the operating point was observed after washing with AP.
Dose response curves generated by perfusing ATP-gamma-S (333 microM) in a cumulative manner yielded an EC(50) of 19.8 microM. The ATP antagonist PPADS (0.1 mM) failed to block the effect of ATP-gamma-S on operating point, suggesting the response was due to activation of metabotropic and not ionotropic ATP receptors. Multiple perfusions of AP had no significant effect (118 and 112 dB) or moved the operating point slightly (106 dB) in the direction opposite of ATP-gamma-S. Results are consistent with an ATP-gamma-S induced transducer change comparable to a static movement of the organ of Corti or reticular lamina towards scala tympani.
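
    The waveform-fitting idea can be sketched as follows, assuming a first-order Boltzmann nonlinearity and a brute-force grid search rather than the authors' actual fitting procedure; all parameter values are invented for illustration:

```python
import math

def boltzmann(x, x0, s):
    """First-order Boltzmann transducer curve (saturating nonlinearity)."""
    return 1.0 / (1.0 + math.exp((x0 - x) / s))

def simulate_cm(x0, s, amplitude=1.0, n=200):
    """Pass one cycle of a sinusoidal (200 Hz-like) stimulus through the curve."""
    return [boltzmann(amplitude * math.sin(2 * math.pi * i / n), x0, s)
            for i in range(n)]

def fit_operating_point(cm, s, amplitude=1.0, n=200):
    """Recover the operating point x0 by brute-force least squares."""
    best_x0, best_sse = None, float("inf")
    for k in range(-100, 101):
        x0 = k / 100.0
        sse = sum((m - c) ** 2
                  for m, c in zip(simulate_cm(x0, s, amplitude, n), cm))
        if sse < best_sse:
            best_x0, best_sse = x0, sse
    return best_x0

# Hypothetical waveform: transducer shifted towards scala tympani (x0 = 0.25),
# so the positive peak of the simulated CM clips harder than the negative one.
measured = simulate_cm(x0=0.25, s=0.15)
fitted = fit_operating_point(measured, s=0.15)
```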

  17. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    NASA Astrophysics Data System (ADS)

    Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.

    2017-01-01

    Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of as-yet undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.
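
    The broken power-law form reported for dN/dS can be written as a two-slope function that is continuous at the break; the index and normalisation values used below are placeholders, not the paper's fitted parameters:

```python
def dnds_broken_power_law(s, s_break, n1, n2, norm=1.0):
    """Broken power-law differential source-count distribution dN/dS.

    s:        source flux.
    s_break:  break flux; the two branches join continuously there.
    n1, n2:   power-law indices above and below the break.
    norm:     dN/dS at the break flux.
    """
    if s >= s_break:
        return norm * (s / s_break) ** (-n1)
    return norm * (s / s_break) ** (-n2)

# Placeholder parameters, purely for illustration.
faint = dnds_broken_power_law(0.5, 1.0, 2.5, 1.8)
bright = dnds_broken_power_law(2.0, 1.0, 2.5, 1.8)
```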

  18. Benchmark Testing of the Largest Titanium Aluminide Sheet Subelement Conducted

    NASA Technical Reports Server (NTRS)

    Bartolotta, Paul A.; Krause, David L.

    2000-01-01

    To evaluate wrought titanium aluminide (gamma TiAl) as a viable candidate material for the High-Speed Civil Transport (HSCT) exhaust nozzle, an international team led by the NASA Glenn Research Center at Lewis Field successfully fabricated and tested the largest gamma TiAl sheet structure ever manufactured. The gamma TiAl sheet structure, a 56-percent subscale divergent flap subelement, was fabricated for benchmark testing in three-point bending. Overall, the subelement was 84-cm (33-in.) long by 13-cm (5-in.) wide by 8-cm (3-in.) deep. Incorporated into the subelement were features that might be used in the fabrication of a full-scale divergent flap. These features include the use of: (1) gamma TiAl shear clips to join together sections of corrugations, (2) multiple gamma TiAl face sheets, (3) double hot-formed gamma TiAl corrugations, and (4) brazed joints. The structural integrity of the gamma TiAl sheet subelement was evaluated by conducting a room-temperature three-point static bend test.

  19. Universal energy spectrum from point sources

    NASA Technical Reports Server (NTRS)

    Tomozawa, Yukio

    1992-01-01

    The suggestion is made that the energy spectrum from point sources such as galactic black hole candidates (GBHC) and active galactic nuclei (AGN) is universal on average, irrespective of the species of the emitted particles: photons, nucleons, or others. The similarity between the observed energy spectra of cosmic rays, gamma-rays, and X-rays is discussed; the existing data for gamma-rays and X-rays seem to support the prediction. Data expected from the Gamma Ray Observatory will provide a further test.

  20. Retrieval of the aerosol size distribution in the complex anomalous diffraction approximation

    NASA Astrophysics Data System (ADS)

    Franssens, Ghislain R.

    This contribution reports some recently achieved results in aerosol size distribution retrieval in the complex anomalous diffraction approximation (ADA) to Mie scattering theory. This approximation is valid for spherical particles that are large compared to the wavelength and have a refractive index close to 1. The ADA kernel is compared with the exact Mie kernel. Despite being a simple approximation, the ADA seems to have some practical value for the retrieval of the larger modes of tropospheric and lower stratospheric aerosols. The ADA has the advantage over Mie theory that an analytic inversion of the associated Fredholm integral equation becomes possible. In addition, spectral inversion in the ADA can be formulated as a well-posed problem. In this way, a new inverse formula was obtained, which allows the direct computation of the size distribution as an integral over the spectral extinction function. This formula is valid for particles that both scatter and absorb light and it also takes the spectral dispersion of the refractive index into account. Some details of the numerical implementation of the inverse formula are illustrated using a modified gamma test distribution. Special attention is given to the integration of spectrally truncated discrete extinction data with errors.
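
    For a non-absorbing sphere, the forward ADA extinction kernel has van de Hulst's closed form Q_ext(ρ) = 2 − (4/ρ) sin ρ + (4/ρ²)(1 − cos ρ), with phase-shift parameter ρ = 4πr(m − 1)/λ. A direct transcription of that classic result (not the author's inverse formula, which is not reproduced here):

```python
import math

def q_ext_ada(radius, wavelength, m):
    """Anomalous-diffraction extinction efficiency for a non-absorbing
    sphere of radius `radius`, at wavelength `wavelength`, with real
    refractive index m close to 1 (van de Hulst)."""
    rho = 4.0 * math.pi * radius * (m - 1.0) / wavelength  # phase-shift parameter
    if rho == 0.0:
        return 0.0
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho ** 2) * (1.0 - math.cos(rho))

# Example: a ~1 micron radius droplet in visible light (illustrative values).
q_droplet = q_ext_ada(1.0, 0.55, 1.33)
```

    The function oscillates with ρ and tends to the geometric-optics limit Q_ext → 2 for large particles, and to 0 as ρ → 0.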

  1. Tcl as a Software Environment for a TCS

    NASA Astrophysics Data System (ADS)

    Terrett, David L.

    2002-12-01

    This paper describes how the Tcl scripting language and C API has been used as the software environment for a telescope pointing kernel so that new pointing algorithms and software architectures can be developed and tested without needing a real-time operating system or real-time software environment. It has enabled development to continue outside the framework of a specific telescope project while continuing to build a system that is sufficiently complete to be capable of controlling real hardware but expending minimum effort on replacing the services that would normally by provided by a real-time software environment. Tcl is used as a scripting language for configuring the system at startup and then as the command interface for controlling the running system; the Tcl C language API is used to provided a system independent interface to file and socket I/O and other operating system services. The pointing algorithms themselves are implemented as a set of C++ objects calling C library functions that implement the algorithms described in [2]. Although originally designed as a test and development environment, the system, running as a soft real-time process on Linux, has been used to test the SOAR mount control system and will be used as the pointing kernel of the SOAR telescope control system

  2. Infrared microspectroscopic imaging of plant tissues: spectral visualization of Triticum aestivum kernel and Arabidopsis leaf microstructure

    PubMed Central

    Warren, Frederick J; Perston, Benjamin B; Galindez-Najera, Silvia P; Edwards, Cathrina H; Powell, Prudence O; Mandalari, Giusy; Campbell, Grant M; Butterworth, Peter J; Ellis, Peter R

    2015-01-01

    Infrared microspectroscopy is a tool with potential for studies of the microstructure, chemical composition and functionality of plants at a subcellular level. Here we present the use of high-resolution benchtop infrared microspectroscopy to investigate the microstructure of Triticum aestivum L. (wheat) kernels and Arabidopsis leaves. Images of isolated wheat kernel tissues and whole wheat kernels following hydrothermal processing and simulated gastric and duodenal digestion were generated, as well as images of Arabidopsis leaves at different points during a diurnal cycle. Individual cells and cell walls were resolved, and large structures within cells, such as starch granules and protein bodies, were clearly identified. Contrast was provided by converting the hyperspectral image cubes into false-colour images using either principal component analysis (PCA) overlays or correlation analysis. The unsupervised PCA approach provided a clear view of the sample microstructure, whereas the correlation analysis was used to confirm the identity of different anatomical structures using the spectra from isolated components. It was then demonstrated that gelatinized and native starch within cells could be distinguished, and that the loss of starch during wheat digestion could be observed, as well as the accumulation of starch in leaves during a diurnal period. PMID:26400058
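
    The correlation-analysis contrast described above can be sketched as a Pearson correlation between each pixel spectrum of a hyperspectral cube and a reference spectrum from an isolated component; the tiny cube and the 'starch-like' spectra below are invented for illustration:

```python
import statistics

def correlation_map(cube, reference):
    """Pearson r between every pixel spectrum in a hyperspectral cube
    (rows x cols x bands, as nested lists) and a reference spectrum."""
    def pearson(a, b):
        ma, mb = statistics.mean(a), statistics.mean(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a)
               * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0
    return [[pearson(px, reference) for px in row] for row in cube]

# Hypothetical 2x2 cube with 4 spectral bands: one exact 'starch-like'
# pixel, one similar pixel, and two dissimilar background pixels.
starch = [0.1, 0.9, 0.8, 0.2]
other = [0.5, 0.4, 0.5, 0.6]
cube = [[starch, other], [other, [0.2, 1.0, 0.7, 0.1]]]
rmap = correlation_map(cube, reference=starch)
```

    Thresholding or colour-mapping `rmap` then yields the false-colour contrast: pixels whose spectra resemble the reference light up, others do not.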

  3. Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code

    NASA Astrophysics Data System (ADS)

    Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. It is based on a meshless formulation in configuration space with adapted cloud of computational points (CCP) capability and using the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution to the full wave equations in the cold-plasma model in tokamak geometry for ECRH and ICRH range of frequencies, and 6) the solution to the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on adaptive CCP to solve the wave equations and on implementing the non-local hot plasma dielectric response to the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.

  4. Factors affecting cadmium absorbed by pistachio kernel in calcareous soils, southeast of Iran.

    PubMed

    Shirani, H; Hosseinifard, S J; Hashemipour, H

    2018-03-01

    Cadmium (Cd), which does not have a biological role, is one of the most toxic heavy metals for organisms. This metal enters the environment through industrial processes and fertilizers. The main objective of this study was to determine the relationships between Cd absorbed by pistachio kernel and some soil physical and chemical characteristics, using modeling by stepwise regression and an Artificial Neural Network (ANN), in calcareous soils in the Rafsanjan region, southeast of Iran. For these purposes, 220 pistachio orchards were selected, and soil samples were taken from two depths of 0-40 and 40-80 cm. In addition, fruit and leaf samples from branches with and without fruit were taken at each sampling point. The results showed that the factors affecting Cd absorbed by pistachio kernel obtained by the regression method (pH and clay percent) were not interpretable, and considering the unsuitable values of the coefficient of determination (R 2 ) and Root Mean Square Error (RMSE), the model did not have sufficient validity. However, ANN modeling was highly accurate and reliable. Based on its results, soil available P and Zn and soil salinity were the most important factors affecting the concentration of Cd in pistachio kernel in the pistachio growing areas of Rafsanjan. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
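
    The buffered-subdomain step can be sketched as follows; the box-shaped subdomain, the Epanechnikov product kernel and the simplified normalisation are assumptions for illustration, not the paper's octree scheme:

```python
def points_for_subdomain(points, bounds, hs, ht):
    """Select the points a worker needs for one spatiotemporal subdomain.

    bounds = (xmin, xmax, ymin, ymax, tmin, tmax); the buffer extends each
    spatial edge by the spatial bandwidth hs and each temporal edge by the
    temporal bandwidth ht, so densities near subdomain borders are exact.
    """
    xmin, xmax, ymin, ymax, tmin, tmax = bounds
    return [(x, y, t) for (x, y, t) in points
            if xmin - hs <= x <= xmax + hs
            and ymin - hs <= y <= ymax + hs
            and tmin - ht <= t <= tmax + ht]

def stkde(px, py, pt, points, hs, ht):
    """Space-time kernel density with an Epanechnikov product kernel
    (normalising constants simplified in this sketch)."""
    total = 0.0
    for (x, y, t) in points:
        ds = ((px - x) ** 2 + (py - y) ** 2) / hs ** 2
        dt = ((pt - t) ** 2) / ht ** 2
        if ds < 1 and dt < 1:
            total += (1 - ds) * (1 - dt)
    return total / (len(points) * hs * hs * ht) if points else 0.0

# Hypothetical data: the second point lies just outside the subdomain but
# inside the buffer, so it is retained; the third is discarded.
pts = [(0.5, 0.5, 0.5), (1.05, 0.5, 0.5), (2.0, 0.5, 0.5)]
local = points_for_subdomain(pts, (0, 1, 0, 1, 0, 1), hs=0.1, ht=0.1)
density = stkde(0.5, 0.5, 0.5, local, hs=0.2, ht=0.2)
```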

  6. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy or efficiency problems, previous algorithm designers tended to adopt only one of them in their fast implementations, owing to the difficulty of combining them. Hence, no joint exploitation of these techniques had been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques — kernel truncation and best N-term approximation, together with the previous 2D box filtering, dimension promotion, and shiftability property — we propose a unified framework to transform a BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon one another's strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing the efficiency at running time.
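
    For contrast with the constant-time methods discussed above, the brute-force baseline whose cost grows with kernel size can be sketched in 1-D, assuming Gaussian spatial and range kernels:

```python
import math

def bilateral_1d(signal, sigma_s, sigma_r, radius):
    """Brute-force 1-D bilateral filter: O(n * kernel size), the baseline
    that the acceleration techniques in the paper aim to replace."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))            # spatial kernel
                 * math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2)))  # range kernel
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Edge-preserving behaviour on a clean step: the edge stays sharp because
# the range kernel suppresses contributions from across the discontinuity.
step = [0.0] * 10 + [1.0] * 10
filtered = bilateral_1d(step, sigma_s=2.0, sigma_r=0.1, radius=4)
```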

  7. Gamma-Ray Astronomy Across 6 Decades of Energy: Synergy between Fermi, IACTs, and HAWC

    NASA Technical Reports Server (NTRS)

    Hui, C. Michelle

    2017-01-01

    Keywords: Gamma Ray Observatories, Gamma-Ray Astrophysics, GeV-TeV Sky Survey, Galaxy, Galactic Plane, Source Distribution. The gamma-ray sky is currently well-monitored with good survey coverage. Many instruments from different wavebands/messengers (X rays, gamma rays, neutrinos, gravitational waves) are available for simultaneous observations. Both wide-field and pointing instruments are in development and coming online in the next decade (e.g. LIGO).

  8. Microstructural response to heat affected zone cracking of prewelding heat-treated Inconel 939 superalloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, M.A., E-mail: mgonzalez@comimsa.com.mx; Martinez, D.I., E-mail: dorairma@yahoo.com; Perez, A., E-mail: betinperez@hotmail.com

    2011-12-15

    The microstructural response to cracking in the heat-affected zone (HAZ) of a nickel-based IN 939 superalloy after prewelding heat treatments (PWHT) was investigated. The PWHT specimens showed two different microstructures: 1) spherical ordered γ′ precipitates (357-442 nm), with blocky MC and discrete M₂₃C₆ carbides dispersed within the coarse dendrites and in the interdendritic regions; and 2) ordered γ′ precipitates in 'ogdoadically' diced cube shapes and coarse MC carbides within the dendrites and in the interdendritic regions. After being tungsten inert gas (TIG) welded applying low heat input and welding speed and using a more ductile filler alloy, specimens with microstructures consisting of spherical γ′ precipitate particles and dispersed discrete MC carbides along the grain boundaries displayed considerably improved weldability, due to a strong reduction of the intergranular HAZ cracking associated with the liquation microfissuring phenomena. Highlights: (1) Homogeneous microstructures of γ′ spheroids and discrete MC carbides in Ni-base superalloys through preweld heat treatments. (2) γ′ spheroids and discrete MC carbides reduce the intergranular HAZ liquation and microfissuring of nickel-base superalloys. (3) Microstructures of γ′ spheroids and discrete blocky-type MC carbides are capable of relaxing the stress generated during weld cooling. (4) Low welding heat input, welding speeds and ductile filler alloys reduce the HAZ cracking susceptibility.

  9. Ford Motor Company NDE facility shielding design.

    PubMed

    Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H

    2005-01-01

    Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation doses for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations and locations where multiple sources (e.g. tube head leakage and various scatter fields) impacted doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP — A general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.
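
    For a single source behind a slab shield, the point-kernel technique used for the barrier calculations reduces to an attenuated inverse-square kernel with a buildup factor. A minimal sketch (flux only; the flux-to-exposure conversion factor and realistic source terms are omitted, and all numbers are hypothetical):

```python
import math

def point_kernel_flux(source_strength, mu, thickness, distance, buildup=1.0):
    """Point-kernel flux at a detector behind a slab shield:

        phi = B * S * exp(-mu * t) / (4 * pi * r**2)

    S in photons/s, mu in 1/cm, t and r in cm; B is the buildup factor
    that accounts for scattered radiation reaching the detector.
    """
    return (buildup * source_strength * math.exp(-mu * thickness)
            / (4.0 * math.pi * distance ** 2))

# Hypothetical numbers: a 1e6 photons/s source, 100 cm away, unshielded...
unshielded = point_kernel_flux(1e6, 0.0, 0.0, 100.0)
# ...and the same source behind 10 cm of a mu = 0.5 1/cm shield, B = 2.0.
shielded = point_kernel_flux(1e6, 0.5, 10.0, 100.0, buildup=2.0)
```

    Summing this kernel over many source points and ray segments is what turns the formula into a practical barrier-thickness tool; as the abstract notes, skyshine and multi-source scatter geometries fall outside this simple model.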

  10. Correction of measured Gamma-Knife output factors for angular dependence of diode detectors and PinPoint ionization chamber.

    PubMed

    Hršak, Hrvoje; Majer, Marija; Grego, Timor; Bibić, Juraj; Heinrich, Zdravko

    2014-12-01

    Dosimetry for Gamma-Knife requires detectors with high spatial resolution and minimal angular dependence of response. The angular dependence and end effect time for p-type silicon detectors (PTW Diode P and Diode E) and a PTW PinPoint ionization chamber were measured with Gamma-Knife beams. Weighted angular dependence correction factors were calculated for each detector. The Gamma-Knife output factors were corrected for angular dependence and end effect time. For the Gamma-Knife beam angle range of 84°-54°, Diode P shows a considerable angular dependence of 9% and 8% for the 18 mm and the 14, 8 and 4 mm collimators, respectively. For Diode E this dependence is about 4% for all collimators. The PinPoint ionization chamber shows an angular dependence of less than 3% for the 18, 14 and 8 mm helmets and 10% for the 4 mm collimator, due to the volumetric averaging effect in a small photon beam. Corrected output factors for the 14 mm helmet are in very good agreement (within ±0.3%) with published data and values recommended by the vendor (Elekta AB, Stockholm, Sweden). For the 8 mm collimator the diodes are still in good agreement with the recommended values (within ±0.6%), while PinPoint gives a 3% lower value. For the 4 mm helmet, Diodes P and E show an over-response of 2.8% and 1.8%, respectively. For the PinPoint chamber, the output factor of the 4 mm collimator is 25% lower than the Elekta value, which is generally not a consequence of angular dependence, but of the volumetric averaging effect and the lack of lateral electronic equilibrium. Diodes P and E represent a good choice for Gamma-Knife dosimetry. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  11. A novel gamma-fitting statistical method for anti-drug antibody assays to establish assay cut points for data with non-normal distribution.

    PubMed

    Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena

    2010-01-31

    In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal theory based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3%, with absolute bias decreasing with the shape parameter. These results were consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase.
Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in the case of screening immunogenicity assays will not meet the minimum 5% false positive target as proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
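
    The inflation of false positive rates under a normal-theory cut point can be illustrated by simulation. This sketch uses a nonparametric percentile cut rather than the paper's 3-parameter gamma fit, and the gamma shape, offset and sample size are arbitrary choices:

```python
import random
import statistics

random.seed(7)
# Hypothetical drug-naive screening signals: positively skewed, gamma-like
# (shape 2, scale 1, plus an arbitrary assay background offset of 0.5).
signals = [random.gammavariate(2.0, 1.0) + 0.5 for _ in range(5000)]

# Normal-theory cut point targeting a 5% false positive rate.
mean, sd = statistics.mean(signals), statistics.pstdev(signals)
normal_cut = mean + 1.645 * sd

# Distribution-appropriate cut point (here: nonparametric 95th percentile).
percentile_cut = sorted(signals)[int(0.95 * len(signals))]

fp_normal = sum(s > normal_cut for s in signals) / len(signals)
fp_percentile = sum(s > percentile_cut for s in signals) / len(signals)
```

    Because a gamma with shape 2 is strongly right-skewed, its 95th percentile lies above mean + 1.645 SD, so the normal-theory cut flags roughly 7% of drug-naive samples instead of the targeted 5%, in line with the positive biases reported in the abstract.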

  12. Intelligent Design of Metal Oxide Gas Sensor Arrays Using Reciprocal Kernel Support Vector Regression

    NASA Astrophysics Data System (ADS)

    Dougherty, Andrew W.

    Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases, and the electrical nature of their sensing mechanism, makes them particularly attractive in solid state devices. The high-temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts where exhaust temperatures can be high. However, problems do exist with metal oxide sensors. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface. This makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality to the system. Also, the sensors tend to suffer from long-term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long-term drift from the sensor response. It will also provide an efficient means for determining the orthogonality of the sensors to determine whether they are useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal kernel support vector machine. The reciprocal kernel is shown to be simpler and to perform better than an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen as it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data.
The reciprocal kernel is shown to be effective in modeling the sensor responses in the time, gas and temperature domains, and the dual representation of the support vector regression solution is shown to provide insight into the sensor's sensitivity and potential orthogonality. Finally, the dual weights of the support vector regression solution to the sensor's response are suggested as a fitness function for a genetic algorithm, or some other method for efficiently searching large parameter spaces.
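
    The least squares support vector regression dual system can be sketched as follows. The "reciprocal" kernel form below is a hypothetical stand-in (the thesis kernel's exact expression is not reproduced here), as are the sensor data and hyperparameters; only the LS-SVR linear system itself is standard:

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kernel(x, z, c=2.0):
    """Hypothetical stand-in 'reciprocal' kernel, k = 1/(1 + |x - z|/c)."""
    return 1.0 / (1.0 + abs(x - z) / c)

def lssvr_fit(xs, ys, gamma=100.0):
    """LS-SVR dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(xs)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] * (n + 1)
    for i in range(n):
        A[0][i + 1] = A[i + 1][0] = 1.0
        rhs[i + 1] = ys[i]
        for j in range(n):
            A[i + 1][j + 1] = kernel(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
    sol = solve(A, rhs)
    return sol[0], sol[1:]            # bias b, dual weights alpha

def lssvr_predict(x, xs, b, alpha):
    return b + sum(a * kernel(x, xi) for a, xi in zip(alpha, xs))

# Sparse, hypothetical saturating sensor response (5 training points only).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.8, 0.9, 0.95, 1.0]
b, alpha = lssvr_fit(xs, ys)
```

    The dual weights `alpha` are the quantities the thesis proposes to inspect for sensitivity and orthogonality, since every training point carries a nonzero weight in the LS-SVR solution.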

  13. Graviton 1-loop partition function for 3-dimensional massive gravity

    NASA Astrophysics Data System (ADS)

    Gaberdiel, Matthias R.; Grumiller, Daniel; Vassilevich, Dmitri

    2010-11-01

    The graviton 1-loop partition function in Euclidean topologically massive gravity (TMG) is calculated using heat kernel techniques. The partition function does not factorize holomorphically, and at the chiral point it has the structure expected from a logarithmic conformal field theory. This gives strong evidence for the proposal that the dual conformal field theory to TMG at the chiral point is indeed logarithmic. We also generalize our results to new massive gravity.

  14. EGRET Diffuse Gamma Ray Maps Between 30 MeV and 10 GeV

    NASA Technical Reports Server (NTRS)

    Cillis, A. N.; Hartman, R. C.

    2004-01-01

    This paper presents all-sky maps of diffuse gamma radiation in various energy ranges between 30 MeV and 10 GeV, based on data collected by the EGRET instrument on the Compton Gamma Ray Observatory. Although the maps can be used for a variety of applications, the immediate goal is the generation of diffuse gamma-ray maps which can be used as a diffuse background/foreground for point source analysis of the data to be obtained from new high-energy gamma-ray missions like GLAST and AGILE. To generate the diffuse gamma maps from the raw EGRET maps, the point sources in the Third EGRET Catalog were subtracted out using the appropriate point spread function for each energy range. After that, smoothing was performed to minimize the effects of photon statistical noise. A smoothing length of 1 deg was used for the Galactic plane maps. For the all-sky maps, a procedure was used which resulted in a smoothing length roughly equivalent to 4 deg. The result of this work is 16 maps of different energy intervals for |b| ≤ 20 deg, and 32 all-sky maps, 16 in equatorial coordinates (J2000) and 16 in Galactic coordinates.

  16. GIS-based support vector machine modeling of earthquake-triggered landslide susceptibility in the Jianjiang River watershed, China

    NASA Astrophysics Data System (ADS)

    Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi

    2012-04-01

    Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake-triggered landslide susceptibility mapping for a section of the Jianjiang River watershed using Geographic Information System (GIS) software. The watershed was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative, though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis. Group 3, with 5000 randomly selected points in landslide polygons and 5000 randomly selected points along stable slopes, gave the best results, with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples and 3147 randomly selected points in regions of stable slope as negative training samples (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake-triggered landslide susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.
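    The kernel comparison above can be sketched with scikit-learn (assumed available). The two-ring data below is a synthetic stand-in for the landslide/stable-slope samples, not the Wenchuan dataset, but it shows the same qualitative ranking: a nonlinear kernel is needed when the classes are not linearly separable.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in for positive/negative training samples: two concentric
# ring-shaped classes that no linear boundary can separate.
n = 400
r = np.r_[rng.uniform(0.0, 1.0, n), rng.uniform(1.5, 2.5, n)]
theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
X = np.c_[r * np.cos(theta), r * np.sin(theta)]
y = np.r_[np.zeros(n), np.ones(n)]

# Train on every other point, score on the held-out half, for each of the
# four kernel types compared in the study.
scores = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel, gamma="scale").fit(X[::2], y[::2])
    scores[kernel] = clf.score(X[1::2], y[1::2])
```

    On data like this the linear kernel cannot do much better than chance, while the radial basis function separates the classes almost perfectly.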

  17. Accelerator test of the coded aperture mask technique for gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.

    1982-01-01

    A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.
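    The coded-aperture principle behind the instrument above can be illustrated in one dimension (the mask pattern, count level, and shift below are invented for illustration): the detector records the mask's shadow displaced by an amount set by the source direction, and cross-correlating the record with the mask pattern recovers that displacement.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D toy of coded-aperture imaging (not the flight geometry): a random
# open/closed mask casts a shifted shadowgram of a point source.
n = 64
mask = rng.integers(0, 2, n).astype(float)   # open fraction ~0.5
true_shift = 17
shadow = np.roll(mask, true_shift) * 200.0   # expected counts in open cells
shadow = rng.poisson(shadow).astype(float)   # Poisson counting noise

# Balanced correlation: subtracting the mask mean lets closed cells vote
# against wrong shifts; the correlation peak marks the source position.
corr = [np.dot(shadow, np.roll(mask - mask.mean(), s)) for s in range(n)]
est_shift = int(np.argmax(corr))
```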

  18. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    NASA Astrophysics Data System (ADS)

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    2016-05-01

    In a previous paper, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10^-26 cm^3/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  19. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    DOE PAGES

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    2016-05-23

    In a previous study, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10^-26 cm^3/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  20. Carbonic anhydrase III regulates peroxisome proliferator-activated receptor-γ2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitterberger, Maria C.; Kim, Geumsoo; Rostek, Ursula

    2012-05-01

    Carbonic anhydrase III (CAIII) is an isoenzyme of the CA family. Because of its low specific anhydrase activity, physiological functions in addition to hydrating CO2 have been proposed. CAIII expression is highly induced in adipogenesis and CAIII is the most abundant protein in adipose tissues. The function of CAIII in both preadipocytes and adipocytes is, however, unknown. In the present study we demonstrate that adipogenesis is greatly increased in mouse embryonic fibroblasts (MEFs) from CAIII knockout (KO) mice, as demonstrated by a greater than 10-fold increase in the induction of fatty acid-binding protein-4 (FABP4) and increased triglyceride formation in CAIII−/− MEFs compared with CAIII+/+ cells. To address the underlying mechanism, we investigated the expression of the two adipogenic key regulators, peroxisome proliferator-activated receptor-γ2 (PPARγ2) and CCAAT/enhancer binding protein-α. We found a considerable (approximately 1000-fold) increase in the PPARγ2 expression in the CAIII−/− MEFs. Furthermore, RNAi-mediated knockdown of endogenous CAIII in NIH 3T3-L1 preadipocytes resulted in a significant increase in the induction of PPARγ2 and FABP4. When both CAIII and PPARγ2 were knocked down, FABP4 was not induced. We conclude that down-regulation of CAIII in preadipocytes enhances adipogenesis and that CAIII is a regulator of adipogenic differentiation which acts at the level of PPARγ2 gene expression. -- Highlights: • We discover a novel function of carbonic anhydrase III (CAIII). • We show that CAIII is a regulator of adipogenesis. • We demonstrate that CAIII acts at the level of PPARγ2 gene expression. • Our data contribute to a better understanding of the role of CAIII in fat tissue.

  1. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    In a previous study, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10^-26 cm^3/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  2. Southern Analysis of Genomic Alterations in Gamma-Ray-Induced Aprt- Hamster Cell Mutants

    PubMed Central

    Grosovsky, Andrew J.; Drobetsky, Elliot A.; deJong, Pieter J.; Glickman, Barry W.

    1986-01-01

    The role of genomic alterations in mutagenesis induced by ionizing radiation has been the subject of considerable speculation. By Southern blotting analysis we show here that 9 of 55 (approximately 1/6) gamma-ray-induced mutants at the adenine phosphoribosyl transferase (aprt) locus of Chinese hamster ovary (CHO) cells have a detectable genomic rearrangement. These fall into two classes: intragenic deletions and chromosomal rearrangements. In contrast, no major genomic alterations were detected among 67 spontaneous mutants, although two restriction site loss events were observed. Three gamma-ray-induced mutants were found to be intragenic deletions; all may have identical breakpoints. The remaining six gamma-ray-induced mutants demonstrating a genomic alteration appear to be the result of chromosomal rearrangements, possibly translocation or inversion events. None of the remaining gamma-ray-induced mutants showed any observable alteration in blotting pattern, indicating a substantial role for point mutation in gamma-ray-induced mutagenesis at the aprt locus. PMID:3013724

  3. P- and S-wave Receiver Function Imaging with Scattering Kernels

    NASA Astrophysics Data System (ADS)

    Hansen, S. M.; Schmandt, B.

    2017-12-01

    Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numeric simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources resulting in a global scale forward problem. Various approximation strategies have been proposed to reduce the computational burden such as hybrid methods that embed a heterogeneous regional scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels which are typically used for full waveform inversion. The forward problem is approximated using ray theory yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. 
This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).

  4. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    NASA Astrophysics Data System (ADS)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, the convergence towards the surface wave Green's functions is obtained with the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both X and Y directions (30 m), an arrangement known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
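    The correlation idea can be illustrated with a toy two-receiver example (the sampling rate, travel times, and wavelet below are invented, not the survey parameters): for a common impulsive source, the cross-correlation of the two records peaks at the differential travel time, which is the basis of retrieving the inter-receiver Green's function.

```python
import numpy as np

fs = 1000.0                       # Hz, hypothetical sampling rate
n = 2048
t1, t2 = 0.200, 0.350             # s, travel times to receivers 1 and 2
wavelet = np.exp(-0.5 * (np.arange(-50, 51) / 8.0) ** 2)  # Gaussian pulse

def record(t_arrival):
    # Synthetic trace: the source wavelet arriving at t_arrival.
    trace = np.zeros(n)
    i = int(round(t_arrival * fs))
    trace[i - 50:i + 51] = wavelet
    return trace

u1, u2 = record(t1), record(t2)
xcorr = np.correlate(u2, u1, mode="full")   # lags -(n-1) .. (n-1)
lag = int(np.argmax(xcorr)) - (n - 1)       # peak lag, in samples
# lag / fs recovers the differential travel time t2 - t1
```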

  5. Mycophenolic acid induces ATP-binding cassette transporter A1 (ABCA1) expression through the PPARγ-LXRα-ABCA1 pathway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Yanni; Lai, Fangfang; Xu, Yang

    2011-11-04

    Highlights: • Using an ABCA1p-LUC HepG2 cell line, we found that MPA upregulated ABCA1 expression. • MPA induced ABCA1 and LXRα protein expression in HepG2 cells. • PPARγ antagonist GW9662 markedly inhibited MPA-induced ABCA1 and LXRα protein expression. • The effect of MPA upregulating ABCA1 was due mainly to activation of the PPARγ-LXRα-ABCA1 pathway. -- Abstract: ATP-binding cassette transporter A1 (ABCA1) promotes cholesterol and phospholipid efflux from cells to lipid-poor apolipoprotein A-I and plays an important role in atherosclerosis. In a previous study, we developed a high-throughput screening method using an ABCA1p-LUC HepG2 cell line to find upregulators of ABCA1. Using this method in the present study, we found that mycophenolic acid (MPA) upregulated ABCA1 expression (EC50 = 0.09 µM). MPA upregulation of ABCA1 expression was confirmed by real-time quantitative reverse transcription-PCR and Western blot analysis in HepG2 cells. Previous work has indicated that MPA is a potent agonist of peroxisome proliferator-activated receptor gamma (PPARγ; EC50 = 5.2-9.3 µM). Liver X receptor α (LXRα) is a target gene of PPARγ and may directly regulate ABCA1 expression. Western blot analysis showed that MPA induced LXRα protein expression in HepG2 cells. Addition of the PPARγ antagonist GW9662 markedly inhibited MPA-induced ABCA1 and LXRα protein expression. These data suggest that MPA increased ABCA1 expression mainly through activation of PPARγ. Thus, the effects of MPA on upregulation of ABCA1 expression were due mainly to activation of the PPARγ-LXRα-ABCA1 signaling pathway. This is the first report that the antiatherosclerosis activity of MPA is due to this mechanism.

  6. STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu

    2011-09-10

    An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
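    The intuition behind the photons-in-pixels statistic can be shown with a toy simulation (all numbers below are invented): two skies with the same total expected flux, one built from many faint sources and one from a few bright ones, produce visibly different pixel-count histograms, so the histogram constrains the sub-threshold source counts.

```python
import numpy as np

rng = np.random.default_rng(5)

npix = 100_000
total_flux = 200_000.0            # expected photons over the whole map

def sky(n_sources):
    # Scatter n_sources of equal flux onto random pixels, then draw
    # Poisson photon counts from the resulting per-pixel intensity.
    flux_per_src = total_flux / n_sources
    pix = rng.integers(0, npix, n_sources)
    lam = np.bincount(pix, weights=np.full(n_sources, flux_per_src),
                      minlength=npix)
    return rng.poisson(lam)

faint = sky(1_000_000)            # many faint sources: nearly Poisson sky
bright = sky(200)                 # few bright sources: heavy-tailed pixels
```

    Both skies have the same mean counts per pixel, but the bright-source sky has a far heavier tail, which is the signature the analysis exploits.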

  7. Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear propelled vehicle. The analytical techniques employed include cross section data, preparation, one and two dimensional discrete ordinates transport, point kernel, and single scatter methods.
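    The point kernel technique listed above reduces, for each source point, to the uncollided-flux kernel scaled by a buildup factor. A minimal sketch follows (the attenuation coefficient and buildup value are illustrative placeholders, not data-library values):

```python
import math

def point_kernel_flux(S, mu, r, buildup=1.0):
    """Point-kernel estimate for an isotropic point source:
    phi = B * S * exp(-mu * r) / (4 * pi * r**2)
    S        source strength (photons/s)
    mu       linear attenuation coefficient (1/cm)
    r        source-to-detector distance (cm)
    buildup  buildup factor B accounting for scattered photons."""
    return buildup * S * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

# Hypothetical numbers: 1e9 photons/s, mu = 0.06 /cm, r = 100 cm, B = 1.3.
phi = point_kernel_flux(1e9, 0.06, 100.0, buildup=1.3)
```

    A full point-kernel code integrates this kernel over a discretized source volume, with mu*r replaced by the sum of optical thicknesses along the ray through each shield region.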

  8. Virtual reality based adaptive dose assessment method for arbitrary geometries in nuclear facility decommissioning.

    PubMed

    Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun

    2018-05-17

    This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
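    The Geometric-Progression buildup-factor fit mentioned above has the standard ANSI/ANS-6.4.3 functional form; a sketch with placeholder coefficients follows (real G-P coefficients b, c, a, X_K, d are tabulated per material and photon energy, and the values in the comment are not from any compilation):

```python
import math

def gp_buildup(x, b, c, a, X_K, d):
    """Geometric-progression (G-P) buildup factor fit (ANSI/ANS-6.4.3 form)
    for a shield thickness x > 0 in mean free paths:
      K(x) = c*x**a + d * (tanh(x/X_K - 2) - tanh(-2)) / (1 - tanh(-2))
      B(x) = 1 + (b - 1) * (K**x - 1) / (K - 1)   for K != 1
      B(x) = 1 + (b - 1) * x                      for K == 1"""
    K = c * x ** a + d * (math.tanh(x / X_K - 2.0) - math.tanh(-2.0)) \
                       / (1.0 - math.tanh(-2.0))
    if abs(K - 1.0) < 1e-12:
        return 1.0 + (b - 1.0) * x
    return 1.0 + (b - 1.0) * (K ** x - 1.0) / (K - 1.0)

# Placeholder coefficients, 5 mean free paths deep:
B = gp_buildup(5.0, 2.0, 1.2, 0.1, 15.0, 0.02)
```

    The fit tends to 1 as the thickness goes to zero and grows with depth, which is the qualitative behavior any buildup tabulation must reproduce.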

  9. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2017-07-01

    This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform, however two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
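    The continuous-space voting step can be sketched in a single parameter dimension (the radius votes below are a hypothetical mixture, not laser-scanning data, and scipy is assumed): a kernel density estimator with data-driven bandwidth replaces the discretized Hough accumulator, and its mode gives the parameter estimate.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical votes for a cylinder radius (metres): point pairs on the stem
# vote near the true radius 0.25 m, plus uniform clutter from vegetation.
votes = np.r_[rng.normal(0.25, 0.01, 300), rng.uniform(0.05, 0.60, 200)]

kde = gaussian_kde(votes)          # automatic (Scott) bandwidth selection
grid = np.linspace(0.05, 0.60, 551)
density = kde(grid)
radius_hat = grid[np.argmax(density)]   # continuous-space mode, no binning
```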

  10. Control of Early Flame Kernel Growth by Multi-Wavelength Laser Pulses for Enhanced Ignition

    DOE PAGES

    Dumitrache, Ciprian; VanOsdol, Rachel; Limbach, Christopher M.; ...

    2017-08-31

    The present contribution examines the impact of plasma dynamics and plasma-driven fluid dynamics on the flame growth of laser ignited mixtures and shows that a new dual-pulse scheme can be used to control the kernel formation process in ways that extend the lean ignition limit. We do this by performing a comparative study between (conventional) single-pulse laser ignition (λ = 1064 nm) and a novel dual-pulse method based on combining an ultraviolet (UV) pre-ionization pulse (λ = 266 nm) with an overlapped near-infrared (NIR) energy addition pulse (λ = 1064 nm). We employ OH* chemiluminescence to visualize the evolution of the early flame kernel. For single-pulse laser ignition at lean conditions, the flame kernel separates through third lobe detachment, corresponding to high strain rates that extinguish the flame. In this work, we investigate the capabilities of the dual-pulse to control the plasma-driven fluid dynamics by adjusting the axial offset of the two focal points. In particular, we find there exists a beam waist offset whereby the resulting vorticity suppresses formation of the third lobe, consequently reducing flame stretch. With this approach, we demonstrate that the dual-pulse method enables reduced flame speeds (at early times), an extended lean limit, increased combustion efficiency, and decreased laser energy requirements.

  11. Infrared microspectroscopic imaging of plant tissues: spectral visualization of Triticum aestivum kernel and Arabidopsis leaf microstructure.

    PubMed

    Warren, Frederick J; Perston, Benjamin B; Galindez-Najera, Silvia P; Edwards, Cathrina H; Powell, Prudence O; Mandalari, Giusy; Campbell, Grant M; Butterworth, Peter J; Ellis, Peter R

    2015-11-01

    Infrared microspectroscopy is a tool with potential for studies of the microstructure, chemical composition and functionality of plants at a subcellular level. Here we present the use of high-resolution bench top-based infrared microspectroscopy to investigate the microstructure of Triticum aestivum L. (wheat) kernels and Arabidopsis leaves. Images of isolated wheat kernel tissues and whole wheat kernels following hydrothermal processing and simulated gastric and duodenal digestion were generated, as well as images of Arabidopsis leaves at different points during a diurnal cycle. Individual cells and cell walls were resolved, and large structures within cells, such as starch granules and protein bodies, were clearly identified. Contrast was provided by converting the hyperspectral image cubes into false-colour images using either principal component analysis (PCA) overlays or by correlation analysis. The unsupervised PCA approach provided a clear view of the sample microstructure, whereas the correlation analysis was used to confirm the identity of different anatomical structures using the spectra from isolated components. It was then demonstrated that gelatinized and native starch within cells could be distinguished, and that the loss of starch during wheat digestion could be observed, as well as the accumulation of starch in leaves during a diurnal period. © 2015 The Authors The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
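    The PCA false-colour step described above can be sketched on a synthetic hyperspectral cube (the cube dimensions and the two component spectra are invented, not wheat or Arabidopsis data): the first three principal-component scores of the pixel spectra become the R, G and B channels.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical cube: 32x32 pixels by 64 spectral channels, two "tissue"
# regions with different synthetic band shapes plus measurement noise.
h, w, bands = 32, 32, 64
spec_a = np.exp(-0.5 * ((np.arange(bands) - 20) / 5.0) ** 2)
spec_b = np.exp(-0.5 * ((np.arange(bands) - 45) / 7.0) ** 2)
cube = np.where(np.arange(w)[None, :, None] < 16, spec_a, spec_b)
cube = cube + 0.05 * rng.standard_normal((h, w, bands))

# Unsupervised PCA via SVD of the mean-centred pixel-by-band matrix;
# the first three PC scores are rescaled into an RGB false-colour image.
X = cube.reshape(-1, bands)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                              # (pixels, 3) PC scores
rgb = scores.reshape(h, w, 3)
rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min())   # scale to [0, 1]
```

    The first component carries the contrast between the two regions, so the two tissues come out in different colours without any labelled training data, which is the appeal of the unsupervised PCA overlay.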

  12. Control of Early Flame Kernel Growth by Multi-Wavelength Laser Pulses for Enhanced Ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitrache, Ciprian; VanOsdol, Rachel; Limbach, Christopher M.

    The present contribution examines the impact of plasma dynamics and plasma-driven fluid dynamics on the flame growth of laser ignited mixtures and shows that a new dual-pulse scheme can be used to control the kernel formation process in ways that extend the lean ignition limit. We do this by performing a comparative study between (conventional) single-pulse laser ignition (λ = 1064 nm) and a novel dual-pulse method based on combining an ultraviolet (UV) pre-ionization pulse (λ = 266 nm) with an overlapped near-infrared (NIR) energy addition pulse (λ = 1064 nm). We employ OH* chemiluminescence to visualize the evolution of the early flame kernel. For single-pulse laser ignition at lean conditions, the flame kernel separates through third lobe detachment, corresponding to high strain rates that extinguish the flame. In this work, we investigate the capabilities of the dual-pulse to control the plasma-driven fluid dynamics by adjusting the axial offset of the two focal points. In particular, we find there exists a beam waist offset whereby the resulting vorticity suppresses formation of the third lobe, consequently reducing flame stretch. With this approach, we demonstrate that the dual-pulse method enables reduced flame speeds (at early times), an extended lean limit, increased combustion efficiency, and decreased laser energy requirements.

  13. Initial Kernel Timing Using a Simple PIM Performance Model

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David

    2005-01-01

    This presentation will describe some initial results of paper-and-pencil studies of 4 or 5 application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: * Linked list traversal * Sum of leaf nodes on a tree * Bitonic sort * Vector sum * Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including what programming styles seem to work best, from the point of view of both expressiveness and performance.
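    A deliberately crude timing model in the spirit described above can be written in a few lines (all latencies and sizes below are hypothetical, not Cascade LWP figures): each node sums its local slice of the vector, then partial sums are combined with a logarithmic tree of parcels.

```python
# Hypothetical PIM cost model for the "vector sum" kernel: local work is
# charged one cycle per element, and each parcel (inter-node message)
# is charged a fixed latency. None of these figures are Cascade data.
CYCLE_NS = 1.0           # ns per local operation
PARCEL_NS = 50.0         # ns per one-way parcel
ELEMENTS = 1 << 20
NODES = 64

def vector_sum_time(elements, nodes):
    local = (elements / nodes) * CYCLE_NS       # each node sums its slice
    reduce_steps = (nodes - 1).bit_length()     # log2-depth parcel tree
    return local + reduce_steps * PARCEL_NS

single = vector_sum_time(ELEMENTS, 1)           # one node, no parcels
multi = vector_sum_time(ELEMENTS, NODES)        # local work + reduction
```

    Even this toy model exhibits the trade-off the presentation explores: distributing data shrinks local work linearly while adding a parcel cost that grows only logarithmically in the node count.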

  14. Control of Early Flame Kernel Growth by Multi-Wavelength Laser Pulses for Enhanced Ignition.

    PubMed

    Dumitrache, Ciprian; VanOsdol, Rachel; Limbach, Christopher M; Yalin, Azer P

    2017-08-31

    The present contribution examines the impact of plasma dynamics and plasma-driven fluid dynamics on the flame growth of laser ignited mixtures and shows that a new dual-pulse scheme can be used to control the kernel formation process in ways that extend the lean ignition limit. We perform a comparative study between (conventional) single-pulse laser ignition (λ = 1064 nm) and a novel dual-pulse method based on combining an ultraviolet (UV) pre-ionization pulse (λ = 266 nm) with an overlapped near-infrared (NIR) energy addition pulse (λ = 1064 nm). We employ OH* chemiluminescence to visualize the evolution of the early flame kernel. For single-pulse laser ignition at lean conditions, the flame kernel separates through third lobe detachment, corresponding to high strain rates that extinguish the flame. In this work, we investigate the capabilities of the dual-pulse to control the plasma-driven fluid dynamics by adjusting the axial offset of the two focal points. In particular, we find there exists a beam waist offset whereby the resulting vorticity suppresses formation of the third lobe, consequently reducing flame stretch. With this approach, we demonstrate that the dual-pulse method enables reduced flame speeds (at early times), an extended lean limit, increased combustion efficiency, and decreased laser energy requirements.

  15. Cell death is induced by ciglitazone, a peroxisome proliferator-activated receptor γ (PPARγ) agonist, independently of PPARγ in human glioma cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Myoung Woo; Kim, Dae Seong; Kim, Hye Ryung

    Highlights: • Greater than 30 µM ciglitazone induces cell death in glioma cells. • Cell death by ciglitazone is independent of PPARγ in glioma cells. • CGZ induces cell death by the loss of MMP via decreased Akt. -- Abstract: Peroxisome proliferator-activated receptor γ (PPARγ) regulates multiple signaling pathways, and its agonists induce apoptosis in various cancer cells. However, their role in cell death is unclear. In this study, the relationship between ciglitazone (CGZ) and PPARγ in CGZ-induced cell death was examined. At concentrations greater than 30 µM, CGZ, a synthetic PPARγ agonist, activated caspase-3 and induced apoptosis in T98G cells. Treatment of T98G cells with less than 30 µM CGZ effectively induced cell death after pretreatment with 30 µM of the PPARγ antagonist GW9662, although GW9662 alone did not induce cell death. This cell death was also observed when cells were co-treated with CGZ and GW9662, but was not observed when cells were treated with CGZ prior to GW9662. In cells in which PPARγ was down-regulated by siRNA, lower concentrations of CGZ (<30 µM) were sufficient to induce cell death, although higher concentrations of CGZ (≥30 µM) were required to induce cell death in control T98G cells, indicating that CGZ effectively induces cell death in T98G cells independently of PPARγ. Treatment with GW9662 followed by CGZ resulted in a down-regulation of Akt activity and the loss of mitochondrial membrane potential (MMP), which was accompanied by a decrease in Bcl-2 expression and an increase in Bid cleavage. These data suggest that CGZ is capable of inducing apoptotic cell death independently of PPARγ in glioma cells, by down-regulating Akt activity and inducing MMP collapse.

  16. Fault Network Reconstruction using Agglomerative Clustering: Applications to South Californian Seismicity

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2014-05-01

    We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales followed by a reduction of the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical clustering (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive proto-clusters, we initialize a Gaussian Mixture Model (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation-Maximization (EM) algorithm. The kernels that become ill-defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our proto-clusters. In this setting, merging any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d(i,j) gives the distance between points i and j, we can construct an MDL gain/loss matrix where m(i,j) gives the information gain/loss resulting from the merging of kernels i and j. 
Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system and hopefully use the acquired insights for earthquake forecasting.
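    The first stages of the procedure described above (building a Ward linkage tree, then cutting it to obtain initial clusters) can be sketched with standard tools. A minimal illustration assuming SciPy is available; the EM refinement, kernel discarding, and MDL-based merging stages are omitted:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# synthetic "seismicity": two elongated point clouds standing in for faults
pts = np.vstack([
    rng.normal([0, 0], [1.0, 0.1], size=(200, 2)),
    rng.normal([3, 3], [0.1, 1.0], size=(200, 2)),
])

# Ward's minimum-variance agglomerative tree
tree = linkage(pts, method="ward")

# "atomize" by cutting the tree into a chosen number of proto-clusters,
# which would then seed the Gaussian-mixture / EM stage
labels = fcluster(tree, t=8, criterion="maxclust")
print(sorted(set(labels)))
```

    Each proto-cluster label set obtained this way would be handed to the GMM as its initialization, one Gaussian kernel per proto-cluster.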

  17. Experiences modeling ocean circulation problems on a 30 node commodity cluster with 3840 GPU processor cores.

    NASA Astrophysics Data System (ADS)

    Hill, C.

    2008-12-01

    Low-cost graphics cards today use many relatively simple compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that, (i) can use a hundred or more, 32-bit floating point, concurrently executing cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time dependent shallow-water equations simulation targeting a cluster of 30 computers each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVIDIA CUDA language but driven from a higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory. 
As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at a lower performance.
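    The driver pattern described above can be caricatured in a few lines of Python. All names here are hypothetical, and the Device class merely stands in for graphics-card memory that real code would manage through CUDA; the point illustrated is that uploads and downloads bracket the whole run rather than each timestep:

```python
class Device:
    """Stand-in for graphics-card memory (real code would use e.g. CUDA)."""
    def __init__(self):
        self.blocks = {}
    def upload(self, name, data):    # host -> device (expensive, do rarely)
        self.blocks[name] = list(data)
    def download(self, name):        # device -> host (expensive, do rarely)
        return list(self.blocks[name])

def step_kernel(dev):
    """Toy 'numerical kernel': operates only on device-resident blocks."""
    h = dev.blocks["h"]
    # periodic two-point average, a stand-in for a real stencil update
    dev.blocks["h"] = [0.5 * x + 0.5 * y for x, y in zip(h, h[1:] + h[:1])]

def run(h0, nsteps):
    dev = Device()
    dev.upload("h", h0)              # one transfer in
    for _ in range(nsteps):          # supervisory timestep loop on the host
        step_kernel(dev)             # kernels launched, no transfers
    return dev.download("h")         # one transfer out

print(run([1.0, 0.0, 0.0, 0.0], 2))
```

    Moving either `upload` or `download` inside the timestep loop is exactly the bus-bound traffic pattern the supervisory code is written to avoid.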

  18. Nuclear proteins that bind the human gamma-globin gene promoter: alterations in binding produced by point mutations associated with hereditary persistence of fetal hemoglobin.

    PubMed Central

    Gumucio, D L; Rood, K L; Gray, T A; Riordan, M F; Sartor, C I; Collins, F S

    1988-01-01

    The molecular mechanisms responsible for the human fetal-to-adult hemoglobin switch have not yet been elucidated. Point mutations identified in the promoter regions of gamma-globin genes from individuals with nondeletion hereditary persistence of fetal hemoglobin (HPFH) may mark cis-acting sequences important for this switch, and the trans-acting factors which interact with these sequences may be integral parts in the puzzle of gamma-globin gene regulation. We have used gel retardation and footprinting strategies to define nuclear proteins which bind to the normal gamma-globin promoter and to determine the effect of HPFH mutations on the binding of a subset of these proteins. We have identified five proteins in human erythroleukemia cells (K562 and HEL) which bind to the proximal promoter region of the normal gamma-globin gene. One factor, gamma CAAT, binds the duplicated CCAAT box sequences; the -117 HPFH mutation increases the affinity of interaction between gamma CAAT and its cognate site. Two proteins, gamma CAC1 and gamma CAC2, bind the CACCC sequence. These proteins require divalent cations for binding. The -175 HPFH mutation interferes with the binding of a fourth protein, gamma OBP, which binds an octamer sequence (ATGCAAAT) in the normal gamma-globin promoter. The HPFH phenotype of the -175 mutation indicates that the octamer-binding protein may play a negative regulatory role in this setting. A fifth protein, EF gamma a, binds to sequences which overlap the octamer-binding site. The erythroid-specific distribution of EF gamma a and its close approximation to an apparent repressor-binding site suggest that it may be important in gamma-globin regulation. PMID:2468996

  19. Livermore Compiler Analysis Loop Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornung, R. D.

    2013-03-01

    LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermore Loops suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, which loops are run, and their lengths. It generates timing statistics for analyzing and comparing variants of individual loops. Also, it is easy to add loops to the suite as desired.

  20. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of centroids of species distributions and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
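    The first step of GIE, computing each species' centroid and its "area of influence" radius from the farthest occurrence record, can be sketched directly (a toy illustration only; the kernel interpolation and overlap estimation that follow are omitted):

```python
import math

def centroid_and_radius(occurrences):
    """Per-species centroid and 'area of influence' radius as described:
    the radius is the distance from the centroid to the farthest record."""
    xs = [x for x, _ in occurrences]
    ys = [y for _, y in occurrences]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    radius = max(math.hypot(x - cx, y - cy) for x, y in occurrences)
    return (cx, cy), radius

# toy species with four occurrence points
c, r = centroid_and_radius([(0, 0), (2, 0), (0, 2), (2, 2)])
print(c, r)  # centroid (1.0, 1.0), radius sqrt(2)
```

    In the full method, a kernel is then interpolated over these centroids and the overlap of the resulting surfaces delimits candidate areas of endemism.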

  1. Exploring the Brighter-fatter Effect with the Hyper Suprime-Cam

    NASA Astrophysics Data System (ADS)

    Coulton, William R.; Armstrong, Robert; Smith, Kendrick M.; Lupton, Robert H.; Spergel, David N.

    2018-06-01

    The brighter-fatter effect has been postulated to arise due to the build-up of a transverse electric field, produced as photocharges accumulate in the pixels’ potential wells. We investigate the brighter-fatter effect in the Hyper Suprime-Cam by examining flat fields and moments of stars. We observe deviations from the expected linear relation in the photon transfer curve (PTC), luminosity-dependent correlations between pixels in flat-field images, and a luminosity-dependent point-spread function (PSF) in stellar observations. Under the key assumptions of translation invariance and Maxwell’s equations in the quasi-static limit, we give a first-principles proof that the effect can be parameterized by a translationally invariant scalar kernel. We describe how this kernel can be estimated from flat fields and discuss how this kernel has been used to remove the brighter-fatter distortions in Hyper Suprime-Cam images. We find that our correction restores the expected linear relation in the PTCs and significantly reduces, but does not completely remove, the luminosity dependence of the PSF over a wide range of magnitudes.

  2. Renormalization of loop functions for all loops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, R.A.; Neri, F.; Sato, M.

    1981-08-15

    It is shown that the vacuum expectation values W(C_1, ..., C_n) of products of the traces of the path-ordered phase factors P exp(ig ∮_{C_i} A_μ(x) dx^μ) are multiplicatively renormalizable in all orders of perturbation theory. Here A_μ(x) are the vector gauge field matrices in the non-Abelian gauge theory with gauge group U(N) or SU(N), and the C_i are loops (closed paths). When the loops are smooth (i.e., differentiable) and simple (i.e., non-self-intersecting), it has been shown that the generally divergent loop functions W become finite functions when expressed in terms of the renormalized coupling constant and multiplied by the factors e^{-K L(C_i)}, where K is linearly divergent and L(C_i) is the length of C_i. It is proved here that the loop functions remain multiplicatively renormalizable even if the curves have any finite number of cusps (points of nondifferentiability) or cross points (points of self-intersection). If C_γ is a loop which is smooth and simple except for a single cusp of angle γ, then W_R(C_γ) = Z(γ)W(C_γ) is finite for a suitable renormalization factor Z(γ) which depends on γ but on no other characteristic of C_γ. This statement is made precise by introducing a regularization, or via a loop-integrand subtraction scheme specified by a normalization condition W_R(C̄_γ) = 1 for an arbitrary but fixed loop C̄_γ. Next, if C_β is a loop which is smooth and simple except for a cross point of angles β, then W(C_β) must be renormalized together with the loop functions of associated sets S^i_β = {C^i_1, ..., C^i_{p_i}} (i = 2, ..., I) of loops C^i_q which coincide with certain parts of C_β ≡ C^1_1. 
Then W_R(S^i_β) = Z^{ij}(β)W(S^j_β) is finite for a suitable matrix Z^{ij}(β).

  3. Revisiting the Cramér Rao Lower Bound for Elastography: Predicting the Performance of Axial, Lateral and Polar Strain Elastograms.

    PubMed

    Verma, Prashant; Doyley, Marvin M

    2017-09-01

    We derived the Cramér-Rao lower bound for 2-D estimators employed in quasi-static elastography. To illustrate the theory, we modeled the 2-D point spread function as a sinc-modulated sine pulse in the axial direction and as a sinc function in the lateral direction. We compared theoretical predictions of the variance incurred in displacements and strains when quasi-static elastography was performed under varying conditions (different scanning methods, different configurations of conventional linear array imaging, and different-size kernels) with those measured from simulated or experimentally acquired data. We performed studies to illustrate the application of the derived expressions when performing vascular elastography with plane wave and compounded plane wave imaging. Standard deviations in lateral displacements were an order of magnitude higher than those in axial displacements. Additionally, the derived expressions predicted that peak performance should occur when 2% strain is applied, the same order of magnitude as observed in simulations (1%) and experiments (1%-2%). We assessed how different configurations of conventional linear array imaging (number of active reception and transmission elements) influenced the quality of axial and lateral strain elastograms. The theoretical expressions predicted that 2-D echo tracking should be performed with wide kernels, but the length of the kernels should be selected using knowledge of the magnitude of the applied strain: specifically, longer kernels for small strains (<5%) and shorter kernels for larger strains. Although the general trends of theoretical predictions and experimental observations were similar, biases incurred during beamforming and subsample displacement estimation produced noticeable differences. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
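    For reference, the scalar form of the bound that such derivations build on is the standard Cramér-Rao inequality (the paper's actual 2-D displacement-estimator expressions are more elaborate): for an unbiased estimator of a parameter θ from data x with likelihood p(x; θ),

```latex
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,
  \ln p(\mathbf{x};\theta)\right)^{2}\right]
```

    where I(θ) is the Fisher information; the 2-D elastography bounds are obtained by specializing p(x; θ) to the echo-signal model with the point spread function described above.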

  4. Validation of Born Traveltime Kernels

    NASA Astrophysics Data System (ADS)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.

  5. Design and Performance of the GAMMA-400 Gamma-Ray Telescope for Dark Matter Searches

    NASA Technical Reports Server (NTRS)

    Galper, A.M.; Adriani, O.; Aptekar, R. L.; Arkhangelskaja, I. V.; Arkhangelskiy, A.I.; Boezio, M.; Bonvicini, V.; Boyarchuk, K. A.; Fradkin, M. I.; Gusakov, Yu. V.; hide

    2012-01-01

    The GAMMA-400 gamma-ray telescope is designed to measure the fluxes of gamma-rays and cosmic-ray electrons + positrons, which can be produced by annihilation or decay of the dark matter particles, as well as to survey the celestial sphere in order to study point and extended sources of gamma-rays, measure energy spectra of Galactic and extragalactic diffuse gamma-ray emission, gamma-ray bursts, and gamma-ray emission from the Sun. GAMMA-400 covers the energy range from 100 MeV to 3000 GeV. Its angular resolution is approx. 0.01 deg (E(sub gamma) > 100 GeV), the energy resolution approx. 1% (E(sub gamma) > 10 GeV), and the proton rejection factor approx 10(exp 6). GAMMA-400 will be installed on the Russian space platform Navigator. The beginning of observations is planned for 2018.

  6. Design and Performance of the GAMMA-400 Gamma-Ray Telescope for Dark Matter Searches

    NASA Technical Reports Server (NTRS)

    Galper, A. M.; Adriani, O.; Aptekar, R. L.; Arkhangelskaja, I. V.; Arkhangelskiy, A. I.; Boezio, M.; Bonvicini, V.; Boyarchuk, K. A.; Fradkin, M. I.; Gusakov, Yu V.; hide

    2012-01-01

    The GAMMA-400 gamma-ray telescope is designed to measure the fluxes of gamma-rays and cosmic-ray electrons + positrons, which can be produced by annihilation or decay of the dark matter particles, as well as to survey the celestial sphere in order to study point and extended sources of gamma-rays, measure energy spectra of Galactic and extragalactic diffuse gamma-ray emission, gamma-ray bursts, and gamma-ray emission from the Sun. GAMMA-400 covers the energy range from 100 MeV to 3000 GeV. Its angular resolution is approximately 0.01 deg (E(sub gamma) greater than 100 GeV), the energy resolution approximately 1% (E(sub gamma) greater than 10 GeV), and the proton rejection factor approximately 10(exp 6). GAMMA-400 will be installed on the Russian space platform Navigator. The beginning of observations is planned for 2018.

  7. Comparison of full width at half maximum and penumbra of different Gamma Knife models.

    PubMed

    Asgari, Sepideh; Banaee, Nooshin; Nedaie, Hassan Ali

    2018-01-01

    As a radiosurgical tool, the Gamma Knife has the best-known and most widespread name recognition. Gamma Knife is a noninvasive intracranial technique invented and developed by the Swedish neurosurgeon Lars Leksell. The first commercial Leksell Gamma Knife entered the therapeutic armamentarium at the University of Pittsburgh in the United States in August 1987. Since that time, different generations of Gamma Knife have been developed. In this study, the technical points and dosimetric parameters, including full width at half maximum and penumbra, of different generations of Gamma Knife are reviewed and compared. The results of this review study show that the rotating gamma system provides better dose conformity.

  8. Systemic Growth of F. graminearum in Wheat Plants and Related Accumulation of Deoxynivalenol

    PubMed Central

    Moretti, Antonio; Panzarini, Giuseppe; Somma, Stefania; Campagna, Claudio; Ravaglia, Stefano; Logrieco, Antonio F.; Solfrizzo, Michele

    2014-01-01

    Fusarium head blight (FHB) is an important disease of wheat worldwide caused mainly by Fusarium graminearum (syn. Gibberella zeae). This fungus can be highly aggressive and can produce several mycotoxins such as deoxynivalenol (DON), a metabolite well known to be harmful to humans, animals, and plants. The fungus can survive overwinter on wheat residues and in the soil, and usually attacks the wheat plant at the point of flowering, being able to infect the heads and to contaminate the kernels at maturity. Contaminated kernels can sometimes be used as seeds for the following year's cultivation. Little is known about the ability of F. graminearum strains occurring on wheat seeds to be transmitted to the plant and to contribute to the final DON contamination of kernels. Therefore, this study had the goals of evaluating: (a) the capability of F. graminearum causing FHB of wheat to be transmitted from the seeds or soil to the kernels at maturity, and the progress of the fungus within the plant at different growth stages; (b) the levels of DON contamination in both plant tissues and kernels. The study was carried out for two years in a climatic chamber. The F. graminearum strain selected for inoculation was followed within the plant using the vegetative compatibility technique, and quantified by real-time PCR. Chemical analyses of DON were carried out using immunoaffinity cleanup and HPLC/UV/DAD. The study showed that F. graminearum originating from seeds or soil can grow systemically in the plant tissues, with the exception of kernels and heads. There seems to be a barrier that inhibits colonization of the heads by the fungus. High levels of DON and F. graminearum were found in crowns, stems, and straw, whereas low levels of DON and no detectable levels of F. graminearum were found in both heads and kernels. 
Finally, in all parts of the plant (heads, crowns, and stems at milk and vitreous ripening stages, and straw at vitreous ripening), accumulation of significant quantities of DON-3-glucoside (DON-3G), a product of DON glycosylation, was also detected, with decreasing levels in straw, crown, stems and kernels. The presence of DON and DON-3G in heads and kernels without the occurrence of F. graminearum may be explained by their water solubility, which could facilitate their translocation from stem to heads and kernels. The presence of DON-3G at levels 23 times higher than DON in the heads at milk stage without the occurrence of F. graminearum may indicate that active glycosylation of DON also occurs in the head tissues. Finally, the high levels of DON accumulated in straw are worrisome since they represent an additional source of mycotoxin for livestock. PMID:24727554

  9. Kernel abortion in maize: I. Carbohydrate concentration patterns and acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  10. SU-E-T-117: Analysis of the ArcCHECK Dosimetry Gamma Failure Using the 3DVH System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, S; Choi, W; Lee, H

    2015-06-15

    Purpose: To evaluate gamma analysis failures for VMAT patient-specific QA using the ArcCHECK cylindrical phantom. The 3DVH system (Sun Nuclear, FL) was used to analyze the dose difference statistics between the measured dose and the treatment planning system (TPS) calculated dose. Methods: Four cases of gamma analysis failure were selected retrospectively. Our institution's gamma analysis criteria were absolute dose, 3%/3mm, and a 90% pass rate in ArcCHECK dosimetry. The collapsed cone convolution superposition (CCCS) dose calculation algorithm was used for VMAT. Dose delivery was performed with an Elekta Agility. The A1SL (Standard Imaging, WI) chamber and cavity plug were used for point dose measurement. Delivery QA plans and images were used as the 3DVH reference data instead of the patient plan and image. The measured data in the '.txt' file were used for comparison at the diodes to acquire a global dose level. The '.acml' file was used for AC-PDP and to calculate point dose. Results: The global dose of 3DVH was calculated as 1.10, 1.13, 1.01 and 0.2 Gy, respectively. The 0.2 Gy global dose case was induced by a distance discrepancy. The TPS-calculated point dose was 2.33 Gy to 2.77 Gy and the 3DVH-calculated dose was 2.33 Gy to 2.68 Gy. The maximum dose differences were −2.83% and −3.1% for TPS vs. measured dose and TPS vs. 3DVH calculated, respectively, in the same case. The difference between measured and 3DVH was 0.1% in that case. The 3DVH gamma pass rate was 98% to 99.7%. Conclusion: We found the TPS calculation error by 3DVH calculation using the ArcCHECK measured dose. It seems that our CCCS-based RTP system overestimated dose in the central region and underestimated scattering at the peripheral diode detector points. Relative gamma analysis and point dose measurement would be recommended for VMAT DQA in gamma failure cases of ArcCHECK dosimetry.
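    The "3%/3mm, 90% pass rate" criterion above refers to the gamma index. A minimal 1-D sketch with global-dose normalization (an illustration of the general method, not the ArcCHECK/3DVH implementation; the profile values and spacing are made up):

```python
def gamma_index(ref, eval_, spacing, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over the evaluated curve."""
    dmax = max(ref)                           # global normalization dose
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(eval_):
            dd = (de - dr) / (dose_tol * dmax)    # dose-difference term
            dx = (j - i) * spacing / dist_tol     # distance term (mm)
            best = min(best, (dd * dd + dx * dx) ** 0.5)
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1 (the usual passing criterion)."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

ref = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]          # toy dose profile, 1 mm spacing
shifted = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0]      # same profile shifted one sample
print(pass_rate(gamma_index(ref, shifted, spacing=1.0)))
```

    A small spatial shift mostly passes under 3%/3mm because nearby evaluated points agree in dose, which is the behavior the distance-to-agreement term is designed to capture.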

  11. Chemical and genetic blockade of HDACs enhances osteogenic differentiation of human adipose tissue-derived stem cells by oppositely affecting osteogenic and adipogenic transcription factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maroni, Paola; Brini, Anna Teresa; Dipartimento di Scienze Biomediche, Chirurgiche ed Odontoiatriche, Universita degli Studi di Milano, Milano

    2012-11-16

    Highlights: • Acetylation affected hASCs osteodifferentiation through Runx2-PPARγ. • HDACs knock-down favoured the commitment effect of osteogenic medium. • HDACs silencing early activated Runx2 and ALP. • PPARγ reduction and calcium/collagen deposition occurred later. • Runx2/PPARγ target genes were modulated in line with the HDACs role in osteo-commitment. -- Abstract: The human adipose-tissue derived stem/stromal cells (hASCs) are an interesting source for bone-tissue engineering applications. Our aim was to clarify in hASCs the role of acetylation in the control of Runt-related transcription factor 2 (Runx2) and Peroxisome proliferator activated receptor (PPAR) γ. These key osteogenic and adipogenic transcription factors are oppositely involved in osteo-differentiation. The hASCs, committed or not towards bone lineage with osteoinductive medium, were exposed to HDACs chemical blockade with Trichostatin A (TSA) or were genetically silenced for HDACs. Alkaline phosphatase (ALP) and collagen/calcium deposition, considered as early and late osteogenic markers, were evaluated concomitantly as an index of osteo-differentiation. TSA pretreatment, a useful experimental protocol to analyse pan-HDAC chemical inhibition, and the switch to osteogenic medium induced the early-osteoblast maturation gene Runx2, while transiently decreasing PPARγ and scarcely affecting late-differentiation markers. Time-dependent effects were observed after knocking down HDAC1 and 3: Runx2 and ALP underwent early activation, followed by late-osteogenic marker increases and by PPARγ/ALP activity diminutions, mostly after HDAC3 silencing. HDAC1 and 3 genetic blockade increased and decreased Runx2 and PPARγ target genes, respectively. Noteworthy, HDACs knock-down favoured the commitment effect of osteogenic medium. 
Our results reveal a role for HDACs in orchestrating osteo-differentiation of hASCs at the transcriptional level, and might provide new insights into the modulation of hASCs-based regenerative therapy.

  12. Design and performance of the GAMMA-400 gamma-ray telescope for dark matter searches

    NASA Astrophysics Data System (ADS)

    Galper, A. M.; Adriani, O.; Aptekar, R. L.; Arkhangelskaja, I. V.; Arkhangelskiy, A. I.; Boezio, M.; Bonvicini, V.; Boyarchuk, K. A.; Fradkin, M. I.; Gusakov, Yu. V.; Kaplin, V. A.; Kachanov, V. A.; Kheymits, M. D.; Leonov, A. A.; Longo, F.; Mazets, E. P.; Maestro, P.; Marrocchesi, P.; Mereminskiy, I. A.; Mikhailov, V. V.; Moiseev, A. A.; Mocchiutti, E.; Mori, N.; Moskalenko, I. V.; Naumov, P. Yu.; Papini, P.; Picozza, P.; Rodin, V. G.; Runtso, M. F.; Sparvoli, R.; Spillantini, P.; Suchkov, S. I.; Tavani, M.; Topchiev, N. P.; Vacchi, A.; Vannuccini, E.; Yurkin, Yu. T.; Zampa, N.; Zverev, V. G.; Zirakashvili, V. N.

    2013-02-01

    The GAMMA-400 gamma-ray telescope is designed to measure the fluxes of gamma-rays and cosmic-ray electrons + positrons, which can be produced by annihilation or decay of the dark matter particles, as well as to survey the celestial sphere in order to study point and extended sources of gamma-rays, measure energy spectra of Galactic and extragalactic diffuse gamma-ray emission, gamma-ray bursts, and gamma-ray emission from the Sun. GAMMA-400 covers the energy range from 100 MeV to 3000 GeV. Its angular resolution is ~0.01° (Eγ > 100 GeV), the energy resolution ~1% (Eγ > 10 GeV), and the proton rejection factor ~10^6. GAMMA-400 will be installed on the Russian space platform Navigator. The beginning of observations is planned for 2018.

  13. Search for TeV gamma ray emission from the Andromeda galaxy

    NASA Astrophysics Data System (ADS)

    Aharonian, F. A.; Akhperjanian, A. G.; Beilicke, M.; Bernlöhr, K.; Bojahr, H.; Bolz, O.; Börst, H.; Coarasa, T.; Contreras, J. L.; Cortina, J.; Denninghoff, S.; Fonseca, V.; Girma, M.; Götting, N.; Heinzelmann, G.; Hermann, G.; Heusler, A.; Hofmann, W.; Horns, D.; Jung, I.; Kankanyan, R.; Kestel, M.; Kettler, J.; Kohnle, A.; Konopelko, A.; Kornmeyer, H.; Kranich, D.; Krawczynski, H.; Lampeitl, H.; Lopez, M.; Lorenz, E.; Lucarelli, F.; Mang, O.; Meyer, H.; Mirzoyan, R.; Moralejo, A.; Ona, E.; Panter, M.; Plyasheshnikov, A.; Pühlhofer, G.; Rauterberg, G.; Reyes, R.; Rhode, W.; Ripken, J.; Röhring, A.; Rowell, G. P.; Sahakian, V.; Samorski, M.; Schilling, M.; Siems, M.; Sobzynska, D.; Stamm, W.; Tluczykont, M.; Völk, H. J.; Wiedner, C. A.; Wittek, W.

    2003-03-01

    Using the HEGRA system of imaging atmospheric Cherenkov telescopes, the Andromeda galaxy (M 31) was surveyed for TeV gamma ray emission. Given the large field of view of the HEGRA telescopes, three pointings were sufficient to cover all of M 31, including also M 32 and NGC 205. No indications for point sources of TeV gamma rays were found. Upper limits are given at a level of a few percent of the Crab flux. A specific search for monoenergetic gamma-ray lines from annihilation of supersymmetric dark matter particles accumulating near the center of M 31 resulted in flux limits in the 10^-13 cm^-2 s^-1 range, well above the predicted MSSM flux levels except for models with pronounced dark-matter spikes or strongly enhanced annihilation rates.

  14. Design considerations for a Space Station radiation shield for protection from both man-made and natural sources

    NASA Technical Reports Server (NTRS)

    Bolch, Wesley E.; Peddicord, K. Lee; Felsher, Harry; Smith, Simon

    1994-01-01

    This study was conducted to analyze scenarios involving the use of nuclear-powered vehicles in the vicinity of a manned Space Station (SS) in low-earth orbit (LEO) and to quantify their radiological impact on the station crew. To limit the radiation dose to crew members, mission planners may (1) shut the reactor down prior to reentry, (2) position the vehicle at a prescribed parking distance, and (3) deploy a radiation shield about the shutdown reactor. The current report focuses on the third option, in which point-kernel gamma-ray shielding calculations were performed for a variety of shield configurations for both nuclear electric propulsion (NEP) and nuclear thermal rocket (NTR) vehicles. For a returning NTR vehicle, calculations indicate that a 14.9 MT shield would be needed to limit the integrated crew exposure to no more than 0.05 Sv over a period of six months (25 percent of the allowable exposure to man-made radiation sources). During periods of low vehicular activity in LEO, the shield may be redeployed about the SS habitation module in order to decrease crew exposures to trapped-proton radiation by approximately a factor of 10. The corresponding shield mass required for deployment at a returning NEP vehicle is 2.21 MT. Additional scenarios examined include the radioactivation of various metals such as might be found in tools used in EVA activities.
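The point-kernel method referenced in this and several other records reduces to an inverse-square term, an exponential attenuation term, and a buildup factor. A minimal sketch follows; the source strength, attenuation coefficient, buildup value, and dose-conversion placeholder are illustrative assumptions, not numbers from the report:

```python
import math

def point_kernel_dose_rate(s_gamma, mu, t_shield, r, flux_to_dose=1.0, buildup=1.0):
    """Point-kernel estimate at distance r (cm) from an isotropic point
    source behind a slab shield.
    s_gamma: source strength (photons/s); mu: shield attenuation
    coefficient (1/cm); t_shield: slab thickness (cm); buildup: dose
    buildup factor B(mu*t) for the shield; flux_to_dose: placeholder
    flux-to-dose conversion factor (material/energy dependent)."""
    flux = s_gamma * math.exp(-mu * t_shield) / (4.0 * math.pi * r**2)
    return buildup * flux * flux_to_dose

# Illustrative numbers: a 1 Ci Co-60 source (2 photons per decay),
# 10 cm of a shield with mu = 0.4 /cm, detector at 2 m.
s = 3.7e10 * 2                                   # photons/s
unshielded = point_kernel_dose_rate(s, 0.4, 0.0, 200.0)
shielded = point_kernel_dose_rate(s, 0.4, 10.0, 200.0, buildup=3.0)
print("attenuation with buildup:", shielded / unshielded)
```

The ratio is just B·exp(-μt); the buildup factor accounts for scattered photons that a bare exponential would miss, which is the role of the buildup-factor tabulations discussed in the records above.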

  15. Physical modification of palm kernel meal improved available carbohydrate, physicochemical properties and in vitro digestibility in economic freshwater fish.

    PubMed

    Thongprajukaew, Karun; Yawang, Pinya; Dudae, Lateepah; Bilanglod, Husna; Dumrongrittamatt, Terdtoon; Tantikitti, Chutima; Kovitvadhi, Uthaiwan

    2013-12-01

    Unavailable carbohydrates are an important limiting factor for the utilization of palm kernel meal (PKM) as an aquafeed ingredient. The aim of this study was to improve the available carbohydrate of PKM. Different physical modifications, including water soaking, microwave irradiation, gamma irradiation and electron beam irradiation, were investigated in relation to chemical composition, physicochemical properties and in vitro carbohydrate digestibility using digestive enzymes from economic freshwater fish. The modification methods had significant (P < 0.05) effects on chemical composition, decreasing crude fiber and increasing available carbohydrates. Improvements in physicochemical properties of PKM, such as water solubility, microstructure, relative crystallinity and lignocellulosic spectra, were mainly achieved by soaking and microwave irradiation. Carbohydrate digestibility varied among the physical modifications tested (P < 0.05), and the three fish species had different abilities to digest PKM. Soaking was the appropriate modification for increasing carbohydrate digestion specifically in Nile tilapia (Oreochromis niloticus), whereas either soaking or microwave irradiation was effective for striped snakehead (Channa striata). For walking catfish (Clarias batrachus), carbohydrate digestibility was similar among raw, soaked and microwave-irradiated PKM. These findings suggest that soaking and microwave irradiation could be practical methods for altering appropriate physicochemical properties of PKM as well as increasing carbohydrate digestibility in select economic freshwater fish. © 2013 Society of Chemical Industry.

  16. Derivative based sensitivity analysis of gamma index

    PubMed Central

    Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve - it looks at only the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this is a smooth curve and satisfies the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again satisfies the pass criteria for every point on the RP. However, being a sawtooth curve, it is not a smooth one and is obviously poorer when compared with the smooth profile.
    Considering the smooth GTP as an acceptable profile once it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD′, δD″) between these two curves were derived and used as the boundary values for evaluating the STTP against the RP. Even though the STTP passed the simple gamma pass criteria, it was found to fail at many locations when the derivatives were used as the boundary values. The proposed derivative-based method can identify a noisy curve and can prove to be a useful tool for improving the sensitivity of the gamma index. PMID:26865761
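The 1%/1 mm gamma criterion described in this record can be sketched for 1D profiles. The logistic penumbra, grid, and 0.5 mm shift below are stand-ins chosen for illustration (the paper uses an error-function profile):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd_pct=1.0, dta_mm=1.0):
    """1D gamma index: for each reference point, the minimum combined
    dose-difference / distance-to-agreement metric over the evaluated curve."""
    dd = dd_pct / 100.0 * d_ref.max()            # absolute dose criterion
    g = np.empty_like(d_ref)
    for i in range(len(x_ref)):
        dist = (x_eval - x_ref[i]) / dta_mm      # normalized distance term
        ddiff = (d_eval - d_ref[i]) / dd         # normalized dose term
        g[i] = np.sqrt(dist**2 + ddiff**2).min()
    return g

# Penumbra-like reference profile (logistic stand-in for the paper's erf)
x = np.linspace(-10.0, 10.0, 201)                # position in mm, 0.1 mm grid
ref = 1.0 / (1.0 + np.exp(x / 3.0))
ev = 1.0 / (1.0 + np.exp((x - 0.5) / 3.0))       # evaluated: shifted by 0.5 mm
g = gamma_1d(x, ref, x, ev)
print("pass rate:", (g <= 1.0).mean())           # 0.5 mm shift passes 1%/1 mm
```

As the record points out, a noisy sawtooth profile can also pass this minimum-gamma test, which is the motivation for the additional derivative-based boundary values.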

  17. High-energy photon-hadron scattering in holographic QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishio, Ryoichi; Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwano-ha 5-1-5, 277-8583; Watari, Taizan

    2011-10-01

    This article provides an in-depth look at hadron high-energy scattering by using gravity dual descriptions of strongly coupled gauge theories. Just like deeply inelastic scattering (DIS) and deeply virtual Compton scattering (DVCS) serve as clean experimental probes into nonperturbative internal structure of hadrons, elastic scattering amplitude of a hadron and a (virtual) photon in gravity dual can be exploited as a theoretical probe. Since the scattering amplitude at sufficiently high energy (small Bjorken x) is dominated by parton contributions (=Pomeron contributions) even in strong coupling regime, there is a chance to learn a lesson for generalized parton distribution (GPD) by using gravity dual models. We begin with refining derivation of the Brower-Polchinski-Strassler-Tan (BPST) Pomeron kernel in gravity dual, paying particular attention to the role played by the complex spin variable j. The BPST Pomeron on warped spacetime consists of a Kaluza-Klein tower of 4D Pomerons with nonlinear trajectories, and we clarify the relation between Pomeron couplings and the Pomeron form factor. We emphasize that the saddle-point value j* of the scattering amplitude in the complex j-plane representation is a very important concept in understanding qualitative behavior of the scattering amplitude. The total Pomeron contribution to the scattering is decomposed into the saddle-point contribution and at most a finite number of pole contributions, and when the pole contributions are absent (which we call saddle-point phase), kinematical variable (q,x,t)-dependence of ln(1/q) evolution and ln(1/x) evolution parameters γ_eff and λ_eff in DIS and t-slope parameter B of DVCS in the HERA experiment are all reproduced qualitatively in gravity dual. All of these observations shed a new light on modeling of GPD. Straightforward application of those results to other hadron high-energy scattering is also discussed.

  18. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  19. Collider effects of unparticle interactions in multiphoton signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aliev, T. M.; Frank, Mariana; Turan, Ismail

    2009-12-01

    A new model of physics, with a hidden conformal sector which manifests itself as an unparticle coupling to standard model particles effectively through higher dimensional operators, predicts strong collider signals due to unparticle self-interactions. We perform a complete analysis of the most spectacular of these signals at the hadron collider, pp(p̄) → γγγγ and γγgg. These processes can go through the three-point unparticle self-interactions as well as through some s- and t-channel diagrams with one and/or two unparticle exchanges. We study the contributions of individual diagrams classified with respect to the number of unparticle exchanges and discuss their effect on the cross sections at the Tevatron and the LHC. We also obtain the Tevatron bound on the unknown coefficient of the three-point unparticle correlator. With the availability of data from the Tevatron, and the advent of the data emerging from the LHC, these interactions can provide a clear and strong indication of unparticle physics and distinguish this model from other beyond-the-standard-model scenarios.

  20. Search for Gamma-Ray Emission from Local Primordial Black Holes with the Fermi Large Area Telescope

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Berenji, B.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Bregeon, J.; Bruel, P.; Buehler, R.; Cameron, R. A.; Caputo, R.; Caraveo, P. A.; Cavazzuti, E.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Conrad, J.; Costantin, D.; D’Ammando, F.; de Palma, F.; Digel, S. W.; Di Lalla, N.; Di Mauro, M.; Di Venere, L.; Favuzzi, C.; Fegan, S. J.; Focke, W. B.; Franckowiak, A.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giordano, F.; Giroletti, M.; Green, D.; Grenier, I. A.; Guillemot, L.; Guiriec, S.; Horan, D.; Jóhannesson, G.; Johnson, C.; Kensei, S.; Kocevski, D.; Kuss, M.; Larsson, S.; Latronico, L.; Li, J.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J. D.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mazziotta, M. N.; McEnery, J. E.; Meyer, M.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monzani, M. E.; Moretti, E.; Morselli, A.; Moskalenko, I. V.; Negro, M.; Nuss, E.; Ojha, R.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Palatiello, M.; Paliya, V. S.; Paneque, D.; Persic, M.; Pesce-Rollins, M.; Piron, F.; Principe, G.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sánchez-Conde, M.; Sgrò, C.; Siskind, E. J.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Thayer, J. G.; Thayer, J. B.; Torres, D. F.; Tosti, G.; Troja, E.; Valverde, J.; Vianello, G.; Wood, K.; Wood, M.; Zaharijas, G.

    2018-04-01

    Black holes with masses below approximately 10^15 g are expected to emit gamma-rays with energies above a few tens of MeV, which can be detected by the Fermi Large Area Telescope (LAT). Although black holes with these masses cannot be formed as a result of stellar evolution, they may have formed in the early universe and are therefore called primordial black holes (PBHs). Previous searches for PBHs have focused on either short-timescale bursts or the contribution of PBHs to the isotropic gamma-ray emission. We show that, in the case of individual PBHs, the Fermi-LAT is most sensitive to PBHs with temperatures above approximately 16 GeV and masses below 6 × 10^11 g, which it can detect out to a distance of about 0.03 pc. These PBHs have a remaining lifetime of months to years at the start of the Fermi mission. They would appear as potentially moving point sources with gamma-ray emission that becomes spectrally harder and brighter with time until the PBH completely evaporates. In this paper, we develop a new algorithm to detect the proper motion of gamma-ray point sources, and apply it to 318 unassociated point sources at a high galactic latitude in the third Fermi-LAT source catalog. None of the unassociated point sources with spectra consistent with PBH evaporation show significant proper motion. Using the nondetection of PBH candidates, we derive a 99% confidence limit on the PBH evaporation rate in the vicinity of Earth, ρ̇_PBH < 7.2 × 10^3 pc^-3 yr^-1. This limit is similar to the limits obtained with ground-based gamma-ray observatories.
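As a consistency check of the quoted numbers, the standard Hawking relation k_B·T = ħc³/(8πGM) gives roughly the stated ~16 GeV temperature for the ~6 × 10^11 g threshold mass (a back-of-the-envelope sketch, not a reproduction of the paper's sensitivity calculation):

```python
import math

# CODATA physical constants (SI)
HBAR = 1.054571817e-34   # J s
C = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
GEV = 1.602176634e-10    # J per GeV

def hawking_temperature_gev(mass_g):
    """Hawking temperature k_B*T = hbar*c^3 / (8*pi*G*M), expressed in GeV."""
    m_kg = mass_g * 1e-3
    return HBAR * C**3 / (8.0 * math.pi * G * m_kg) / GEV

t = hawking_temperature_gev(6e11)
print(f"T(6e11 g) = {t:.1f} GeV")   # close to the record's ~16 GeV threshold
```

Since T scales as 1/M, the "spectrally harder and brighter with time" behavior in the abstract follows directly: as the PBH loses mass, its temperature and luminosity both rise.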

  3. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unit d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule to go to the nearest point which has not been visited in the preceding mu steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}[1/2, (d+1)/2]. The joint distribution S^{(N)}_{mu,d}(t,p) is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S^{(infinity)}_{1,d}(t,p) = [Gamma(1 + I_d^{-1})(t + I_d^{-1})/Gamma(t + p + I_d^{-1})] delta_{p,2}, where t = 0, 1, 2, ..., infinity, Gamma(z) is the gamma function and delta_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d -> infinity, and the random map model which, even for mu = 0, presents a nontrivial cycle distribution [S^{(N)}_{0,rm}(p) proportional to p^{-1}]: S^{(N)}_{0,rm}(t,p) = Gamma(N)/{Gamma[N + 1 - (t + p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly and they have been validated by numerical experiments.
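The walk and its attractor statistics are easy to simulate. The sketch below (hypothetical N = 50 points on the unit square, mu = 1) checks the delta_{p,2} factor of the distribution, i.e. that every attractor of the memoryless walk is a 2-cycle of mutually nearest neighbours:

```python
import numpy as np

def tourist_walk(points, mu=1, start=0):
    """Deterministic tourist walk: from the current point, move to the
    nearest point not visited in the preceding mu steps.
    Returns (transient length t, attractor period p)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # never "move" to the same point
    path = [start]
    first_seen = {}
    while True:
        state = tuple(path[-mu:])                # memory window fixes the future
        if state in first_seen:
            t = first_seen[state]                # steps before entering the cycle
            p = len(path) - 1 - t                # cycle length
            return t, p
        first_seen[state] = len(path) - 1
        forbidden = set(path[-mu:])
        cur = path[-1]
        nxt = min((j for j in range(len(points)) if j not in forbidden),
                  key=lambda j: d[cur, j])
        path.append(nxt)

rng = np.random.default_rng(0)
pts = rng.random((50, 2))                        # N = 50 points, unit square
periods = {tourist_walk(pts, start=s)[1] for s in range(len(pts))}
print("attractor periods:", periods)             # mu = 1 gives only 2-cycles
```

Every mu = 1 trajectory ends trapped between a pair of reciprocal nearest neighbours, which is exactly why the joint distribution above carries the Kronecker delta delta_{p,2}.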

  4. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  5. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  6. Kernel Ada Programming Support Environment (KAPSE) Interface Team: Public Report. Volume II.

    DTIC Science & Technology

    1982-10-28

    essential parameters from our work so far in this area and, using trade-offs concerning these, construct the KIT’s recommended alternative. ...environment that are also in the development stages. At this point in development it is essential for the KITEC to provide a forum and act as a focal...standardization in this area. Moreover, this is an area with considerable divergence in proposed approaches. On the other hand, an essential tool from the point of

  7. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding on the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a set of four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off of kernel size and yield components was discussed.

  9. Growth and characterization of SrI2:Eu2+ single crystal for gamma ray detector applications

    NASA Astrophysics Data System (ADS)

    Raja, A.; Daniel, D. Joseph; Ramasamy, P.; Singh, S. G.; Sen, S.; Gadkari, S. C.

    2018-04-01

    Europium-activated strontium iodide single crystals were grown by the vertical Bridgman-Stockbarger technique. The melting and freezing points of the SrI2:Eu2+ crystal were analyzed by TG/DTA. The radioluminescence emission was recorded. Scintillation measurements were carried out on the grown SrI2:Eu2+ crystal under a 137Cs gamma-ray source.

  10. Brachypodium distachyon-Cochliobolus sativus pathosystem is a new model for studying plant-fungal interactions in cereal crops

    USDA-ARS?s Scientific Manuscript database

    Cochliobolus sativus (anamorph: Bipolaris sorokiniana) causes three major diseases in barley and wheat, including spot blotch, common root rot and kernel blight or black point. These diseases significantly reduce the yield and quality of the two most important cereal crops in the US and other region...

  11. IMPLEMENTATION OF THE SMOKE EMISSION DATA PROCESSOR AND SMOKE TOOL INPUT DATA PROCESSOR IN MODELS-3

    EPA Science Inventory

    The U.S. Environmental Protection Agency has implemented Version 1.3 of SMOKE (Sparse Matrix Object Kernel Emission) processor for preparation of area, mobile, point, and biogenic sources emission data within Version 4.1 of the Models-3 air quality modeling framework. The SMOK...

  12. [Spatial analysis of road traffic accidents with fatalities in Spain, 2008-2011].

    PubMed

    Gómez-Barroso, Diana; López-Cuadrado, Teresa; Llácer, Alicia; Palmera Suárez, Rocío; Fernández-Cuenca, Rafael

    2015-09-01

    To estimate the areas with the greatest density of road traffic accidents with fatalities within 24 hours, per km²/year, in Spain from 2008 to 2011, using a geographic information system. Accidents were geocoded using the road and kilometer points where they occurred. The average nearest neighbor was calculated to detect possible clusters and to obtain the bandwidth for kernel density estimation. A total of 4775 accidents were analyzed, of which 73.3% occurred on conventional roads. The estimated average distance between accidents was 1,242 meters, and the average expected distance was 10,738 meters. The nearest neighbor index was 0.11, indicating that there were aggregations of accidents in space. A map showing the kernel density was obtained with a resolution of 1 km², which identified the areas of highest density. This methodology allowed a better approximation to locating accident risks by taking into account kilometer points. The map shows areas where there was a greater density of accidents, which could aid decision-making by the relevant authorities. Copyright © 2014 SESPAS. Published by Elsevier España. All rights reserved.
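The two spatial statistics used in this record, the average nearest neighbor index and kernel density estimation, can be sketched on synthetic data. The cluster layout, bandwidth, and grid below are hypothetical choices, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical accident locations (km): three clusters along "roads"
centers = rng.random((3, 2)) * 100
pts = np.vstack([c + rng.normal(scale=2.0, size=(200, 2)) for c in centers])
n = len(pts)
area = 100.0 * 100.0                        # study region, km^2

# Average nearest neighbor index: observed / expected mean NN distance;
# values well below 1 indicate spatial clustering (as the 0.11 in the record)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
np.fill_diagonal(d, np.inf)
mean_obs = d.min(axis=1).mean()
mean_exp = 0.5 / np.sqrt(n / area)          # expectation under complete spatial randomness
ann = mean_obs / mean_exp
print(f"ANN index = {ann:.2f}")

# Gaussian kernel density (accident intensity, expected count per km^2)
# on a 1 km^2 grid; the bandwidth here is hand-picked, whereas the study
# derives it from the nearest neighbor analysis
bw = 5.0
gx, gy = np.meshgrid(np.arange(0, 100), np.arange(0, 100))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
dist2 = ((grid[:, None] - pts[None, :]) ** 2).sum(axis=2)
dens = np.exp(-dist2 / (2 * bw**2)).sum(axis=1) / (2 * np.pi * bw**2)
print("peak density cell:", grid[dens.argmax()])
```

The density surface plays the role of the study's risk map: cells with the highest kernel density mark the road segments where interventions would be prioritized.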

  13. A search for optical counterparts of gamma-ray bursts. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Hye-Sook

    Gamma-ray bursts (GRBs) are mysterious flashes of gamma rays lasting several tens to hundreds of seconds that occur approximately once per day. NASA launched the orbiting Compton Gamma Ray Observatory (CGRO) to study GRBs and other gamma-ray phenomena. CGRO carries the Burst and Transient Source Experiment (BATSE) specifically to study GRBs. Although BATSE has collected data on over 600 GRBs, and confirmed that GRBs are localized, high-intensity point sources of MeV gamma rays distributed isotropically in the sky, the nature and origin of GRBs remain a fundamental problem in astrophysics. BATSE's 8 gamma-ray sensors, located on the corners of the box-shaped CGRO, can detect the onset of GRBs and record their intensity and energy spectra as a function of time. The position of the burst on the sky can be determined to < ±10° from the BATSE data stream. This position resolution is not sufficient to point a large optical telescope at the exact position of a GRB, which would determine its origin by associating it with a star. Because of their brief duration it is not known if GRBs are accompanied by visible radiation. Their seemingly large energy output suggests that this should be so. Simply scaling the ratio of visible to gamma-ray intensities of the Crab Nebula to the GRB output suggests that GRBs ought to be accompanied by visible flashes of magnitude 10 or so. A few photographs of areas containing a burst location that were coincidentally taken during the burst yield lower limits on visible output of magnitude 4. The detection of visible light during a GRB would provide information on burst physics, provide improved pointing coordinates for precise examination of the field by a large telescope, and provide the justification for larger dedicated optical counterpart instruments. The purpose of this experiment is to detect or set lower limits on optical counterpart radiation simultaneously accompanying the gamma rays from

  14. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  15. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  16. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  17. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  18. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  19. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
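
    The composite-kernel idea described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the base-kernel parameters, combination weights, and regularization constant below stand in for the quantities that QPSO would optimize, and the KELM solve uses the standard regularized form beta = (K + I/C)^-1 T.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Base kernel 1: Gaussian (RBF).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def poly_kernel(X, Z, degree=2):
    # Base kernel 2: polynomial.
    return (X @ Z.T + 1.0) ** degree

def composite_kernel(X, Z, w):
    # Weighted sum of base kernels; w plays the role of the combination
    # coefficients that QPSO would tune along with sigma, degree, and C.
    return w[0] * gaussian_kernel(X, Z) + w[1] * poly_kernel(X, Z)

def kelm_fit(X, T, w, C=1e6):
    # Kernel ELM output weights: beta = (K + I/C)^-1 T.
    K = composite_kernel(X, X, w)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(Xq, X, beta, w):
    return composite_kernel(Xq, X, w) @ beta

# Tiny two-class demo with one-hot targets (hypothetical data).
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
T = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
beta = kelm_fit(X, T, w=(0.7, 0.3))
pred = kelm_predict(X, X, beta, w=(0.7, 0.3))
```

    In the full method, an outer QPSO loop would evaluate many candidate (w, sigma, degree, C) tuples and keep the one with the best validation accuracy.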

  20. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves performance similar to or better than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
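
    As a concrete illustration (a sketch assuming the form k(x, z) = max(rho - ||x - z||_1, 0), not the authors' code), the truncation that localizes the classifier is easy to see:

```python
import numpy as np

def tl1_kernel(X, Z, rho):
    """Truncated L1-distance kernel: k(x, z) = max(rho - ||x - z||_1, 0).

    Pairs farther apart than rho in L1 distance get kernel value 0, so
    each training point only influences its own subregion.
    """
    d1 = np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=2)
    return np.maximum(rho - d1, 0.0)

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
K = tl1_kernel(X, X, rho=3.0)
# K[0, 1] = 3 - 2 = 1; K[0, 2] = 0 because the L1 distance (10) exceeds rho.
```

    Because the resulting Gram matrix can be indefinite, the kernel is meant to be swapped into toolboxes that do not enforce positive semidefiniteness, as the abstract notes.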

  1. Ascofuranone stimulates expression of adiponectin and peroxisome proliferator activated receptor through the modulation of mitogen activated protein kinase family members in 3T3-L1, murine pre-adipocyte cell line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Young-Chae, E-mail: ycchang@cu.ac.kr; Cho, Hyun-Ji, E-mail: hjcho.dr@gmail.com

    2012-06-08

    Highlights: • Ascofuranone increases expression of adiponectin and PPARγ. • Inhibitors of MEK and JNK increased the expression of adiponectin and PPARγ. • Ascofuranone significantly suppressed phospho-ERK while increasing phospho-p38. -- Abstract: Ascofuranone, an isoprenoid antibiotic, was originally isolated as a hypolipidemic substance from a culture broth of the phytopathogenic fungus Ascochyta viciae. Adiponectin is mainly synthesized by adipocytes. It relieves insulin resistance by decreasing plasma triglycerides and improving glucose uptake, and has anti-atherogenic properties. Here, we found that ascofuranone increases expression of adiponectin and PPARγ, a major transcription factor for adiponectin, in 3T3-L1, a murine pre-adipocyte cell line, without promoting accumulation of lipid droplets. Ascofuranone induced expression of adiponectin and increased the promoter activity of adiponectin and PPRE (the PPAR response element) as effectively as the PPARγ agonist rosiglitazone, which stimulates lipid accumulation in this preadipocyte cell line. Moreover, inhibitors of MEK and JNK, like ascofuranone, considerably increased the expression of adiponectin and PPARγ, while a p38 inhibitor significantly suppressed it. Ascofuranone significantly suppressed ERK phosphorylation, while increasing p38 phosphorylation, during the adipocyte differentiation program. These results suggest that ascofuranone regulates the expression of adiponectin and PPARγ through the modulation of MAP kinase family members.

  2. Analysis of the spatial distribution of dengue cases in the city of Rio de Janeiro, 2011 and 2012

    PubMed Central

    Carvalho, Silvia; Magalhães, Mônica de Avelar Figueiredo Mafra; Medronho, Roberto de Andrade

    2017-01-01

    ABSTRACT OBJECTIVE Analyze the spatial distribution of classical dengue and severe dengue cases in the city of Rio de Janeiro. METHODS Exploratory study considering cases of classical dengue and severe dengue with laboratory confirmation of the infection in the city of Rio de Janeiro during the years 2011-2012. The georeferencing technique was applied to the cases notified in the Notifiable Diseases Information System (SINAN) in 2011 and 2012, using the fields “street” and “number” and the automatic process of the ArcGIS 10 Geocoding tool. The spatial analysis was done through the kernel density estimator. RESULTS Kernel density pointed out hotspots for classic dengue that did not coincide geographically with severe dengue and were in or near favelas. The kernel ratio did not show a notable change in the spatial distribution pattern observed in the kernel density analysis. The georeferencing process showed a loss of 41% of classic dengue registries and 17% of severe dengue registries due to the address field in the SINAN form. CONCLUSIONS The hotspots near the favelas suggest that the social vulnerability of these localities can be an influencing factor for the occurrence of this disease, since there is a deficiency in the supply of, and access to, essential goods and services for the population. To reduce this vulnerability, interventions must be related to macroeconomic policies. PMID:28832752
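
    The kernel density estimation step can be sketched with a Gaussian kernel evaluated on a grid. This is an illustrative sketch only: the synthetic coordinates and the fixed bandwidth below stand in for the georeferenced case addresses and the smoothing parameters a real analysis would choose.

```python
import numpy as np

def gaussian_kde_grid(points, grid, bandwidth):
    """Kernel density estimate at each grid location:
    density(g) proportional to (1/n) * sum_i exp(-||g - p_i||^2 / (2 h^2)).
    The normalizing constant is omitted; it does not move the hotspot."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
cluster = rng.normal(loc=(2.0, 3.0), scale=0.3, size=(200, 2))  # simulated hotspot
background = rng.uniform(0.0, 10.0, size=(100, 2))              # scattered cases
cases = np.vstack([cluster, background])

xs, ys = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = gaussian_kde_grid(cases, grid, bandwidth=0.5)
hotspot = grid[density.argmax()]  # lands near the simulated cluster at (2, 3)
```

    The kernel ratio mentioned in the abstract would divide a case-density surface like this one by a population-density surface computed the same way.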

  3. Comparative analysis of genetic architectures for nine developmental traits of rye.

    PubMed

    Masojć, Piotr; Milczarski, P; Kruszona, P

    2017-08-01

    Genetic architectures of plant height, stem thickness, spike length, awn length, heading date, thousand-kernel weight, kernel length, leaf area and chlorophyll content were aligned on the DArT-based high-density map of the 541 × Ot1-3 RILs population of rye using the genes interaction assorting by divergent selection (GIABDS) method. Complex sets of QTL for particular traits contained 1-5 loci of the epistatic D class and 10-28 loci of the hypostatic, mostly R and E classes controlling trait variation through D-E or D-R types of two-loci interactions. QTL were distributed on each of the seven rye chromosomes in unique positions or as coinciding loci for 2-8 traits. Detection of considerable numbers of the reversed (D', E' and R') classes of QTL might be attributed to the transgression effects observed for most of the studied traits. First examples of the E* and F QTL classes, defined in the model, are reported for awn length, leaf area, thousand-kernel weight and kernel length. The results of this study extend experimental data to 11 quantitative traits (together with pre-harvest sprouting and alpha-amylase activity) for which genetic architectures fit the model of the mechanism underlying allele distribution within the tails of bi-parental populations. They are also a valuable starting point for a map-based search of genes underlying detected QTL and for planning advanced marker-assisted multi-trait breeding strategies.

  4. Acceleration of GPU-based Krylov solvers via data transfer reduction

    DOE PAGES

    Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...

    2015-04-08

    Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance graphics processing unit accelerated Krylov subspace iterative methods.
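
    The kind of kernel fusion the abstract describes can be illustrated with a toy CSR sparse matrix-vector product that folds a dot-product reduction into the same pass over the data. This is a Python sketch of the idea only; the actual work targets CUDA kernels replacing cuBLAS calls.

```python
import numpy as np

def fused_spmv_dot(data, indices, indptr, x):
    """y = A x (CSR format) fused with the reduction <x, A x>.

    A generic-BLAS formulation would launch one kernel for the SpMV and a
    second for the dot product, writing y to memory and reading it back;
    fusing them touches each row's data exactly once.
    """
    n = len(indptr) - 1
    y = np.empty(n)
    xay = 0.0
    for i in range(n):
        lo, hi = indptr[i], indptr[i + 1]
        yi = data[lo:hi] @ x[indices[lo:hi]]
        y[i] = yi
        xay += x[i] * yi  # reduction folded into the same sweep
    return y, xay

# A = [[2, 0], [1, 3]] in CSR form (hypothetical example matrix).
data = np.array([2.0, 1.0, 3.0])
indices = np.array([0, 0, 1])
indptr = np.array([0, 1, 3])
x = np.array([1.0, 2.0])
y, xay = fused_spmv_dot(data, indices, indptr, x)
# y = [2, 7] and <x, A x> = 1*2 + 2*7 = 16
```

    BiCGStab needs several such matrix-vector products and inner products per iteration, which is why fusing them reduces the data traffic that dominates GPU runtime.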

  5. First On-Site True Gamma-Ray Imaging-Spectroscopy of Contamination near Fukushima Plant

    PubMed Central

    Tomono, Dai; Mizumoto, Tetsuya; Takada, Atsushi; Komura, Shotaro; Matsuoka, Yoshihiro; Mizumura, Yoshitaka; Oda, Makoto; Tanimori, Toru

    2017-01-01

    We have developed an Electron Tracking Compton Camera (ETCC), which provides a well-defined Point Spread Function (PSF) by reconstructing a direction of each gamma as a point and realizes simultaneous measurement of brightness and spectrum of MeV gamma-rays for the first time. Here, we present the results of our on-site pilot gamma-imaging-spectroscopy with ETCC at three contaminated locations in the vicinity of the Fukushima Daiichi Nuclear Power Plants in Japan in 2014. The obtained distribution of brightness (or emissivity) with remote-sensing observations is unambiguously converted into the dose distribution. We confirm that the dose distribution is consistent with the one taken by conventional mapping measurements with a dosimeter physically placed at each grid point. Furthermore, its imaging spectroscopy, boosted by Compton-edge-free spectra, reveals complex radioactive features in a quantitative manner around each individual target point in the background-dominated environment. Notably, we successfully identify a “micro hot spot” of residual caesium contamination even in an already decontaminated area. These results show that the ETCC performs exactly as the geometrical optics predicts, demonstrates its versatility in the field radiation measurement, and reveals potentials for application in many fields, including the nuclear industry, medical field, and astronomy. PMID:28155883

  6. Nucleosynthesis, neutrino bursts and gamma-rays from coalescing neutron stars

    NASA Technical Reports Server (NTRS)

    Eichler, David; Livio, Mario; Piran, Tsvi; Schramm, David N.

    1989-01-01

    It is pointed out here that neutron-star collisions should synthesize neutron-rich heavy elements, thought to be formed by rapid neutron capture (the r-process). Furthermore, these collisions should produce neutrino bursts and resultant bursts of gamma rays; the latter should comprise a subclass of observable gamma-ray bursts. It is argued that observed r-process abundances and gamma-ray burst rates predict rates for these collisions that are both significant and consistent with other estimates.

  7. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  8. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
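
    The key numerical step described above, applying only the kernel PCA eigenvectors associated with positive eigenvalues of the centered Gram matrix, can be sketched as follows. The fractional power and data are illustrative values, not taken from the paper.

```python
import numpy as np

def frac_poly_kernel(X, Z, p=0.8):
    # Fractional power polynomial "kernel"; for 0 < p < 1 it need not
    # produce a positive semidefinite Gram matrix.
    s = X @ Z.T + 1.0
    return np.sign(s) * np.abs(s) ** p

def kernel_pca_features(K, n_components):
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                   # double-center the Gram matrix
    w, V = np.linalg.eigh(Kc)
    order = np.argsort(w)[::-1]      # eigenvalues in descending order
    w, V = w[order], V[:, order]
    keep = w > 1e-10                 # discard non-positive eigenvalues
    w, V = w[keep], V[:, keep]
    # Projected training features: sqrt(lambda_k) * v_k components.
    return V[:, :n_components] * np.sqrt(w[:n_components])

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))         # hypothetical feature vectors
F = kernel_pca_features(frac_poly_kernel(X, X), n_components=3)
```

    Filtering to positive eigenvalues is what keeps the extracted features real-valued even though the Gram matrix is indefinite.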

  9. Break Point Distribution on Chromosome 3 of Human Epithelial Cells exposed to Gamma Rays, Neutrons and Fe Ions

    NASA Technical Reports Server (NTRS)

    Hada, M.; Saganti, P. B.; Gersey, B.; Wilkins, R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

    Most of the reported studies of break point distribution on chromosomes damaged by radiation exposure were carried out with the G-banding technique or determined based on the relative length of the broken chromosomal fragments. However, these techniques lack accuracy in comparison with the later-developed multicolor banding in situ hybridization (mBAND) technique that is generally used for analysis of intrachromosomal aberrations such as inversions. Using mBAND, we studied chromosome aberrations in human epithelial cells exposed in vitro to either low or high dose rate gamma rays in Houston, low dose rate secondary neutrons at Los Alamos National Laboratory, and high dose rate 600 MeV/u Fe ions at NASA Space Radiation Laboratory. Detailed analysis of the inversion type revealed that all three radiation types induced a low incidence of simple inversions. Half of the inversions observed after neutron or Fe ion exposure, and the majority of inversions in gamma-irradiated samples, were accompanied by other types of intrachromosomal aberrations. In addition, neutrons and Fe ions induced a significant fraction of inversions that involved complex rearrangements of both inter- and intrachromosome exchanges. We further compared the break point distribution on chromosome 3 for the three radiation types. The break points were found to be randomly distributed on chromosome 3 after neutron or Fe ion exposure, whereas a non-random distribution with clustering of break points was observed for gamma rays. The break point distribution may serve as a potential fingerprint of high-LET radiation exposure.

  10. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
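
    The notion of an "applicable kernel set", the multi-label target each data set receives in the meta-knowledge base, can be made concrete with a small sketch. The tolerance value and kernel accuracies below are assumptions for illustration, not figures from the paper.

```python
def applicable_kernels(accuracy, tolerance=0.02):
    """Label a data set with every kernel whose accuracy is within
    `tolerance` of the best one. Several kernels may perform equally
    well, which is why recommendation is framed as multi-label learning.

    accuracy: dict mapping kernel name -> cross-validated accuracy.
    """
    best = max(accuracy.values())
    return sorted(k for k, a in accuracy.items() if a >= best - tolerance)

# Hypothetical per-kernel accuracies for one data set:
labels = applicable_kernels({"rbf": 0.95, "linear": 0.94, "poly": 0.88})
# "linear" and "rbf" both become positive labels in the meta-knowledge base.
```

    The recommendation model then learns to map data-characteristic feature vectors to such label sets, so that a new data set gets its kernel candidates without running SVM with every kernel.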

  11. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  12. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  13. Forced Ignition Study Based On Wavelet Method

    NASA Astrophysics Data System (ADS)

    Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.

    2011-05-01

    The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets, is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.
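
    The grid-refinement principle behind interpolating-wavelet methods can be sketched in one dimension. This is a simplified illustration of the idea, not the authors' solver: a grid point is kept only where the interpolation from its neighbors misses the true value by more than a threshold (the "detail" coefficient), so points concentrate where the solution is not smooth.

```python
import numpy as np

def adapt_grid(f_vals, x, threshold):
    """One level of interpolating-wavelet style coarsening: keep a midpoint
    only where linear interpolation from its two neighbors misses the
    sampled value by more than `threshold`."""
    keep = [x[0]]
    for i in range(1, len(x) - 1, 2):  # odd indices are midpoints
        detail = abs(f_vals[i] - 0.5 * (f_vals[i - 1] + f_vals[i + 1]))
        if detail > threshold:
            keep.append(x[i])          # flame-front-like region: refine
        keep.append(x[i + 1])          # coarse (even) points always kept
    return np.array(keep)

x = np.linspace(0.0, 1.0, 9)
f = np.where(x < 0.5, 0.0, 1.0)        # a sharp front, like a thin flame
grid = adapt_grid(f, x, threshold=0.1)
# Midpoints survive only near the front; the smooth regions are coarsened.
```

    Applying this test recursively across levels is what gives the reported reduction in degrees of freedom for a prescribed accuracy.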

  14. Novel procedure for characterizing nonlinear systems with memory: 2017 update

    NASA Astrophysics Data System (ADS)

    Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.

    2017-05-01

    The present article discusses novel improvements in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3]. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order and alleviate the Curse of Dimensionality (COD) in order to realize practical nonlinear solutions of scientific and engineering interest.
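
    The least-squares kernel fit described above can be sketched for a discrete, second-order Volterra model. This is a simplified stand-in for the NWV formulation: the memory depth, the synthetic channel, and the truncation to second order are all illustrative assumptions.

```python
import numpy as np

def volterra_design(x, memory):
    """Design matrix Phi(x): constant term, lagged inputs (first-order
    kernel), and quadratic lag products (second-order kernel)."""
    n = len(x)
    lags = np.stack([np.concatenate([np.zeros(k), x[:n - k]])
                     for k in range(memory)], axis=1)
    quad = np.stack([lags[:, i] * lags[:, j]
                     for i in range(memory) for j in range(i, memory)], axis=1)
    return np.hstack([np.ones((n, 1)), lags, quad])

def fit_volterra(x, z, memory):
    """Least-squares kernel estimate h minimizing ||z - Phi(x) h||."""
    Phi = volterra_design(x, memory)
    h, *_ = np.linalg.lstsq(Phi, z, rcond=None)
    return h

# Synthetic check: the fit recovers a known memoryless quadratic channel.
rng = np.random.default_rng(1)
x = rng.normal(size=500)           # excitation waveform x(t)
z = 0.5 + 1.0 * x + 0.3 * x ** 2   # response waveform z(t)
h = fit_volterra(x, z, memory=2)
```

    The dimensionality problem the article addresses is visible even here: the number of quadratic columns grows as memory*(memory+1)/2, and cubically worse at third order.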

  15. Rocksalt or cesium chloride: Investigating the relative stability of the cesium halide structures with random phase approximation based methods

    NASA Astrophysics Data System (ADS)

    Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.

    2018-03-01

    The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.

  16. Bandlimited computerized improvements in characterization of nonlinear systems with memory

    NASA Astrophysics Data System (ADS)

    Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.

    2016-05-01

    The present article discusses some inroads in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra 'NWV' algorithm, is named for its principal contributors [1], [2], [3] over many years of developmental research. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms on the system are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between the measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order in order to combat and reasonably alleviate the curse of dimensionality.

  17. HEAO C-1 gamma-ray spectrometer. [experimental design

    NASA Technical Reports Server (NTRS)

    Mahoney, W. A.; Ling, J. C.; Willett, J. B.; Jacobson, A. S.

    1978-01-01

    The gamma-ray spectroscopy experiment to be launched on the third High Energy Astronomy Observatory (HEAO C) will perform a complete sky search for narrow gamma-ray line emission to the level of about 0.0001 photons/sq cm-sec for steady point sources. The design of this experiment and its performance based on testing and calibration to date are discussed.

  18. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and the Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and extend our study to the Protein-Protein Interaction (PPI) extraction task, another important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed across multiple sentences. Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially; the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant.
In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel for the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs. 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed across multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is significantly more accurate than the ASM kernel, achieving better performance on most datasets.

  19. The GAMMA Ray Sky as Seen by Fermi: Opening a New Window on the High Energy Space Environment

    DTIC Science & Technology

    2009-01-01

    pulsars, stars whose repeating emissions can be used as ultra-precise chronometers. Measurement of gamma radiation provides unique insight... diffuse glow are a number of bright point sources, mostly gamma-ray pulsars - rotating, magnetized neutron stars - as discussed below. The bright sources... important early discoveries of Fermi have been from objects in our galaxy. The LAT has discovered 12 new pulsars that seem to be visible only in gamma

  20. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  1. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  2. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  3. Secondary gamma-ray production in a coded aperture mask

    NASA Technical Reports Server (NTRS)

    Owens, A.; Frye, G. M., Jr.; Hall, C. J.; Jenkins, T. L.; Pendleton, G. N.; Carter, J. N.; Ramsden, D.; Agrinier, B.; Bonfand, E.; Gouiffes, C.

    1985-01-01

    The application of the coded aperture mask to high energy gamma-ray astronomy will provide the capability of locating a cosmic gamma-ray point source with a precision of a few arc-minutes above 20 MeV. Recent tests using a mask in conjunction with drift chamber detectors have shown that the expected point spread function is achieved over an acceptance cone of 25 deg. A telescope employing this technique differs from a conventional telescope only in that the presence of the mask modifies the radiation field in the vicinity of the detection plane. In addition to reducing the primary photon flux incident on the detector by absorption in the mask elements, the mask will also be a secondary radiator of gamma-rays. The various background components in a CAMTRAC (Coded Aperture Mask Track Chamber) telescope are considered. Monte-Carlo calculations are compared with recent measurements obtained using a prototype instrument in a tagged photon beam line.

  4. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company.
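    The source- and sink-limited allocation logic described above can be sketched in a few lines. This is only an illustration of the accounting idea; the function name, the proportional-sharing rule and the remobilization rule are assumptions for this sketch, not the GREENLAB-Maize-Kernel implementation:

```python
def allocate(supply, demands, reserve):
    """One time step of assimilate allocation to kernels.

    supply  -- assimilate currently produced by source organs
    demands -- potential growth rate (sink demand) of each kernel
    reserve -- non-structural carbohydrate pool in reserve organs
    Returns (growth per kernel, updated reserve).
    """
    total_demand = sum(demands)
    if supply >= total_demand:
        # Sink-limited: every kernel grows at its potential rate and
        # the surplus assimilate is stored in the reserve pool.
        return list(demands), reserve + (supply - total_demand)
    # Source-limited: remobilize carbohydrates from the reserve pool to
    # cover part of the deficit, then share proportionally to demand.
    remobilized = min(total_demand - supply, reserve)
    available = supply + remobilized
    growth = [d * available / total_demand for d in demands]
    return growth, reserve - remobilized

growth, reserve = allocate(supply=5.0, demands=[2.0, 3.0, 4.0], reserve=2.5)
print(growth, reserve)  # a source-limited step that draws down the reserve
```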

  5. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study, including qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins, in order to assess them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels because of low food consumption due to their bitterness, though there was no loss in weight. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels; the Net Protein Ratios of the latter two kernels were nearly equal.

  6. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
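    The common thread of these methods is that a positive semi-definite kernel plays the role of an inner product in a feature space, so any algorithm expressed through inner products can be kernelized. A minimal numpy sketch of one of the cited examples, kernel principal component analysis with an RBF kernel (all parameter values are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    # Center the Gram matrix in feature space, then eigendecompose it.
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # keep the top components
    # Projections of the training points onto the principal axes.
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (50, 2)
```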

  7. Space-Borne Observations of Intense Gamma-Ray Flashes (TGFs) Above Thunderstorms

    NASA Technical Reports Server (NTRS)

    Fishman, Gerald J.

    2011-01-01

    Intense millisecond flashes of MeV photons have been observed with space-borne detectors. These terrestrial gamma-ray flashes (TGFs) were discovered with the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma-Ray Observatory (CGRO) in the early 1990s. They are now being observed with several other instruments, including the Gamma-ray Burst Monitor (GBM) detectors on the Fermi Gamma-ray Space Telescope. Although Fermi-GBM was designed and optimized for the observation of cosmic gamma-ray bursts (GRBs), it has unprecedented capabilities for these TGF observations. On several occasions, intense beams of high-energy electrons and positrons have been observed at the geomagnetic conjugate points of TGFs.

  8. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  9. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel. This new kernel, called the "Hybrid" kernel, combined a low-pass and a high-pass kernel. As expected, the performance of the Hybrid kernel fell between that of the two kernels on which it was based. The Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU values, and image noise.
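    The paper does not publish the kernel weights, but the idea of blending a smooth (low-pass) kernel with an edge-enhancing (high-pass) kernel can be illustrated in one dimension; all sizes and weights below are assumptions for the sketch, not the clinical kernels:

```python
import numpy as np

def gaussian_kernel(size=9, sigma=2.0):
    # Smooth ("soft") kernel: suppresses noise but blurs detail.
    x = np.arange(size) - size // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def sharpen_kernel(size=9, sigma=2.0, boost=1.5):
    # Edge-enhancing ("sharp") kernel: identity plus a high-pass term.
    g = gaussian_kernel(size, sigma)
    delta = np.zeros(size)
    delta[size // 2] = 1.0
    return delta + boost * (delta - g)

def hybrid_kernel(alpha=0.5, size=9, sigma=2.0):
    # Weighted blend of the two, in the spirit of the "Hybrid" kernel.
    return alpha * gaussian_kernel(size, sigma) \
        + (1.0 - alpha) * sharpen_kernel(size, sigma)

signal = np.r_[np.zeros(20), np.ones(20)]  # a step edge, like a lung boundary
smooth = np.convolve(signal, gaussian_kernel(), mode="same")
hybrid = np.convolve(signal, hybrid_kernel(), mode="same")
# The hybrid response keeps the edge steeper than the purely smooth kernel.
```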

  10. Initial results from a multi-point mapping observation of thundercloud high-energy radiation in coastal area of Japan sea

    NASA Astrophysics Data System (ADS)

    Wada, Y.; Enoto, T.; Furuta, Y.; Nakazawa, K.; Yuasa, T.; Okuda, K.; Makishima, K.; Nakano, T.; Umemoto, D.; Tsuchiya, H.

    2017-12-01

    On-ground detections of Thunderstorm Radiation Bursts (TRBs), which mainly consist of bremsstrahlung gamma rays with energies extending up to 20 MeV, indicate powerful electron acceleration inside thunderclouds or along lightning discharge paths (e.g. Torii et al., 2002; Tsuchiya et al., 2007; Dwyer et al., 2004). In order to resolve the time variation and structure of the electron accelerators, we have been constructing a multi-point mapping observation network since 2015, with the aim of tracing gamma rays from moving thunderclouds. In fiscal 2016, we developed low-cost, small-sized detectors dedicated to this observation. The data acquisition system records the energy and timing of individual gamma-ray photons with 4-channel, 50-MHz-sampling electronics boards (9.5 cm x 9.5 cm) coupled to BGO scintillator crystals, and the systems were installed in portable waterproof boxes. We operated 10 detectors in two areas (Ishikawa and Niigata) along the coast of the Japan Sea from October 2016 to April 2017. During this period, the detectors in Ishikawa detected a total of 10 TRBs, each lasting several minutes and associated with the passage of a thundercloud. Our previous single-site measurement at Niigata recorded 1.4 TRBs per year on average in 2006-2015; the new multi-point observation therefore detected 7 times as many events as the previous system. One of the TRB gamma-ray spectra was well fitted by a cutoff power-law model. A Monte Carlo simulation revealed that this spectrum is explained as bremsstrahlung from a monochromatic 15-MeV electron beam generated at an altitude of 500 m. We also succeeded in tracing gamma rays from the same moving thundercloud with two detectors, demonstrating the performance of the multi-point observation. In addition, in January and February 2017 at Niigata, we simultaneously detected "short TRBs" lasting a few hundred milliseconds, associated with lightning discharges, with four independent detectors placed 500 m apart.
The results from the 2016-2017 winter season show that our multi-point observation can reliably detect a large number of TRBs and trace gamma rays from thunderstorms.

  11. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
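    Schematically, the TOP construction maps an input to the posterior log-odds together with its tangent (gradient) with respect to the model parameters, and takes an inner product of these feature vectors. A toy sketch for a two-class, unit-variance Gaussian model; the model and the numeric differentiation are illustrative simplifications, not the paper's exact construction:

```python
import numpy as np

def log_odds(x, theta):
    # Posterior log-odds for two unit-variance Gaussian classes with
    # equal priors; theta = (mu_plus, mu_minus). Illustrative model only.
    mu_p, mu_m = theta
    return -0.5 * (x - mu_p) ** 2 + 0.5 * (x - mu_m) ** 2

def top_features(x, theta, eps=1e-5):
    # TOP feature vector: the log-odds itself plus its tangent (gradient)
    # with respect to the model parameters, here by central differences.
    feats = [log_odds(x, theta)]
    for i in range(len(theta)):
        hi = np.array(theta, dtype=float); hi[i] += eps
        lo = np.array(theta, dtype=float); lo[i] -= eps
        feats.append((log_odds(x, hi) - log_odds(x, lo)) / (2.0 * eps))
    return np.array(feats)

def top_kernel(x1, x2, theta):
    # Kernel value = inner product of the tangent-of-posterior features,
    # so the kernel is symmetric and positive semi-definite by construction.
    return float(top_features(x1, theta) @ top_features(x2, theta))

theta = (1.0, -1.0)
print(top_kernel(0.5, 0.8, theta))
```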

  12. The structure, logic of operation and distinctive features of the system of triggers and counting signals formation for gamma-telescope GAMMA-400

    NASA Astrophysics Data System (ADS)

    Topchiev, N. P.; Galper, A. M.; Arkhangelskiy, A. I.; Arkhangelskaja, I. V.; Kheymits, M. D.; Suchkov, S. I.; Yurkin, Y. T.

    2017-01-01

    The scientific project GAMMA-400 (Gamma Astronomical Multifunctional Modular Apparatus) belongs to the new generation of space observatories intended to perform an indirect search for signatures of dark matter in cosmic-ray fluxes, and to measure the characteristics of diffuse gamma-ray emission, gamma rays from the Sun during periods of solar activity, gamma-ray bursts, extended and point gamma-ray sources, and electron/positron and cosmic-ray nuclei fluxes up to the TeV energy region, by means of the GAMMA-400 gamma-ray telescope, which represents the core of the scientific complex. The system of triggers and counting signals formation of the GAMMA-400 gamma-ray telescope constitutes a pipelined processor structure that collects data from the gamma-ray telescope subsystems and produces summary information used in forming the trigger decision for each event. The system design is based on state-of-the-art reconfigurable logic devices and fast data links. The basic structure, logic of operation and distinctive features of the system are presented.

  13. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
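    The (non-incremental) projection trick itself is easy to sketch: eigendecompose the positive semi-definite kernel matrix K to get explicit coordinates Y with Y Y^T = K, and embed a new sample from its vector of kernel values against the training set. A numpy illustration of that baseline, without the incremental update that is the paper's contribution:

```python
import numpy as np

def npt_coordinates(K, rtol=1e-9):
    # Nonlinear projection trick: explicit coordinates Y with Y @ Y.T == K,
    # from the eigendecomposition of the PSD kernel matrix.
    vals, vecs = np.linalg.eigh(K)            # ascending eigenvalues
    keep = vals > rtol * vals[-1]             # drop numerically-zero directions
    vals, vecs = vals[keep], vecs[:, keep]
    Y = vecs * np.sqrt(vals)                  # training-sample coordinates
    pinv = vecs / np.sqrt(vals)               # maps kernel vectors to coordinates
    embed = lambda k: k @ pinv                # embed a new sample from k(x, X_train)
    return Y, embed

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
K = (X @ X.T + 1.0) ** 2                      # polynomial kernel (PSD)
Y, embed = npt_coordinates(K)
print(np.allclose(Y @ Y.T, K))                # True: coordinates reproduce K
```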

  14. Characterization of phospholipase C gamma enzymes with gain-of-function mutations.

    PubMed

    Everett, Katy L; Bunney, Tom D; Yoon, Youngdae; Rodrigues-Lima, Fernando; Harris, Richard; Driscoll, Paul C; Abe, Koichiro; Fuchs, Helmut; de Angelis, Martin Hrabé; Yu, Philipp; Cho, Wohnwa; Katan, Matilda

    2009-08-21

    Phospholipase C gamma isozymes (PLC gamma 1 and PLC gamma 2) have a crucial role in the regulation of a variety of cellular functions. Both enzymes have also been implicated in signaling events underlying aberrant cellular responses. Using N-ethyl-N-nitrosourea (ENU) mutagenesis, we have recently identified single point mutations in murine PLC gamma 2 that lead to spontaneous inflammation and autoimmunity. Here we describe further, mechanistic characterization of two gain-of-function mutations, D993G and Y495C, designated as ALI5 and ALI14. The residue Asp-993, mutated in ALI5, is a conserved residue in the catalytic domain of PLC enzymes. Analysis of PLC gamma 1 and PLC gamma 2 with point mutations of this residue showed that removal of the negative charge enhanced PLC activity in response to EGF stimulation or activation by Rac. Measurements of PLC activity in vitro and analysis of membrane binding have suggested that ALI5-type mutations facilitate membrane interactions without compromising substrate binding and hydrolysis. The residue mutated in ALI14 (Tyr-495) is within the spPH domain. Replacement of this residue had no effect on folding of the domain and enhanced Rac activation of PLC gamma 2 without increasing Rac binding. Importantly, the activation of the ALI14-PLC gamma 2 and corresponding PLC gamma 1 variants was enhanced in response to EGF stimulation and bypassed the requirement for phosphorylation of critical tyrosine residues. ALI5- and ALI14-type mutations affected basal activity only slightly; however, their combination resulted in a constitutively active PLC. Based on these data, we suggest that each mutation could compromise auto-inhibition in the inactive PLC, facilitating the activation process; in addition, ALI5-type mutations could enhance membrane interaction in the activated state.

  15. Graph embedding and extensions: a general framework for dimensionality reduction.

    PubMed

    Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen

    2007-01-01

    Over the past few decades, a large family of algorithms - supervised or unsupervised, stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding, or its linear/kernel/tensor extension, of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis (LDA) algorithm due to its data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for the corresponding kernel and tensor extensions.

  16. An evaluation of potential sampling locations in a reservoir with emphasis on conserved spatial correlation structure.

    PubMed

    Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül

    2015-01-01

    In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in the measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR), with conservation of the spatial correlation structure in the DO distribution as a target. KDE was used as a tool to help identify the sampling locations to remove from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10, in increments of 4 or 5 points at a time, based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. The performance of the networks in DO estimation was evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved. Results indicated that a smaller number of sampling points led to a loss of information about the spatial correlation structure in DO; the minimum number of representative sampling points for PDR was 35. The efficacy of the sampling location selection method was tested against networks generated by experts, showing that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.
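    The density-ranking idea behind the KDE step can be sketched as follows: stations sitting in densely sampled clusters are the most redundant and hence the first candidates for removal. All coordinates and the bandwidth below are hypothetical, not the PDR network:

```python
import numpy as np

def kde_density(points, bandwidth=0.5):
    # Isotropic Gaussian kernel density estimate, evaluated at the
    # sampling locations themselves (up to a constant normalization).
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / bandwidth ** 2).sum(axis=1)

# Hypothetical station coordinates: a tight cluster plus scattered stations.
rng = np.random.default_rng(2)
cluster = rng.normal(loc=(5.0, 5.0), scale=0.3, size=(20, 2))
scattered = rng.uniform(0.0, 10.0, size=(10, 2))
stations = np.vstack([cluster, scattered])

density = kde_density(stations)
# Rank stations by local density; the densest (most redundant) go first.
to_remove = np.argsort(density)[::-1][:5]
print(to_remove)
```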

  17. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction.

    PubMed

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2015-01-07

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes the resolution-degrading factors into account in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model: higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than with other methods.

  18. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction

    PubMed Central

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2016-01-01

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes the resolution-degrading factors into account in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model: higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than with other methods. PMID:25490063

  19. Alaska/Yukon Geoid Improvement by a Data-Driven Stokes's Kernel Modification Approach

    NASA Astrophysics Data System (ADS)

    Li, Xiaopeng; Roman, Daniel R.

    2015-04-01

    Geoid modeling over Alaska (USA) and Yukon (Canada), a trans-national issue, faces great challenges, primarily due to inhomogeneous surface gravity data (Saleh et al., 2013), dynamic geology (Freymueller et al., 2008), and a complex geological rheology. A previous study (Roman and Li, 2014) used updated satellite models (Bruinsma et al., 2013) and newly acquired aerogravity data from the GRAV-D project (Smith, 2007) to capture gravity field changes in the target areas, primarily at middle-to-long wavelengths. In CONUS, the geoid model was largely improved. However, the precision of the resulting geoid model in Alaska was still at the decimeter level: 19 cm at the 32 tide benchmarks and 24 cm at the 202 GPS/leveling benchmarks, giving a total of 23.8 cm at all of these calibrated surface control points after the datum bias was removed. Conventional kernel modification methods in this area (Li and Wang, 2011) had limited effect on improving the precision of the geoid models. To compensate for the geoid misfits, a new Stokes's kernel modification method based on a data-driven technique is presented in this study. First, the method was tested on simulated data sets (Fig. 1), where the geoid errors were reduced by two orders of magnitude (Fig. 2). For the real data sets, some iteration steps are required to overcome the rank-deficiency problem caused by the limited control data, which are irregularly distributed in the target area. For instance, after 3 iterations, the standard deviation dropped by about 2.7 cm (Fig. 3). Modification at other critical degrees can further minimize the geoid model misfits caused either by the gravity error or by the remaining datum error in the control points.

  20. Lagged kernel machine regression for identifying time windows of susceptibility to exposures of complex mixtures.

    PubMed

    Liu, Shelley H; Bobb, Jennifer F; Lee, Kyu Ha; Gennings, Chris; Claus Henn, Birgit; Bellinger, David; Austin, Christine; Schnaas, Lourdes; Tellez-Rojo, Martha M; Hu, Howard; Wright, Robert O; Arora, Manish; Coull, Brent A

    2018-07-01

    The impact of neurotoxic chemical mixtures on children's health is a critical public health concern. It is well known that during early life, toxic exposures may impact cognitive function during critical time intervals of increased vulnerability, known as windows of susceptibility. Knowledge of time windows of susceptibility can help inform treatment and prevention strategies, as chemical mixtures may affect a developmental process that is operating at a specific life phase. There are several statistical challenges in estimating the health effects of time-varying exposures to multi-pollutant mixtures, such as multi-collinearity among the exposures both within and across time points, and complex exposure-response relationships. To address these concerns, we develop a flexible statistical method, called lagged kernel machine regression (LKMR). LKMR identifies critical exposure windows of chemical mixtures, and accounts for complex non-linear and non-additive effects of the mixture at any given exposure window. Specifically, LKMR estimates how the effects of a mixture of exposures change with the exposure time window, using a Bayesian formulation of a grouped, fused lasso penalty within a kernel machine regression (KMR) framework. A simulation study demonstrates the performance of LKMR under realistic exposure-response scenarios, and demonstrates large gains over approaches that consider each time window separately, particularly when serial correlation among the time-varying exposures is high. Furthermore, LKMR demonstrates gains over another approach that inputs all time-specific chemical concentrations together into a single KMR. We apply LKMR to estimate associations between neurodevelopment and metal mixtures in Early Life Exposures in Mexico and Neurotoxicology, a prospective cohort study of child health in Mexico City.
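    At the core of LKMR is the kernel machine regression framework, in which the exposure-response surface is represented through a kernel. A basic (unpenalized) kernel ridge regression sketch of that framework, without the grouped fused lasso prior that LKMR adds; all data and parameter values are synthetic:

```python
import numpy as np

def kernel_ridge(X, y, gamma=1.0, alpha=0.1):
    # Basic kernel machine regression (kernel ridge) with an RBF kernel.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    coef = np.linalg.solve(K + alpha * np.eye(len(y)), y)
    def predict(Xnew):
        sqn = np.sum(Xnew ** 2, axis=1)
        Kn = np.exp(-gamma * (sqn[:, None] + sq[None, :] - 2.0 * Xnew @ X.T))
        return Kn @ coef
    return predict

# Synthetic "exposures at one time window" with a non-additive response,
# the kind of surface a linear model would miss.
rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=100)
predict = kernel_ridge(X, y)
print(np.mean((y - predict(X)) ** 2))  # small in-sample error
```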

  1. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
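    The two discretization schemes compared in the abstract differ in how each grid cell receives kernel mass: the cell-center method samples the Gaussian at the cell midpoint, while the cell-integration method integrates it over the cell. A one-dimensional numpy sketch (the paper's kernels are two-dimensional) showing how the two diverge for small σ:

```python
import numpy as np
from math import erf, sqrt

def kernel_cell_center(sigma, radius):
    # Sample the Gaussian at each cell midpoint, then normalize.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def kernel_cell_integrated(sigma, radius):
    # Integrate the Gaussian over each cell [i - 0.5, i + 0.5].
    cdf = lambda t: 0.5 * (1.0 + erf(t / (sigma * sqrt(2.0))))
    k = np.array([cdf(i + 0.5) - cdf(i - 0.5)
                  for i in range(-radius, radius + 1)])
    return k / k.sum()

sigma = 0.4                      # small relative to the cell size
cc = kernel_cell_center(sigma, 3)
ci = kernel_cell_integrated(sigma, 3)
print(np.round(cc, 4))
print(np.round(ci, 4))
# For sigma comparable to or smaller than a cell, the two schemes give
# visibly different kernels; for sigma >> 1 cell they converge.
```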

  2. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind are systematized. Three types of such algorithms are distinguished: the projection, the mesh, and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. A disadvantage of the mesh algorithms is identified: they require evaluating the kernels of the integral equations at fixed points. In practice, these kernels have integrable singularities, and evaluating them at such points is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
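    A minimal randomized functional algorithm of the kind systematized above can be sketched with a Monte Carlo collision estimator for a Fredholm equation of the second kind. The separable kernel K(x, y) = xy on [0, 1] and f(x) = x are illustrative choices with the known exact solution φ(x) = 1.5x, used here only to check the estimator.

```python
import random

def phi_mc(x0, lam=1.0, f=lambda x: x, K=lambda x, y: x * y,
           n_walks=100000, p_cont=0.5, seed=1):
    # Collision estimator for phi(x) = f(x) + lam * int_0^1 K(x, y) phi(y) dy:
    # phi(x0) = E[ sum_k w_k f(x_k) ] over random walks x0 -> x1 -> ...,
    # each continued with probability p_cont and scored with importance weights.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, w, score = x0, 1.0, f(x0)
        while rng.random() < p_cont:
            y = rng.random()                # next state y ~ U(0, 1)
            w *= lam * K(x, y) / p_cont     # importance weight for this transition
            score += w * f(y)
            x = y
        total += score
    return total / n_walks

# K(x, y) = x*y with f(x) = x has the exact solution phi(x) = 1.5 x
print(phi_mc(0.8))   # ~ 1.2
```

Each walk scores an unbiased sample of the Neumann series, so no mesh of kernel evaluations at fixed points is required.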

  3. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    NASA Astrophysics Data System (ADS)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and the dark channel, and the original clear image is recovered by Wiener filtering with the fast Fourier transform. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes the atmospheric degradation, preserves image detail and improves the quality evaluation indexes.
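    The Wiener-filter recovery step can be sketched in one dimension, with a known Gaussian PSF standing in for the estimated APSF. The signal, PSF width, noise level, and noise-to-signal ratio below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D stand-in for the degraded image: blur a sharp scene with a known PSF, add noise
n = 256
x = np.zeros(n)
x[60:80] = 1.0          # a broad structure
x[150:155] = 2.0        # a fine detail
psf = np.exp(-np.arange(-8, 9) ** 2 / (2 * 3.0 ** 2))
psf /= psf.sum()
H = np.fft.fft(np.pad(psf, (0, n - psf.size)))
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + rng.normal(scale=0.005, size=n)

# Wiener filter in the frequency domain: X_hat = conj(H) Y / (|H|^2 + NSR)
nsr = 1e-3
x_hat = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + nsr)))

err_blur = np.linalg.norm(np.roll(y, -8) - x)   # the PSF peak sits at index 8; undo that shift
err_rest = np.linalg.norm(x_hat - x)
print(err_rest < err_blur)   # restoration is closer to the original than the blurred input
```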

  4. Average capacity of the ground to train communication link of a curved track in the turbulence of gamma-gamma distribution

    NASA Astrophysics Data System (ADS)

    Yang, Yanqiu; Yu, Lin; Zhang, Yixin

    2017-04-01

    A model of the average capacity of an optical wireless communication link with pointing errors for the ground-to-train channel of a curved track is established based on a non-Kolmogorov turbulence spectrum. Adopting the gamma-gamma distribution model, we derive an average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. For a larger average capacity, the strength of atmospheric turbulence, the variance of the pointing errors, and the covered track length need to be reduced, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. The transmit aperture can be enlarged to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander. When the system adopts automatic beam tracking at a receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel reaches its maximum value. The impact of variations in the non-Kolmogorov spectral index on the average capacity of the link can be ignored.
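    The gamma-gamma model used above treats the received irradiance as the product of two independent unit-mean gamma variates (large- and small-scale fading). A sketch evaluating its density by numerical integration follows; the turbulence parameters α = 4 and β = 2 are illustrative, not values from the paper.

```python
import numpy as np
from math import gamma

def trapz(f, x):
    # Simple trapezoid rule (avoids NumPy-version differences in np.trapz)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def gamma_pdf(x, k):
    # Gamma density with shape k and unit mean (scale 1/k)
    return k ** k * x ** (k - 1) * np.exp(-k * x) / gamma(k)

def gamma_gamma_pdf(I, alpha, beta, ymax=15.0, ny=6000):
    # I = X * Y with X ~ Gamma(alpha), Y ~ Gamma(beta), both unit mean:
    # f(I) = int_0^inf f_X(I / y; alpha) f_Y(y; beta) / y dy
    y = np.linspace(1e-4, ymax, ny)
    fy = gamma_pdf(y, beta)
    return np.array([trapz(gamma_pdf(i / y, alpha) * fy / y, y) for i in I])

I = np.linspace(1e-3, 15.0, 1500)
f = gamma_gamma_pdf(I, 4.0, 2.0)
print(trapz(f, I), trapz(I * f, I))   # ~1 (total mass) and ~1 (unit mean irradiance)
```

The closed form of this density involves a modified Bessel function of the second kind; the numerical mixture integral above gives the same curve without special functions.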

  5. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the AQS contents were determined. High-performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  6. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.

  8. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments of 0, 40, 150, 350 or 1000 g kg⁻¹ broken kernel ratios. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, the rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated, with the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  9. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    PubMed

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
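    The mixture representation described above, where the MRL of a mixture is a combination of the kernel MRL functions with time-dependent weights w_i(t) = p_i S_i(t) / S(t), can be verified numerically for a two-component gamma mixture. The weights and gamma parameters below are illustrative, not fitted values from the paper.

```python
import numpy as np
from math import gamma

def gamma_pdf(t, k, theta):
    # Gamma density with shape k and scale theta
    return t ** (k - 1) * np.exp(-t / theta) / (gamma(k) * theta ** k)

t = np.linspace(1e-6, 200.0, 400000)
dt = t[1] - t[0]

def surv_and_mrl(pdf):
    # S(t) = int_t^inf f(u) du,  MRL(t) = int_t^inf S(u) du / S(t)
    S = np.cumsum(pdf[::-1])[::-1] * dt
    A = np.cumsum(S[::-1])[::-1] * dt
    return S, A / S

p = np.array([0.6, 0.4])            # mixture weights
pars = [(2.0, 1.5), (5.0, 3.0)]     # (shape, scale) of each gamma kernel
pdfs = [gamma_pdf(t, k, th) for k, th in pars]
S_mix, mrl_mix = surv_and_mrl(p[0] * pdfs[0] + p[1] * pdfs[1])

# Mixture identity: MRL_mix(t) = sum_i w_i(t) MRL_i(t) with w_i(t) = p_i S_i(t) / S_mix(t)
(S0, m0), (S1, m1) = (surv_and_mrl(f) for f in pdfs)
mrl_id = (p[0] * S0 * m0 + p[1] * S1 * m1) / S_mix

print(mrl_mix[0])   # MRL at t ~ 0 equals the mixture mean, 0.6*3 + 0.4*15 = 7.8
```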

  10. The maximum vector-angular margin classifier and its fast training on large datasets using a core vector machine.

    PubMed

    Hu, Wenjun; Chung, Fu-Lai; Wang, Shitong

    2012-03-01

    Although pattern classification has been extensively studied in the past decades, how to effectively solve the corresponding training on large datasets is a problem that still requires particular attention. Many kernelized classification methods, such as SVM and SVDD, can be formulated as the corresponding quadratic programming (QP) problems, but computing the associated kernel matrices requires O(n²) (or even up to O(n³)) computational complexity, where n is the number of training patterns, which heavily limits the applicability of these methods for large datasets. In this paper, a new classification method called the maximum vector-angular margin classifier (MAMC) is first proposed based on the vector-angular margin to find an optimal vector c in the pattern feature space, and all the testing patterns can be classified in terms of the maximum vector-angular margin ρ, between the vector c and all the training data points. Accordingly, it is proved that the kernelized MAMC can be equivalently formulated as the kernelized Minimum Enclosing Ball (MEB), which leads to a distinctive merit of MAMC, i.e., it has the flexibility of controlling the sum of support vectors like ν-SVC and may be extended to a maximum vector-angular margin core vector machine (MAMCVM) by connecting the core vector machine (CVM) method with MAMC such that the corresponding fast training on large datasets can be effectively achieved. Experimental results on artificial and real datasets are provided to validate the power of the proposed methods. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Using kernel density estimates to investigate lymphatic filariasis in northeast Brazil

    PubMed Central

    Medeiros, Zulma; Bonfim, Cristine; Brandão, Eduardo; Netto, Maria José Evangelista; Vasconcellos, Lucia; Ribeiro, Liany; Portugal, José Luiz

    2012-01-01

    After more than 10 years of the Global Program to Eliminate Lymphatic Filariasis (GPELF) in Brazil, advances have been seen, but the endemic disease persists as a public health problem. The aim of this study was to describe the spatial distribution of lymphatic filariasis in the municipality of Jaboatão dos Guararapes, Pernambuco, Brazil. An epidemiological survey was conducted in the municipality, and positive filariasis cases identified in this survey were georeferenced in point form, using the GPS. A kernel intensity estimator was applied to identify clusters with greater intensity of cases. We examined 23 673 individuals and 323 individuals with microfilaremia were identified, representing a mean prevalence rate of 1.4%. Around 88% of the districts surveyed presented cases of filarial infection, with prevalences of 0–5.6%. The male population was more affected by the infection, with 63.8% of the cases (P<0.005). Positive cases were found in all age groups examined. The kernel intensity estimator identified the areas of greatest intensity and least intensity of filarial infection cases. The case distribution was heterogeneous across the municipality. The kernel estimator identified spatial clusters of cases, thus indicating locations with greater intensity of transmission. The main advantage of this type of analysis lies in its ability to rapidly and easily show areas with the highest concentration of cases, thereby contributing towards planning, monitoring, and surveillance of filariasis elimination actions. Incorporation of geoprocessing and spatial analysis techniques constitutes an important tool for use within the GPELF. PMID:22943547
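    A kernel intensity estimator of the kind applied above can be sketched on synthetic case locations. The coordinates, cluster shape, and bandwidth below are illustrative, not the Jaboatão dos Guararapes data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic case locations: a dense cluster plus scattered background cases
cluster = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(80, 2))
background = rng.uniform(0.0, 10.0, size=(40, 2))
cases = np.vstack([cluster, background])

def kernel_intensity(points, grid, bw=0.5):
    # Gaussian kernel intensity surface: lambda(g) = sum_i K_bw(g - x_i)
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2)).sum(1) / (2 * np.pi * bw ** 2)

gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
lam = kernel_intensity(cases, grid)

hot = grid[np.argmax(lam)]
print(hot)   # the hottest grid cell falls inside the simulated cluster near (2, 2)
```

The intensity surface plays the role of the "areas of greatest intensity" in the study: its peaks mark where cases concentrate.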

  12. Studies of fatty acid composition, physicochemical and thermal properties, and crystallization behavior of mango kernel fats from various Thai varieties.

    PubMed

    Sonwai, Sopark; Ponprachanuvut, Punnee

    2014-01-01

    Mango kernel fat (MKF) has received attention in recent years due to the resemblance between its characteristics and those of cocoa butter (CB). In this work, fatty acid (FA) composition, physicochemical and thermal properties and crystallization behavior of MKFs obtained from four varieties of Thai mangoes: Keaw-Morakot (KM), Keaw-Sawoey (KS), Nam-Dokmai (ND) and Aok-Rong (AR), were characterized. The fat content of the mango kernels was 6.40, 5.78, 5.73 and 7.74% (dry basis) for KM, KS, ND and AR, respectively. The analysis of FA composition revealed that all four cultivars had oleic and stearic acids as the main FA components with ND and AR exhibiting highest and lowest stearic acid content, respectively. ND had the highest slip melting point and solid fat content (SFC) followed by KS, KM and AR. All fat samples exhibited high SFC at 20°C and below. They melted slowly as the temperature increased and became complete liquids as the temperature approached 35°C. During static isothermal crystallization at 20°C, ND displayed the highest Avrami rate constant k followed by KS, KM and AR, indicating that the crystallization was fastest for ND and slowest for AR. The Avrami exponent n of all samples ranged from 0.89 to 1.73. The x-ray diffraction analysis showed that all MKFs crystallized into a mixture of pseudo-β', β', sub-β and β structures with β' being the predominant polymorph. Finally, the crystals of the kernel fats from all mango varieties exhibited spherulitic morphology.
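    The Avrami parameters reported above come from fitting X(t) = 1 − exp(−k tⁿ) to isothermal crystallization data. A sketch recovering k and n by the usual double-log linearization follows; the synthetic, noise-free data and parameter values are illustrative, not measurements from the paper.

```python
import numpy as np

# Synthetic isothermal crystallization data following the Avrami model
k_true, n_true = 0.15, 1.2
t = np.linspace(0.5, 30, 40)
X = 1 - np.exp(-k_true * t ** n_true)       # fraction crystallized

# Linearize: ln(-ln(1 - X)) = ln k + n ln t, then fit by least squares
A = np.column_stack([np.ones_like(t), np.log(t)])
b = np.log(-np.log(1 - X))
(ln_k, n_fit), *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.exp(ln_k), n_fit)   # recovers k ~ 0.15 and n ~ 1.2
```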

  13. Reply to Comments to X. Li and Y. M. Wang (2011) Comparisons of geoid models over Alaska computed with different Stokes' kernel modifications, JGS 1(2): 136-142 by L. E. Sjöberg

    NASA Astrophysics Data System (ADS)

    Wang, Y.

    2012-01-01

    The authors thank professor Sjöberg for having interest in our paper. The main goal of the paper is to test kernel modification methods used in geoid computations. Our tests found that Vanicek/Kleusberg's and Featherstone's methods fit the GPS/leveling data the best in the relative sense at various cap sizes. At the same time, we also pointed out that their methods are unstable and the mean values change from dm to meters by just changing the cap size. By contrast, the modification of the Wong and Gore type (including the spectral combination and the methods of Heck and Grüninger) is stable and insensitive to the truncation degree and cap size. This feature is especially useful when we know the accuracy of the gravity field at different frequency bands. For instance, it is advisable to truncate Stokes' kernel at a degree to which the satellite model is believed to be more accurate than surface data. The method of the Wong and Gore type does this job quite well. In contrast, the low degrees of Stokes' kernel are modified by Molodensky's coefficients tn in Vanicek/Kleusberg's and Featherstone's methods (cf. Eq. (6) in Li and Wang (2011)). It implies that the low degree gravity field of the reference model will be altered by less accurate surface data in the final geoid. This is also the cause of the larger variation in mean values of the geoid.

  14. A distance-driven deconvolution method for CT image-resolution improvement

    NASA Astrophysics Data System (ADS)

    Han, Seokmin; Choi, Kihwan; Yoo, Sang Wook; Yi, Jonghyon

    2016-12-01

    The purpose of this research is to achieve high spatial resolution in CT (computed tomography) images without hardware modification. The main idea is to use a geometric-optics model, which provides an approximate blurring PSF (point spread function) kernel that varies with the distance from the X-ray tube to each point. The FOV (field of view) is divided into several band regions based on the distance from the X-ray source, and each region is deconvolved with a different deconvolution kernel. As the number of subbands increases, the overshoot of the MTF (modulation transfer function) curve first increases; after that, it begins to decrease while still showing a larger MTF than normal FBP (filtered backprojection). The case of five subbands shows balanced performance between MTF boost and overshoot minimization. As the number of subbands increases, the noise (standard deviation) also tends to decrease. The results show that spatial resolution in CT images can be improved without using high-resolution detectors or focal-spot wobbling, and the proposed algorithm improves spatial resolution while avoiding excessive noise boost.

  15. Kernel-based discriminant feature extraction using a representative dataset

    NASA Astrophysics Data System (ADS)

    Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.

    2002-07-01

    Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified by both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training set samples during the feature extraction phase, which can result in significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points that are generated from data-editing techniques and centroid points that are determined by using the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets, and it also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.

  16. The gamma ray continuum spectrum from the galactic center disk and point sources

    NASA Technical Reports Server (NTRS)

    Gehrels, Neil; Tueller, Jack

    1992-01-01

    A light curve of gamma-ray continuum emission from point sources in the galactic center region is generated from balloon and satellite observations made over the past 25 years. The emphasis is on the wide field-of-view instruments which measure the combined flux from all sources within approximately 20 degrees of the center. These data have not been previously used for point-source analyses because of the unknown contribution from diffuse disk emission. In this study, the galactic disk component is estimated from observations made by the Gamma Ray Imaging Spectrometer (GRIS) instrument in Oct. 1988. Surprisingly, there are several times during the past 25 years when all gamma-ray sources (at 100 keV) within about 20 degrees of the galactic center are turned off or are in low emission states. This implies that the sources are all variable and few in number. The continuum gamma-ray emission below approximately 150 keV from the black hole candidate 1E1740.7-2942 is seen to turn off in May 1989 on a time scale of less than two weeks, significantly shorter than ever seen before. With the continuum below 150 keV turned off, the spectral shape derived from the HEXAGONE observation on 22 May 1989 is very peculiar with a peak near 200 keV. This source was probably in its normal state for more than half of all observations since the mid-1960's. There are only two observations (in 1977 and 1979) for which the sum flux from the point sources in the region significantly exceeds that from 1E1740.7-2942 in its normal state.

  17. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each one involves a combination of several elementary or intermediate kernels, and results into a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semisupervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gain, compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
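    A deep multiple kernel in the spirit described above can be sketched by stacking layers that pass conic combinations of elementary kernels through an elementwise nonlinearity. The elementwise exponential is used here because it provably preserves positive semidefiniteness (its power series is a sum of Schur products of PSD matrices); the paper's specific activations and the four weight-learning frameworks are not reproduced, and the weights below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))

def linear_k(X):
    return X @ X.T

def rbf_k(X, s=2.0):
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-d2 / s)

def deep_kernel(X, weights):
    # Each layer maps a conic combination of elementary kernels (plus the
    # previous layer's kernel) through an elementwise exponential, which keeps
    # the result a valid (PSD) kernel; the output is rescaled for stability.
    K = np.zeros((len(X), len(X)))
    elem = [linear_k(X), rbf_k(X)]
    for w1, w2 in weights:
        K = np.exp(w1 * elem[0] + w2 * elem[1] + K) - 1.0
        K /= K.max()
    return K

K = deep_kernel(X, weights=[(0.05, 0.5), (0.1, 0.3)])
print(np.linalg.eigvalsh(K).min())   # non-negative up to rounding: K is PSD
```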

  18. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
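    The R-convolution linear combination idea, summing per-neuron single-train kernels with nonnegative weights, can be sketched as follows. The Gaussian spike-time kernel, the spike times, and the weights are illustrative choices, not the paper's fitted subclasses.

```python
import numpy as np

def single_kernel(s, t, tau=0.02):
    # A common single-neuron spike train kernel:
    # k(s, t) = sum over spike pairs of exp(-(a - b)^2 / (2 tau^2))
    if len(s) == 0 or len(t) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s), np.asarray(t))
    return float(np.exp(-d ** 2 / (2 * tau ** 2)).sum())

def multineuron_kernel(X, Y, w):
    # Linear combination over neurons: K(X, Y) = sum_i w_i k(x_i, y_i);
    # nonnegative weights keep the combination a valid (PSD) kernel.
    return sum(wi * single_kernel(xi, yi) for wi, xi, yi in zip(w, X, Y))

# Two recordings from 3 neurons (spike times in seconds); neuron 3 is silent in Y
X = [[0.01, 0.05, 0.12], [0.03], [0.07, 0.09]]
Y = [[0.012, 0.055], [0.2], []]
w = [1.0, 0.5, 0.5]
print(multineuron_kernel(X, Y, w))
```

Because each per-neuron kernel is PSD and the weights are nonnegative, the combined kernel satisfies the Cauchy-Schwarz inequality K(X,X)K(Y,Y) ≥ K(X,Y)², which the test below checks.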

  19. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity and feasibility of operating a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output and by-product energy to input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started with collecting the data needed to quantify the energy input, such as palm kernel prices, energy demand and factory depreciation. The energy output and by-products comprise the total production value, namely the palm kernel oil price and the prices of the remaining products, such as shells and pulp. The energy equivalence of palm kernel oil was calculated to analyse the EPR value based on processing capacity per year. The investigation was carried out at the Kernel Oil Processing Plant PT-X at the Sumatera Utara plantation. The EPR value was 1.54 (EPR > 1), which indicates that processing palm kernels into palm kernel oil is feasible to operate based on energy productivity.
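    The EPR computation itself is a simple ratio of output to input energy value. All figures in this sketch are illustrative placeholders, not PT-X's actual plant data.

```python
# Energy Productivity Ratio: EPR = (output + by-product value) / input value.
# All numbers below are illustrative placeholders, not measured plant data.
inputs = {"palm_kernel_feedstock": 9000.0, "process_energy": 1800.0, "depreciation": 700.0}
outputs = {"palm_kernel_oil": 14200.0, "shells": 1500.0, "pulp": 2000.0}

epr = sum(outputs.values()) / sum(inputs.values())
print(round(epr, 2))   # EPR > 1 indicates the operation is feasible
```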

  20. Penetrative nature of high energy showers observed in Chacaltaya emulsion chamber

    NASA Technical Reports Server (NTRS)

    Funayama, Y.; Tamada, M.

    1985-01-01

    About 30% of single-core showers with Eγ ≥ 10 TeV have stronger penetrating power than that expected from electromagnetic (e, gamma) showers. On the other hand, the starting points of their cascades in the chamber are found to be as shallow as those of (e, gamma) components. It is suggested that these showers are very collimated bundles of hadron and (e, gamma) components. Otherwise, it must be assumed that the collision mean free path of these showers in the chamber is shorter than the geometrical value for hadrons.

  1. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
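    A diffusion kernel for discrete genotype codes can be sketched as the heat kernel on a complete three-vertex graph per locus (one vertex per genotype 0/1/2), multiplied across loci. The diffusion parameter β and the simulated genotypes below are illustrative assumptions, not the Holstein or wheat data.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.integers(0, 3, size=(40, 100))      # 40 individuals x 100 SNPs coded 0/1/2

def diffusion_kernel(G, beta=0.5):
    # Heat kernel on the complete 3-vertex graph, per locus:
    #   same genotype:      (1 + 2 e^{-3 beta}) / 3
    #   different genotype: (1 -   e^{-3 beta}) / 3
    # multiplied across loci (a Schur product, hence still PSD).
    e = np.exp(-3.0 * beta)
    same, diff = (1 + 2 * e) / 3.0, (1 - e) / 3.0
    eq = G[:, None, :] == G[None, :, :]
    logK = np.where(eq, np.log(same), np.log(diff)).sum(-1)
    K = np.exp(logK)
    return K / K.max()      # rescale: products over many loci are tiny

K = diffusion_kernel(G)
print(K.shape, K[0, 0])   # 40 x 40 Gram matrix with unit diagonal after rescaling
```

This kernel can be dropped into any kernel ridge or Gaussian-process prediction in place of the Gaussian kernel, which is the comparison made in the paper.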

  2. Modulators and inhibitors of gamma- and beta-secretases.

    PubMed

    Schmidt, Boris; Baumann, Stefanie; Narlawar, Rajeshwar; Braun, Hannes A; Larbig, Gregor

    2006-01-01

    Most gene mutations associated with Alzheimer's disease point to the metabolism of amyloid precursor protein as a potential cause. The beta- and gamma-secretases are two executioners of amyloid precursor protein processing resulting in amyloid-beta. Significant progress has been made in the selective inhibition of both proteases, even in the absence of structural information for gamma-secretase. Several peptidic and nonpeptidic leads were identified for both targets. Copyright 2006 S. Karger AG, Basel.

  3. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  4. Critical indices for reversible gamma-alpha phase transformation in metallic cerium

    NASA Astrophysics Data System (ADS)

    Soldatova, E. D.; Tkachenko, T. B.

    1980-08-01

    Critical indices for cerium have been determined within the framework of the pseudobinary solution theory along the phase equilibrium curve, the critical isotherm, and the critical isobar. The results obtained verify the validity of relationships proposed by Rushbrook (1963), Griffiths (1965), and Coopersmith (1968). It is concluded that reversible gamma-alpha transformation in metallic cerium is a critical-type transformation, and cerium has a critical point on the phase diagram similar to the critical point of the liquid-vapor system.

  5. A Search for Early Optical Emission at Gamma-Ray Burst Locations by the Solar Mass Ejection Imager (SMEI)

    NASA Technical Reports Server (NTRS)

    Band, David L.; Buffington, Andrew; Jackson, Bernard V.; Hick, P. Paul; Smith, Aaron C.

    2005-01-01

    The Solar Mass Ejection Imager (SMEI) views nearly every point on the sky once every 102 minutes and can detect point sources as faint as R approx. 10th magnitude. Therefore, SMEI can detect or provide upper limits for the optical afterglow from gamma-ray bursts in the tens of minutes after the burst when different shocked regions may emit optically. Here we provide upper limits for 58 bursts between 2003 February and 2005 April.

  6. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using an SVM, the choice of the kernel function plays a key role. Conventional SVM models mostly use one single type of kernel function, e.g., the radial basis kernel function. Given that several featured kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
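
    The hybrid construction itself is simple to sketch: since any convex combination of positive semidefinite kernels is again a valid kernel, a weighted sum of an RBF and a polynomial kernel can be handed to any SVM solver. The data, mixing weight, and hyperparameters below are placeholders, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))  # toy predictor matrix (e.g., climate inputs)

def rbf_kernel(X, gamma=0.5):
    sq = (X ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

def poly_kernel(X, degree=2, c=1.0):
    return (X @ X.T + c) ** degree

# Hybrid kernel: w * K_rbf + (1 - w) * K_poly, with w in [0, 1].
w = 0.7
K = w * rbf_kernel(X) + (1.0 - w) * poly_kernel(X)

# Positive semidefiniteness is preserved by the convex combination.
min_eig = np.linalg.eigvalsh(K).min()
```

    In practice w (and the kernel hyperparameters) would be tuned by cross-validation against the observed flow series.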

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cartarius, Holger; Moiseyev, Nimrod; Department of Physics and Minerva Center for Nonlinear Physics of Complex Systems, Technion-Israel Institute of Technology, Haifa, 32000

    The unique time signature of the survival probability exactly at the exceptional point parameters is studied here for the hydrogen atom in strong static magnetic and electric fields. We show that indeed the survival probability S(t) = |<psi(0)|psi(t)>|^2 decays exactly as |1 - at|^2 exp(-Gamma_EP t/hbar), where Gamma_EP is associated with the decay rate at the exceptional point and a is a complex constant depending solely on the initial wave packet that populates exclusively the two almost degenerate states of the non-Hermitian Hamiltonian. This may open the possibility for a first experimental detection of exceptional points in a quantum system.
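
    The quoted decay law is easy to evaluate numerically. In this sketch the constants a and Gamma_EP are hypothetical placeholders (with hbar = 1), chosen only to show the non-exponential zero that distinguishes the exceptional-point signature from a plain exponential decay.

```python
import numpy as np

hbar = 1.0
gamma_ep = 0.3   # hypothetical decay rate at the exceptional point
a = 0.5          # hypothetical constant fixed by the initial wave packet

def survival(t):
    """S(t) = |1 - a*t|^2 * exp(-Gamma_EP * t / hbar)."""
    return np.abs(1.0 - a * t) ** 2 * np.exp(-gamma_ep * t / hbar)

t = np.linspace(0.0, 10.0, 1001)
S = survival(t)
# For real a, the polynomial prefactor drives S(t) through an exact zero
# at t = 1/a before the exponential tail takes over.
```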

  8. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
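
    The paper's AKCL algorithm is not reproduced here, but the core idea it relies on — approximating an intractably large kernel matrix from a sampled subset of points — can be sketched with a standard Nyström approximation (data, kernel, and landmark count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))

def rbf(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)  # full n x n kernel matrix: the memory bottleneck

# Nystrom approximation from m sampled landmark points: K ~ C W^+ C^T,
# needing O(n*m) kernel evaluations instead of O(n^2).
m = 40
idx = rng.choice(len(X), size=m, replace=False)
C = rbf(X, X[idx])        # n x m cross-kernel
W = rbf(X[idx], X[idx])   # m x m landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

    Any kernel-space clustering (including competitive learning) can then operate on the factorized form without ever materializing the full matrix.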

  9. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. More than 20 million literature abstracts are included in MEDLINE, the most authoritative textual database in the field of biomedicine, and their number grows exponentially over time. This rapid expansion of the biomedical literature can be difficult to absorb or analyze manually, so efficient and automated search engines based on text mining techniques are necessary to explore it. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two fusions of the feature kernel with the tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively, indicating that the performance of both kernel fusion methods is better than that of a simple kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernel method is better than that of the three separate single-kernel methods and the dual-fused kernel methods used here on five corpus sets.

  10. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  11. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  12. High energy gamma-ray astronomy; Proceedings of the International Conference, ANN Arbor, MI, Oct. 2-5, 1990

    NASA Astrophysics Data System (ADS)

    Matthews, James

    The present volume on high energy gamma-ray astronomy discusses the composition and properties of heavy cosmic rays above 10^12 eV, implications of the IRAS Survey for galactic gamma-ray astronomy, gamma-ray emission from young neutron stars, and high-energy diffuse gamma rays. Attention is given to observations of TeV photons at the Whipple Observatory, TeV gamma rays from millisecond pulsars, recent data from the CYGNUS experiment, and recent results from the Woomera Telescope. Topics addressed include bounds on a possible HE/VHE gamma-ray line signal of Galactic dark matter, albedo gamma rays from cosmic ray interactions on the solar surface, source studies, and the CANGAROO project. Also discussed are neural nets and other methods for maximizing the sensitivity of a low-threshold VHE gamma-ray telescope, a prototype water-Cerenkov air-shower detector, detection of point sources with spark chamber gamma-ray telescopes, and real-time image parameterization in high energy gamma-ray astronomy using transputers. (For individual items see A93-25002 to A93-25039)

  13. Shear Thinning Near the Critical Point of Xenon

    NASA Technical Reports Server (NTRS)

    Zimmerli, Gregory A.; Berg, Robert F.; Moldover, Michael R.; Yao, Minwu

    2008-01-01

    We measured shear thinning, a viscosity decrease ordinarily associated with complex liquids, near the critical point of xenon. The data span a wide range of reduced shear rate: 10^-3 < gamma-dot tau < 700, where gamma-dot tau is the shear rate scaled by the relaxation time tau of critical fluctuations. The measurements had a temperature resolution of 0.01 mK and were conducted in microgravity aboard the Space Shuttle Columbia to avoid the density stratification caused by Earth's gravity. The viscometer measured the drag on a delicate nickel screen as it oscillated in the xenon at amplitudes 3 micrometers < x_0 < 430 micrometers and frequencies 1 Hz < omega/2 pi < 5 Hz. To separate shear thinning from other nonlinearities, we computed the ratio of the viscous force on the screen at gamma-dot tau to the force at gamma-dot tau approximately 0: C_gamma = F(x_0, omega tau, gamma-dot tau)/F(x_0, omega tau, 0). At low frequencies, (omega tau)^2 < gamma-dot tau, C_gamma depends only on gamma-dot tau, as predicted by dynamic critical scaling. At high frequencies, (omega tau)^2 > gamma-dot tau, C_gamma depends also on both x_0 and omega. The data were compared with numerical calculations based on the Carreau-Yasuda relation for complex fluids: eta(gamma-dot)/eta(0) = [1 + A_gamma |gamma-dot tau|]^(-x_eta/(3 + x_eta)), where x_eta = 0.069 is the critical exponent for viscosity and mode-coupling theory predicts A_gamma = 0.121. For xenon we find A_gamma = 0.137 +/- 0.029, in agreement with the mode-coupling value. Remarkably, the xenon data close to the critical temperature T_c were independent of the cooling rate (both above and below T_c), and these data were symmetric about T_c to within a temperature scale factor. The scale factors for the magnitude of the oscillator's response differed from those for the oscillator's phase; this suggests that the surface tension of the two-phase domains affected the drag on the screen below T_c.
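
    The Carreau-Yasuda form quoted above is straightforward to evaluate; the sketch below uses the constants reported in the abstract (x_eta = 0.069 and the mode-coupling amplitude A_gamma = 0.121) over the experiment's range of reduced shear rate. The sampling grid is arbitrary.

```python
import numpy as np

x_eta = 0.069     # critical exponent for viscosity
A_gamma = 0.121   # mode-coupling prediction for the amplitude

def viscosity_ratio(gdt, A=A_gamma):
    """eta(gamma-dot)/eta(0) = [1 + A*|gamma-dot tau|]^(-x_eta/(3 + x_eta))."""
    return (1.0 + A * np.abs(gdt)) ** (-x_eta / (3.0 + x_eta))

gdt = np.logspace(-3, np.log10(700.0), 200)  # reduced shear rate range
ratio = viscosity_ratio(gdt)  # shear thinning: a slow, monotone decrease
```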

  14. Microbial analysis and survey test of gamma-irradiated freeze-dried fruits for patient's food

    NASA Astrophysics Data System (ADS)

    Park, Jae-Nam; Sung, Nak-Yun; Byun, Eui-Hong; Byun, Eui-Baek; Song, Beom-Seok; Kim, Jae-Hun; Lee, Kyung-A.; Son, Eun-Joo; Lyu, Eun-Soon

    2015-06-01

    This study examined the microbiological and organoleptic qualities of gamma-irradiated freeze-dried apples, pears, strawberries, pineapples, and grapes, and evaluated the organoleptic acceptability of the sterilized freeze-dried fruits for hospitalized patients. The freeze-dried fruits were gamma-irradiated at 0, 1, 2, 3, 4, 5, 10, 12, and 15 kGy, and their quality was evaluated. Microorganisms were not detected in apples after 1 kGy, in strawberries and pears after 4 kGy, in pineapples after 5 kGy, and in grapes after 12 kGy of gamma irradiation. The overall acceptance scores of the irradiated freeze-dried fruits on a 7-point scale at the sterilization doses were 5.5, 4.2, 4.0, 4.1, and 5.1 points for apples, strawberries, pears, pineapples, and grapes, respectively. The sensory survey of the hospitalized cancer patients (N=102) resulted in scores of 3.8, 3.7, 3.9, 3.9, and 3.7 on a 5-point scale for the gamma-irradiated freeze-dried apples, strawberries, pears, pineapples, and grapes, respectively. The results suggest that freeze-dried fruits can be sterilized with a dose of 5 kGy, except for grapes, which require a dose of 12 kGy, and that the organoleptic quality of the fruits is acceptable to immuno-compromised patients. However, the microbiological quality and safety of freeze-dried fruits should be verified by plating for both aerobic and anaerobic microorganisms.

  15. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  16. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  17. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  18. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  19. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes Mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Unlike standard k-means, kernel k-means uses kernel learning and is therefore able to handle data that are not linearly separable. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well and considerably better than SOM.
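
    Kernel k-means can be implemented with the kernel trick alone, since the feature-space distance to a cluster centroid needs only kernel evaluations. The sketch below runs it on a synthetic two-blob dataset; the data, RBF kernel, and seeding rule are illustrative assumptions, not the study's diabetes data or protocol.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for a two-class dataset (not the diabetes data).
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(5.0, 0.3, size=(20, 2))])

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_kmeans(K, labels, k=2, n_iter=20):
    """Lloyd iterations in feature space, using only the kernel matrix:
    d(i, c)^2 = K_ii - 2*mean_{j in c} K_ij + mean_{j,l in c} K_jl."""
    for _ in range(n_iter):
        dist = np.empty((len(K), k))
        for c in range(k):
            mask = labels == c
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].mean(axis=1)
                          + K[np.ix_(mask, mask)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

K = rbf(X, X)
# Seed the two clusters from point 0 and the point least similar to it.
far = int(np.argmin(K[0]))
init = (K[:, far] > K[:, 0]).astype(int)
labels = kernel_kmeans(K, init)
```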

  20. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code will be discussed for accessing kernel information. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  1. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to optimize the recognition accuracy and efficiency of maize kernel breakage detection, this paper applies computer vision technology and detects maize kernel breakage with a K-means clustering algorithm. First, the collected RGB images are converted into Lab images, and the clarity of the original images is evaluated with the energy function of the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using different pixel acquisition equipment and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The image clarity evaluation and the different shooting angles verify that the clarity and shooting angle of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
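
    The color-clustering step can be sketched compactly. For brevity this toy version clusters pixel colors directly in RGB (the paper converts to Lab first), on a synthetic mix of "intact" and "broken" kernel colors; all color values and pixel counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic pixel sample: a bright "intact kernel" color vs. a darker
# "broken surface" color (illustrative values on an 8-bit RGB scale).
intact = np.array([220.0, 190.0, 60.0])
broken = np.array([120.0, 90.0, 40.0])
pixels = np.vstack([intact + rng.normal(0.0, 5.0, size=(300, 3)),
                    broken + rng.normal(0.0, 5.0, size=(100, 3))])

def kmeans(P, k=2, n_iter=30):
    """Plain Lloyd's K-means on pixel color vectors."""
    centers = P[[0, len(P) - 1]].copy()  # one seed from each color group
    assign = np.zeros(len(P), dtype=int)
    for _ in range(n_iter):
        d2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for c in range(k):
            centers[c] = P[assign == c].mean(axis=0)
    return assign, centers

assign, centers = kmeans(pixels)
broken_fraction = (assign == assign[-1]).mean()  # estimated breakage rate
```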

  2. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging on a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.

  3. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanut is susceptible to aflatoxin contamination, and both the source of the peanuts and the processing method considerably affect the aflatoxin content of the products. Therefore, a study of the aflatoxin and nutrient contents of peanuts collected from a local market and of their processed foods was performed. Good kernels of peanut were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe and poor kernels were only processed into tempe. The results showed that good and blended kernels, which had a high proportion of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. The increase in aflatoxin B1 in peanut tempe was greatest for tempe prepared from poor kernels, followed by blended kernels and then good kernels. However, it decreased by 61.2% on average after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels as ingredients, followed by heat processing, would decrease the aflatoxin content of the products.

  4. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
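
    The partial map and E-M procedure are the paper's contribution and are not reproduced here. As a hedged sketch of the underlying intuition — deconvolving in the Fourier domain while suppressing frequencies where the blur kernel is unreliable — here is a regularized 1D inverse filter on a noise-free toy signal:

```python
import numpy as np

n = 128
x = np.zeros(n)
x[32] = 1.0          # a sharp spike
x[60:70] = 0.5       # a flat plateau

# Circular Gaussian blur kernel, centered at index 0 for FFT use.
t = np.arange(n)
d = np.minimum(t, n - t)
h = np.exp(-0.5 * (d / 2.0) ** 2)
h /= h.sum()

H = np.fft.fft(h)
y = np.fft.ifft(np.fft.fft(x) * H).real  # blurred observation

# Regularized inverse filter: Fourier entries where |H| is tiny are
# automatically damped, a crude analogue of trusting only the reliable
# Fourier entries of an estimated blur kernel.
eps = 1e-3
x_hat = np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + eps)).real

err_blur = np.linalg.norm(y - x)
err_deconv = np.linalg.norm(x_hat - x)
```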

  5. Technical Note: Dose gradients and prescription isodose in orthovoltage stereotactic radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagerstrom, Jessica M., E-mail: fagerstrom@wisc.edu; Bender, Edward T.; Culberson, Wesley S.

    Purpose: The purpose of this work is to examine the trade-off between prescription isodose and dose gradients in orthovoltage stereotactic radiosurgery. Methods: Point energy deposition kernels (EDKs) describing photon and electron transport were calculated using Monte Carlo methods. EDKs were generated from 10 to 250 keV, in 10 keV increments. The EDKs were converted to pencil beam kernels and used to calculate dose profiles through isocenter from a 4π isotropic delivery from all angles of circularly collimated beams. Monoenergetic beams and an orthovoltage polyenergetic spectrum were analyzed. The dose gradient index (DGI) is the ratio of the 50% prescription isodose volume to the 100% prescription isodose volume and represents a metric by which dose gradients in stereotactic radiosurgery (SRS) may be evaluated. Results: Using the 4π dose profiles calculated using pencil beam kernels, the relationship between DGI and prescription isodose was examined for circular cones ranging from 4 to 18 mm in diameter and monoenergetic photon beams with energies ranging from 20 to 250 keV. Values were found to exist for prescription isodose that optimize DGI. Conclusions: The relationship between DGI and prescription isodose was found to be dependent on both field size and energy. Examining this trade-off is an important consideration for designing optimal SRS systems.
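
    The DGI itself is easy to compute from a dose grid: count the voxels inside the 50% and 100% prescription isodose surfaces and take the ratio. The sketch below uses a synthetic exponential dose model (all numbers illustrative, not clinical) for which the analytic DGI is 2^3 = 8.

```python
import numpy as np

# Synthetic spherically symmetric dose model D(r) = 200*exp(-r), with a
# prescription dose of 100; the values are illustrative, not clinical.
h = 0.1
ax = np.arange(-3.0, 3.0 + h, h)
xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(xx**2 + yy**2 + zz**2)
dose = 200.0 * np.exp(-r)

rx = 100.0
v100 = np.count_nonzero(dose >= rx)        # voxels in the 100% isodose volume
v50 = np.count_nonzero(dose >= 0.5 * rx)   # voxels in the 50% isodose volume
dgi = v50 / v100                           # dose gradient index
# The 50% surface here sits at twice the radius of the 100% surface,
# so the analytic DGI is 2**3 = 8; the voxel counts approximate that.
```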

  6. Appraisal of ALM predictions of turbulent wake features

    NASA Astrophysics Data System (ADS)

    Rocchio, Benedetto; Cilurzo, Lorenzo; Ciri, Umberto; Salvetti, Maria Vittoria; Leonardi, Stefano

    2017-11-01

    Wind turbine blades create a turbulent wake that may persist far downstream, with significant implications on wind farm design and on its power production. The numerical representation of the real blade geometry would lead to simulations beyond the present computational resources. We focus our attention on the Actuator Line Model (ALM), in which the blade is replaced by a rotating line divided into finite segments with representative aerodynamic coefficients. The total aerodynamic force is projected along the computational axis and, to avoid numerical instabilities, it is distributed among the nearest grid points by using a Gaussian regularization kernel. The standard deviation of this kernel is a fundamental parameter that strongly affects the characteristics of the wake. We compare here the wake features obtained in direct numerical simulations of the flow around 2D bodies (a flat plate and an airfoil) modeled using the Immersed Boundary Method with the results of simulations in which the body is modeled by ALM. In particular, we investigate whether the ALM is able to reproduce the mean velocity field and the turbulent kinetic energy in the wake for the considered bodies at low and high angles of attack and how this depends on the choice of the ALM kernel. S. Leonardi was supported by the National Science Foundation, Grant No. 1243482 (the WINDINSPIRE project).
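
    The force-spreading step of the ALM can be sketched directly: the aerodynamic force at an actuator point is distributed over nearby grid nodes with a Gaussian kernel of width ε, and since the kernel integrates to one, the discrete sum should recover the original force. Grid size, spacing, ε, and the force value below are illustrative assumptions.

```python
import numpy as np

h = 0.5         # grid spacing
eps = 2.0 * h   # regularization kernel width (the key ALM parameter)
ax = np.arange(-5.0, 5.0 + h, h)
xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")

def eta(d2, eps):
    """3D Gaussian regularization kernel with unit integral over space."""
    return np.exp(-d2 / eps**2) / (eps**3 * np.pi**1.5)

# Distribute a point force F applied at the origin over the grid nodes.
F = 2.5
d2 = xx**2 + yy**2 + zz**2
f_grid = F * eta(d2, eps)

total = f_grid.sum() * h**3  # discrete integral; should recover F
```

    Choosing ε too small concentrates the force and triggers the numerical instabilities mentioned above; too large a value smears the blade loading and alters the wake, which is the sensitivity the study investigates.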

  7. DANCING IN THE DARK: NEW BROWN DWARF BINARIES FROM KERNEL PHASE INTERFEROMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Benjamin; Tuthill, Peter; Martinache, Frantz, E-mail: bjsp@physics.usyd.edu.au, E-mail: p.tuthill@physics.usyd.edu.au, E-mail: frantz@naoj.org

    2013-04-20

    This paper revisits a sample of ultracool dwarfs in the solar neighborhood previously observed with the Hubble Space Telescope's NICMOS NIC1 instrument. We have applied a novel high angular resolution data analysis technique based on the extraction and fitting of kernel phases to archival data. This was found to deliver a dramatic improvement over earlier analysis methods, permitting a search for companions down to projected separations of ~1 AU on NIC1 snapshot images. We reveal five new close binary candidates and present revised astrometry on previously known binaries, all of which were recovered with the technique. The new candidate binaries have sufficiently close separation to determine dynamical masses in a short-term observing campaign. We also present four marginal detections of objects which may be very close binaries or high-contrast companions. Including only confident detections within 19 pc, we report a binary fraction of at least epsilon_b = 17.2 (+5.7/-3.7)%. The results reported here provide new insights into the population of nearby ultracool binaries, while also offering an incisive case study of the benefits conferred by the kernel phase approach in the recovery of companions within a few resolution elements of the point-spread function core.

  8. Semi-Tomographic Gamma Scanning Technique for Non-Destructive Assay of Radioactive Waste Drums

    NASA Astrophysics Data System (ADS)

    Gu, Weiguo; Rao, Kaiyuan; Wang, Dezhong; Xiong, Jiemei

    2016-12-01

    Segmented gamma scanning (SGS) and tomographic gamma scanning (TGS) are two traditional detection techniques for low- and intermediate-level radioactive waste drums. This paper proposes a detection method named semi-tomographic gamma scanning (STGS) to avoid the poor detection accuracy of SGS and to shorten the detection time of TGS. The method and its algorithm synthesize the principles of SGS and TGS: each segment is divided into annular voxels and tomography is used in the radiation reconstruction. The accuracy of STGS is verified simultaneously by experiments and simulations for 208-liter standard waste drums containing three types of nuclides. Cases with a point source or multiple point sources and with uniform or nonuniform materials are employed for comparison. The results show that STGS exhibits a large improvement in detection performance: compared with SGS, the reconstruction error and statistical bias are reduced by one quarter to one third or less for most cases.

  9. Susceptibility Measurements Near the He-3 Liquid-Gas Critical Point

    NASA Technical Reports Server (NTRS)

    Barmatz, Martin; Zhong, Fang; Hahn, Inseob

    2000-01-01

    An experiment is now being developed to measure both the linear susceptibility and specific heat at constant volume near the liquid-gas critical point of He-3 in a microgravity environment. An electrostriction technique for measuring susceptibility will be described. Initial electrostriction measurements were performed on the ground along the critical isochore in a 0.5 mm high measurement cell filled to within 0.1% of the critical density. These measurements agreed with the susceptibility determined from pressure-density measurements along isotherms. The critical temperature, T_c, determined separately from specific heat and susceptibility measurements was self-consistent. Susceptibility measurements in the range t = T/T_c - 1 > 10^-4 were fit to chi*_T = Gamma^+ t^(-lambda) (1 + Gamma_1^+ t^Delta). Best-fit parameters for the asymptotic amplitude Gamma^+ and the first Wegner amplitude Gamma_1^+ will be presented and compared to previous measurements.
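
    The fitting function quoted above can be sketched numerically. The exponents and amplitudes below are placeholders (the abstract only promises the best-fit values), chosen to show how the Wegner term acts as a correction that vanishes as t approaches 0.

```python
import numpy as np

# chi*_T = Gamma_plus * t**(-lam) * (1 + Gamma1_plus * t**Delta)
lam = 1.19          # placeholder susceptibility exponent
Delta = 0.5         # placeholder Wegner correction exponent
Gamma_plus = 0.2    # placeholder asymptotic amplitude
Gamma1_plus = 1.5   # placeholder first Wegner amplitude

def chi(t):
    return Gamma_plus * t**(-lam) * (1.0 + Gamma1_plus * t**Delta)

t = np.logspace(-4, -1, 50)  # reduced temperature range of the measurements
correction = chi(t) / (Gamma_plus * t**(-lam))  # Wegner factor, -> 1 as t -> 0
```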

  10. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  11. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  12. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  15. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  16. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  17. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  18. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  19. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  20. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which leads to time-consuming computation and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), as well as an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space markedly improves classification accuracy.
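    A minimal sketch of a translation-invariant wavelet kernel of the kind WSVM uses, assuming the Morlet-type mother wavelet h(u) = cos(1.75u) exp(-u^2/2); the paper's best results use a Coiflet wavelet, whose closed form is not reproduced here. The resulting Gram matrix can be passed to any SVM solver that accepts precomputed or callable kernels.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Translation-invariant wavelet kernel built from the Morlet-type
    mother wavelet h(u) = cos(1.75 u) exp(-u^2 / 2):
        K(x, y) = prod_i h((x_i - y_i) / a)
    where the dilation parameter a plays the role of the kernel width."""
    d = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * d) * np.exp(-0.5 * d ** 2), axis=2)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))       # 5 toy "pixels" with 3 spectral bands
K = wavelet_kernel(X, X, a=2.0)   # Gram matrix, ready for an SVM solver
print(K.shape)                    # (5, 5)
```

    By construction K is symmetric and K(x, x) = 1; selecting the dilation a is the kernel parameter selection problem the abstract refers to.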

  1. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on a single metric or kernel, so it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, each seen as a different description of the data. Because MKL-DR does not involve regularization, it can be ill-posed under some conditions, which hinders its application. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method learns both a transformation into a lower-dimensional space and a corresponding kernel from the given base kernels, some of which may not be suitable for the given data. Solutions for the proposed framework are found via trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark text, image, and sound datasets in supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
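    The trace ratio subproblem at the core of such methods can be sketched with the standard iterative algorithm. This is a simplified illustration only: the scatter matrices here are random surrogates, and MKL-TR's alternating kernel-weight updates are omitted.

```python
import numpy as np

def trace_ratio(A, B, d, n_iter=50, tol=1e-12):
    """Maximize tr(V'AV) / tr(V'BV) over orthonormal V (n x d) with the
    standard iteration: given the current ratio lam, take V as the top-d
    eigenvectors of A - lam * B, then update lam."""
    lam = 0.0
    for _ in range(n_iter):
        _, U = np.linalg.eigh(A - lam * B)
        V = U[:, -d:]                 # eigenvectors of the d largest eigenvalues
        lam_new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return V, lam

rng = np.random.default_rng(2)
M = rng.normal(size=(8, 8)); A = M @ M.T              # "between-class" scatter surrogate
N = rng.normal(size=(8, 8)); B = N @ N.T + np.eye(8)  # regularized "within-class" scatter
V, lam = trace_ratio(A, B, d=2)
print(round(lam, 4))
```

    The iteration increases the ratio monotonically, and the regularizing identity added to B plays the same role as the regularization MKL-TR introduces to avoid ill-posedness.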

  2. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information together with distributional semantic vectors representing the semantic information of sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves better F-scores on all five corpora compared with other state-of-the-art systems. PMID:29099838

  3. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women, so it is of great importance to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome prediction. Kernel SVM, with its discriminative power in small-sample pattern recognition problems, has attracted a lot of attention, but how to select or construct an appropriate kernel for a specific problem still requires further investigation. Here we propose a novel kernel (the Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of Area Under the ROC Curve (AUC) values on a number of real-world data sets used to test the performance of the different methods. Hadamard Kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. It would also be a valuable addition to the current families of SVM kernels. We hope it will contribute to the wider biology and related communities.
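    The paper's exact Hadamard Kernel construction is not reproduced here, but the kernel-validity reasoning behind Hadamard (elementwise) operations on Gram matrices can be illustrated with the Schur product theorem: the entrywise product of two valid kernel matrices is again positive semidefinite, so it is itself a usable SVM kernel.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 4))     # toy "expression profiles": 6 samples, 4 genes

# Two valid Gram matrices on the same samples: linear and Gaussian (RBF).
K_lin = X @ X.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
K_rbf = np.exp(-0.5 * sq)

# Schur product theorem: the entrywise (Hadamard) product of two PSD
# matrices is PSD, so K_had is again a valid kernel matrix.
K_had = K_lin * K_rbf
eigvals = np.linalg.eigvalsh(K_had)
print(eigvals.min() >= -1e-10)   # True
```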

  4. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel based approaches such as linear kernel, tree kernel, graph kernel and combination of multiple kernels has achieved promising results in PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vectors information known as Distributed Smoothed Tree kernel (DSTK). DSTK comprises of distributed trees with syntactic information along with distributional semantic vectors representing semantic information of the sentences or phrases. To generate robust machine learning model composition of feature based kernel and DSTK were combined using ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves better f-score with five different corpora compared to other state-of-the-art systems.

  5. SMM/HXRBS observations of Cygnus X-1 from 1986 December to 1988 April

    NASA Technical Reports Server (NTRS)

    Schwartz, R. A.; Orwig, L. E.; Dennis, B. R.; Ling, J. C.; Wheaton, W. A.

    1991-01-01

    The Solar Maximum Mission's Hard X-ray Burst Spectrometer made 30 measurements of Cygnus X-1 from December, 1986 to April, 1988, yielding a data set of broad synoptic coverage but limited duration for each data point. The hard X-ray intensity was found to be between the gamma(2) and gamma(3) levels, with a range of fluctuations about the average intensity level. The shape of the photon spectrum was found to be closest to that reported by Ling et al. (1983, 1987) during the time of the gamma(3) level emission, although the spectral shapes reported for the gamma(2) and gamma(1) levels were not precluded.

  6. Fermi-Lat Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center

    NASA Technical Reports Server (NTRS)

    Ajello, M.; Albert, A.; Atwood, W.B.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Brandt, T. J.; hide

    2016-01-01

    The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy gamma-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100 GeV from a 15 degrees x 15 degrees region about the direction of the GC. Specialized interstellar emission models (IEMs) are constructed to enable the separation of the gamma-ray emission produced by cosmic-ray particles interacting with the interstellar gas and radiation fields in the Milky Way into that from the inner 1 kpc surrounding the GC and that from the rest of the Galaxy. A catalog of point sources for the 15 degrees x 15 degrees region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy Point Source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented and compared with gamma-ray point sources over the same region taken from existing catalogs. After subtracting the interstellar emission and point-source contributions, a residual is found. If templates that peak toward the GC are used to model the positive residual, the agreement with the data improves, but none of the additional templates tried account for all of its spatial structure. The spectrum of the positive residual modeled with these templates has a strong dependence on the choice of IEM.

  7. Magnetically separable {gamma}-Fe{sub 2}O{sub 3}-SiO{sub 2}-Ce-doped TiO{sub 2} core-shell nanocomposites: Fabrication and visible-light-driven photocatalytic activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Minqiang, E-mail: jbmwgkc@126.com; Li, Di; Jiang, Deli

    2012-08-15

    Novel visible-light-induced {gamma}-Fe{sub 2}O{sub 3}-SiO{sub 2}-Ce-doped-TiO{sub 2} core-shell nanocomposite photocatalysts capable of magnetic separation have been synthesized by a facile sol-gel and after-annealing process. The as-obtained core-shell nanocomposite is composed of a central {gamma}-Fe{sub 2}O{sub 3} core with a strong response to external fields, an interlayer of SiO{sub 2}, and an outer layer of Ce-doped TiO{sub 2} nanocrystals. UV-vis spectral analysis indicates that Ce doping results in a red-shift of the absorption edge, thus increasing visible light absorption. We show that such a {gamma}-Fe{sub 2}O{sub 3}-SiO{sub 2}-Ce-doped-TiO{sub 2} core-shell nanocomposite with an appropriate Ce doping amount exhibits much higher visible-light photocatalytic activity than bare TiO{sub 2} and the undoped {gamma}-Fe{sub 2}O{sub 3}-SiO{sub 2}-TiO{sub 2} core-shell nanocomposite toward the degradation of rhodamine B (RhB). Moreover, the photocatalysts can be easily separated from the treated water and reused by applying an external magnetic field. Graphical abstract: Novel {gamma}-Fe{sub 2}O{sub 3}-SiO{sub 2}-Ce-doped-TiO{sub 2} core/shell nanocomposite photocatalysts with enhanced photocatalytic activity and fast magnetic separability were prepared. Highlights: Novel {gamma}-Fe{sub 2}O{sub 3}-SiO{sub 2}-Ce-doped TiO{sub 2} core/shell composite photocatalysts were prepared. The resulting core/shell composites show high visible-light photocatalytic activity. The nanocomposite photocatalysts can be easily recycled with excellent durability.

  8. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z in terms of an almost-analytic extension φ(x,y) of φ(x) = φ(x,x), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain an analogous asymptotic expansion for the Berezin transform and give applications to Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on Berezin-Toeplitz quantization.

  9. Elliptic polylogarithms and iterated integrals on elliptic curves. II. An application to the sunrise integral

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo

    2018-06-01

    We introduce a class of iterated integrals that generalize multiple polylogarithms to elliptic curves. These elliptic multiple polylogarithms are closely related to similar functions defined in pure mathematics and string theory. We then focus on the equal-mass and non-equal-mass sunrise integrals, and we develop a formalism that enables us to compute these Feynman integrals in terms of our iterated integrals on elliptic curves. The key idea is to use integration-by-parts identities to identify a set of integral kernels, whose precise form is determined by the branch points of the integral in question. These kernels allow us to express all iterated integrals on an elliptic curve in terms of them. The flexibility of our approach leads us to expect that it will be applicable to a large variety of integrals in high-energy physics.

  10. Scramjet Nozzles

    DTIC Science & Technology

    2010-09-01

    and y, the axial and radial coordinates respectively. Point c lies somewhere within the mesh generated by the initial expansion (the kernel). All that...and the surface will be subjected to high heat loads restricting the choice of suitable materials. Material choice has direct implications for...Some legacy trajectory codes might not be able to deal with anything other than axial forces from engines, reflecting the class of problem they were

  11. Pregnancy IFN-gamma responses to foetal alloantigens are altered by maternal allergy and gravidity status.

    PubMed

    Breckler, L A; Hale, J; Taylor, A; Dunstan, J A; Thornton, C A; Prescott, S L

    2008-11-01

    During pregnancy, variations in maternal-foetal cellular interactions may influence immune programming. This study was carried out to determine if maternal responses to foetal alloantigens are altered by maternal allergic disease and/or previous pregnancies. For this cohort study, peripheral blood was collected from allergic (n = 69) and nonallergic (n = 63) pregnant women at 20, 30, and 36-week gestation and at 6-week postpartum (pp). Cord blood was collected at delivery. Mixed lymphocyte reactions were used to measure maternal cytokine responses [interleukin-6 (IL-6), IL-10, IL-13, and interferon-gamma (IFN-gamma)] at each time point towards foetal mononuclear cells. Maternal cytokine responses during pregnancy (20, 30 and 36 weeks) were suppressed compared to the responses at 6-week pp. The ratios of maternal IFN-gamma/IL-13 and IFN-gamma/IL-10 responses were lower during pregnancy. Allergic mothers had lower IFN-gamma responses at each time point during pregnancy, with the greatest difference observed at 36-week gestation. When allergic and nonallergic women were further stratified by gravidity group, IFN-gamma responses of allergic multigravid mothers were significantly lower than those of nonallergic multigravid mothers during pregnancy. During normal pregnancy, peripheral T-cell cytokine responses to foetal alloantigens may be altered by both the allergic status of the mother and previous pregnancies. These factors could influence the cytokine milieu experienced by the foetus and will be further explored in the development of allergic disease during early life.

  12. Ischemia episode detection in ECG using kernel density estimation, support vector machine and feature selection

    PubMed Central

    2012-01-01

    Background Myocardial ischemia can develop into more serious disease. Detecting the ischemic syndrome in the electrocardiogram (ECG) early, accurately, and automatically can prevent it from progressing to a catastrophic condition. To this end, we propose a new method that employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, comprising 367 ischemic ST episodes in 90 records. We first remove baseline wandering and detect the time positions of QRS complexes with a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features for differentiating ST episodes from normal beats: 1) the area between the QRS offset and T-peak points, 2) the normalized and signed sum from the QRS offset to the effective zero-voltage point, and 3) the slope from the QRS onset to the offset point. We average the feature values over successive runs of five beats to reduce the effect of outliers. Finally, we apply classifiers to these features. Results We evaluated the algorithm with kernel density estimation (KDE) and support vector machine (SVM) classifiers. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively; the KDE classifier detects 349 of the total 367 ischemic ST episodes. Sensitivity and specificity for SVM were 0.941 and 0.923, respectively; the SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It combines signal processing techniques (removing baseline wandering and detecting the time positions of QRS complexes by the discrete wavelet transform) with explicit feature extraction from the morphology of the ECG waveforms. The selected features were shown to be sufficient to discriminate ischemic ST episodes from normal ones. We also showed that the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any parameter values to be supplied in advance; for the SVM classifier, a single parameter has to be selected. PMID:22703641
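    A minimal single-feature sketch of the KDE classification step, assuming Gaussian kernels with Silverman's rule-of-thumb bandwidth as the automatic selection; the paper uses three morphological features and its own bandwidth-selection procedure, so everything below is illustrative.

```python
import numpy as np

def silverman_bw(x):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE."""
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def kde(queries, sample, h):
    """Gaussian kernel density estimate evaluated at each query point."""
    u = (queries[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
normal_beats = rng.normal(0.0, 1.0, 200)    # toy feature values, normal beats
ischemic_beats = rng.normal(4.0, 1.0, 200)  # toy feature values, ST episodes

# Bandwidths are chosen automatically from the data, per class.
h_n, h_i = silverman_bw(normal_beats), silverman_bw(ischemic_beats)

queries = np.array([-0.5, 4.5])
# Classify by the larger class-conditional density (equal priors assumed).
is_ischemic = kde(queries, ischemic_beats, h_i) > kde(queries, normal_beats, h_n)
print(is_ischemic)
```

    The automatic bandwidth selection is what makes the KDE classifier parameter-free from the user's point of view, as the abstract notes.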

  13. Fermi Establishes Classical Novae as a Distinct Class of Gamma-ray Sources

    NASA Technical Reports Server (NTRS)

    Ackermann, M.; Ajello, M.; Albert, A.; Baldini, L.; Ballet, J.; Bastieri, D.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; hide

    2014-01-01

    A classical nova results from runaway thermonuclear explosions on the surface of a white dwarf that accretes matter from a low-mass main-sequence stellar companion. In 2012 and 2013, three novae were detected in gamma rays and stood in contrast to the first gamma-ray detected nova V407 Cygni 2010, which belongs to a rare class of symbiotic binary systems. Despite likely differences in the compositions and masses of their white dwarf progenitors, the three classical novae are similarly characterized as soft spectrum transient gamma-ray sources detected over 2-3 week durations. The gamma-ray detections point to unexpected high-energy particle acceleration processes linked to the mass ejection from thermonuclear explosions in an unanticipated class of Galactic gamma-ray sources.

  14. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified by the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, provides exactly 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, in contrast to the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques before it can be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram-based mismatch kernels, the hidden Markov model-based SAM and Fisher kernels, and the protein family-based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. The LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
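    A simplified sketch of the code-word extraction behind such a kernel, paired with an illustrative overlap score (the published LZW-Kernel formula differs in detail) that reproduces the properties listed above: symmetry, positivity, and self-similarity of exactly 1.0.

```python
def lzw_codewords(s):
    """One-pass LZW dictionary construction: returns the set of code words
    (phrases) the compressor accumulates while scanning s. Simplified to
    seed the dictionary with the characters present in s."""
    dictionary = {c for c in s}
    w = ""
    for c in s:
        if w + c in dictionary:
            w += c
        else:
            dictionary.add(w + c)
            w = c
    return dictionary

def lzw_similarity(a, b):
    """Illustrative code-word overlap score (cosine of set indicators):
    symmetric, positive, and exactly 1.0 for self-similarity."""
    A, B = lzw_codewords(a), lzw_codewords(b)
    return len(A & B) / (len(A) * len(B)) ** 0.5

s1 = "GATTACAGATTACA"
s2 = "GATTACATTAGGAT"
print(round(lzw_similarity(s1, s2), 3))
```

    Like the LZW-Kernel itself, the dictionary construction is a single pass over each sequence, which is what makes this family of measures fast on large corpora.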

  15. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel function in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the most suitable manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data, including brain manifolds and multispectral images, to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
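    The embedding step behind such studies can be sketched as standard kernel PCA on a precomputed Gram matrix; the paper's candidate-kernel pool and automatic selection measures are not reproduced here, but the sketch shows why different kernels yield different embeddings of the same data.

```python
import numpy as np

def kernel_pca(K, d):
    """Embed samples in d dimensions from a precomputed Gram matrix K:
    double-center K, then project onto its leading eigenvectors scaled by
    the square roots of their eigenvalues."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                       # centering in feature space
    w, U = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:d]        # indices of the d largest eigenvalues
    return U[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 10))            # toy image features: 30 samples

# Two candidate kernels for the same data; which one is adequate is
# data-dependent, which is why a selection step is needed at all.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
K_rbf = np.exp(-sq / sq.mean())              # Gaussian kernel, width from data
K_poly = (X @ X.T / X.shape[1] + 1.0) ** 2   # quadratic polynomial kernel

emb_rbf = kernel_pca(K_rbf, d=2)
emb_poly = kernel_pca(K_poly, d=2)
print(emb_rbf.shape, emb_poly.shape)
```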

  16. Gamma scintigraphic study of the hydrodynamically balanced matrix tablets of Metformin HCl in rabbits

    PubMed Central

    Razavi, Mahboubeh; Karimian, Hamed; Yeong, Chai Hong; Sarji, Sazilah Ahmad; Chung, Lip Yong; Nyamathulla, Shaik; Noordin, Mohamed Ibrahim

    2015-01-01

    The purpose of this study is to evaluate the in vitro and in vivo performance of gastro-retentive matrix tablets having Metformin HCl as model drug and combination of natural polymers. A total of 16 formulations were prepared by a wet granulation method using xanthan, tamarind seed powder, tamarind kernel powder and salep as the gel-forming agents and sodium bicarbonate as a gas-forming agent. All the formulations were evaluated for compendial and non-compendial tests and in vitro study was carried out on a USP-II dissolution apparatus at a paddle speed of 50 rpm. MOX2 formulation, composed of salep and xanthan in the ratio of 4:1 with 96.9% release, was considered as the optimum formulation with more than 90% release in 12 hours and short floating lag time. In vivo study was carried out using gamma scintigraphy in New Zealand White rabbits, optimized formulation was incorporated with 10 mg of 153Sm for labeling MOX2 formulation. The radioactive samarium oxide was used as the marker to trace transit of the tablets in the gastrointestinal tract. The in vivo data also supported retention of MOX2 formulation in the gastric region for 12 hours and were different from the control formulation without a gas and gel forming agent. It was concluded that the prepared floating gastro-retentive matrix tablets had a sustained-release effect in vitro and in vivo, gamma scintigraphy played an important role in locating the oral transit and the drug-release pattern. PMID:26124637

  17. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  18. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
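    A composite kernel in the sense described, a convex combination of candidate Gram matrices, stays a valid (positive semidefinite) kernel for any nonnegative weights and can be sketched directly. The genotype data and candidate kernels below are toy stand-ins, and the paper's perturbation procedures for p-value calibration are omitted.

```python
import numpy as np

def composite_kernel(kernels, weights):
    """Convex combination of candidate Gram matrices. Nonnegative weights
    keep the combination a valid (PSD) kernel, so the composite can be fed
    to the kernel machine test like any single kernel."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0):
        raise ValueError("kernel weights must be nonnegative")
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))

rng = np.random.default_rng(6)
G = rng.normal(size=(10, 5))                  # toy genotypes: 10 subjects, 5 SNPs
K_lin = G @ G.T                               # linear kernel
K_quad = (G @ G.T / G.shape[1] + 1.0) ** 2    # quadratic polynomial kernel

K = composite_kernel([K_lin, K_quad], [0.7, 0.3])
print(np.linalg.eigvalsh(K).min() >= -1e-8)   # True: PSD is preserved
```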

  19. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enable diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both the brain and bone images were assigned the CT values of the bone images, while all other pixels were assigned the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and brain images were identical when displayed on brain window settings. All three radiologists agreed that the improved multi-kernel images on bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer stored images can be expected.
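    The combination rule described in the abstract is a simple per-pixel selection and can be sketched directly; the Hounsfield-unit values below are illustrative only.

```python
import numpy as np

def combine_head_ct(brain_img, bone_img, threshold_hu=100.0):
    """Combination rule from the abstract: pixels above the threshold in
    BOTH reconstructions take the bone-kernel CT value; all other pixels
    take the brain-kernel CT value."""
    use_bone = (brain_img > threshold_hu) & (bone_img > threshold_hu)
    return np.where(use_bone, bone_img, brain_img)

# Toy 2x2 "images" in Hounsfield units (illustrative values only).
brain = np.array([[40.0, 35.0], [900.0, 120.0]])
bone = np.array([[60.0, 1200.0], [1500.0, 90.0]])
combined = combine_head_ct(brain, bone)
print(combined)   # [[40, 35], [1500, 120]]
```

    Requiring the threshold in both images is what suppresses artifacts where only one reconstruction exceeds 100 HU (e.g. noise in the high-pass bone image over soft tissue).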

  20. A novel method for quantitative geosteering using azimuthal gamma-ray logging.

    PubMed

    Yuan, Chao; Zhou, Cancan; Zhang, Feng; Hu, Song; Li, Chaoliu

    2015-02-01

    A novel method for quantitative geosteering using azimuthal gamma-ray logging is proposed. Real-time up- and bottom-side gamma-ray logs recorded as a logging tool crosses a boundary surface at different relative dip angles are simulated with the Monte Carlo method. The results show that the response points of the up and bottom gamma-ray logs as the tool approaches a highly radioactive formation can be used to predict the relative dip angle, from which the distance from the drill bit to the boundary surface is then calculated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  2. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  3. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  4. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID:29028902

  5. Aflatoxin variability in pistachios.

    PubMed Central

    Mahoney, N E; Rodriguez, S B

    1996-01-01

    Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781

  6. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
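As a minimal illustration of the baseline label-histogram kernel mentioned above (a numpy sketch, not the graphkernels API), each graph is reduced to the histogram of its node labels and the kernel value is the dot product of two histograms:

```python
import numpy as np

def label_histogram_kernel(graphs, n_labels):
    """Gram matrix of the label-histogram baseline kernel: each graph is
    summarized by the histogram of its node labels, and K[i, j] is the
    dot product of the histograms of graphs i and j."""
    H = np.zeros((len(graphs), n_labels))
    for i, labels in enumerate(graphs):
        for lab in labels:
            H[i, lab] += 1
    return H @ H.T
```

The resulting Gram matrix could then be fed to any kernel method (SVM, kernel ridge regression, spectral clustering), as the abstract describes for the package's kernels.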

  7. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). 
For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
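The density-scaling approximation examined above can be sketched in one dimension: the water kernel is evaluated at the radiological (density-weighted) distance between the interaction site and the deposition site. This toy sketch scales only the kernel argument and omits the amplitude rescaling and 3-D ray tracing of a real collapsed-cone implementation; all names are hypothetical.

```python
import numpy as np

def density_scaled_dose_1d(terma, density, water_kernel, dx):
    """1D convolution/superposition sketch: dose at x is the sum over
    interaction sites x' of TERMA(x') times the water kernel evaluated at
    the radiological path length between x' and x (density scaling)."""
    n = len(terma)
    dose = np.zeros(n)
    for i in range(n):           # interaction site
        for j in range(n):       # deposition site
            lo, hi = min(i, j), max(i, j)
            r_rad = np.sum(density[lo:hi]) * dx   # radiological distance
            dose[j] += terma[i] * water_kernel(r_rad) * dx
    return dose
```

Doubling the density everywhere is then equivalent, in this sketch, to stretching the kernel argument by a factor of two, which is exactly the approximation the material-specific kernels in the study are meant to replace.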

  8. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
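On a discrete mesh or graph, the heat-kernel smoothing described above reduces to damping each Laplacian eigencomponent by exp(-lambda*t). A minimal sketch with a graph Laplacian standing in for the Laplace-Beltrami operator (the surface setting of the paper):

```python
import numpy as np

def heat_kernel_smooth(L, f, t):
    """Heat-kernel smoothing of a signal f: expand f in the eigenfunctions
    of the Laplacian L and damp each coefficient by exp(-lambda * t),
    the weighted eigenfunction expansion described in the abstract."""
    lam, Psi = np.linalg.eigh(L)        # eigenvalues and eigenfunctions
    coeffs = Psi.T @ f                  # expansion coefficients of f
    return Psi @ (np.exp(-lam * t) * coeffs)
```

At t = 0 the expansion reproduces f exactly; increasing t increases the effective smoothing bandwidth, which is the diffusion-time analogue of a kernel bandwidth.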

  9. Unraveling multiple changes in complex climate time series using Bayesian inference

    NASA Astrophysics Data System (ADS)

    Berner, Nadine; Trauth, Martin H.; Holschneider, Matthias

    2016-04-01

    Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of observations. Unraveling such transitions yields essential information for the understanding of the observed system. The precise detection and basic characterization of underlying changes is therefore of particular importance in environmental sciences. We present a kernel-based Bayesian inference approach to investigate direct as well as indirect climate observations for multiple generic transition events. To develop a diagnostic approach that captures a variety of natural processes, the basic statistical features of central tendency and dispersion are used to locally approximate a complex time series by a generic transition model. A Bayesian inversion approach is developed to robustly infer the location and the generic patterns of such a transition. To systematically investigate time series for multiple changes occurring at different temporal scales, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the kernel inference results are combined into a proxy for the posterior distribution of multiple transitions. Thus, based on a generic transition model, a probability expression is derived that is capable of indicating multiple changes within a complex time series. We discuss the method's performance by investigating direct and indirect climate observations. The approach is applied to an environmental time series (about 100 years) from the weather station in Tuscaloosa, Alabama, and confirms documented instrumentation changes. Moreover, the approach is used to investigate a set of complex terrigenous dust records from the ODP sites 659, 721/722 and 967, interpreted as climate indicators for the African region during the Plio-Pleistocene (about 5 Ma).
The detailed inference unravels multiple transitions underlying the indirect climate observations coinciding with established global climate events.
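For a single transition in central tendency, scoring candidate change locations can be sketched with a two-segment Gaussian profile likelihood under a flat prior. This is a deliberately simplified stand-in for the generic transition model and the kernel-based extension described above.

```python
import numpy as np

def changepoint_log_posterior(x, min_seg=5):
    """Log posterior (up to a constant, flat prior) over the location k of a
    single mean/variance transition: each candidate split is scored by the
    profile Gaussian log-likelihood of the two segments."""
    n = len(x)
    ks = np.arange(min_seg, n - min_seg)
    logpost = np.empty(len(ks))
    for idx, k in enumerate(ks):
        a, b = x[:k], x[k:]
        logpost[idx] = (-0.5 * len(a) * np.log(np.var(a))
                        - 0.5 * len(b) * np.log(np.var(b)))
    return ks, logpost
```

The posterior peaks where splitting the series minimizes the within-segment variances, i.e. at the transition; the paper's kernel measures aggregate such local inferences across window sizes.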

  10. Features and flaws of a contact interaction treatment of the kaon

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Chang, Lei; Roberts, Craig D.; Schmidt, Sebastian M.; Wan, Shaolong; Wilson, David J.

    2013-04-01

    Elastic and semileptonic transition form factors for the kaon and pion are calculated using the leading order in a global-symmetry-preserving truncation of the Dyson-Schwinger equations and a momentum-independent form for the associated kernels in the gap and Bethe-Salpeter equations. The computed form factors are compared both with those obtained using the same truncation but an interaction that preserves the one-loop renormalization-group behavior of QCD and with data. The comparisons show that in connection with observables revealed by probes with |Q²| ≲ M², where M ≈ 0.4 GeV is an infrared value of the dressed-quark mass, results obtained using a symmetry-preserving regularization of the contact interaction are not realistically distinguishable from those produced by more sophisticated kernels, and available data on kaon form factors do not extend into the domain whereupon one could distinguish among the interactions. The situation differs if one includes the domain Q² > M². Thereupon, a fully consistent treatment of the contact interaction produces form factors that are typically harder than those obtained with QCD renormalization-group-improved kernels. Among other things also described are a Ward identity for the inhomogeneous scalar vertex, the similarity between the charge distribution of a dressed u quark in the K⁺ and that of the dressed u quark in the π⁺, and reflections upon the point whereat one might begin to see perturbative behavior in the pion form factor. Interpolations of the form factors are provided, which should assist in working to chart the interaction between light quarks by explicating the impact on hadron properties of differing assumptions about the behavior of the Bethe-Salpeter kernel.

  11. A point kernel algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan

    2017-11-01

    Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy in which the treatment field is spatially fractionated into arrays of planar beams a few tens of micrometres wide, with unusually high peak doses, separated by low-dose regions several hundred micrometres wide. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, have been based on Monte Carlo simulations. However, they are computationally expensive, since the scoring volumes have to be small. In this article, a kernel-based dose calculation algorithm is presented that splits the calculation into photon- and electron-mediated energy transport and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are calculated analytically depending on the energy spectrum and material composition. In various homogeneous materials, peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.
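The point-kernel ansatz underlying such algorithms (and the skyshine calculations elsewhere in these records) estimates the dose rate from an isotropic point source as the uncollided inverse-square term multiplied by a buildup factor for scattered radiation. The linear buildup form below is a textbook simplification used for illustration, not the MRT algorithm's kernel:

```python
import numpy as np

def point_kernel_dose_rate(S, mu, r, a=1.0):
    """Classic point-kernel estimate for an isotropic point source of
    strength S: uncollided term S*exp(-mu*r)/(4*pi*r**2) times a buildup
    factor B(mu*r) accounting for scattered radiation. A simple linear
    buildup B = 1 + a*mu*r is assumed here."""
    mfp = mu * r                      # shield thickness in mean free paths
    buildup = 1.0 + a * mfp
    return S * buildup * np.exp(-mfp) / (4.0 * np.pi * r**2)
```

Setting a = 0 recovers the unbuilt (uncollided) kernel, so the ratio of the two evaluations is the buildup factor itself.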

  12. The Crash Intensity Evaluation Using General Centrality Criterions and a Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Ghadiriyan Arani, M.; Pahlavani, P.; Effati, M.; Noori Alamooti, F.

    2017-09-01

    Road traffic crashes, especially on highways, are a social problem affecting the lives of many people. This paper focuses on the highways of Atlanta, the capital and most populous city of the U.S. state of Georgia and the ninth-largest metropolitan area in the United States. Geographically weighted regression and general centrality criteria are the traffic-analysis tools used in this article. In the first step, to estimate crash intensity, the dual graph is extracted from the street and highway network so that general centrality criteria can be computed. With the help of the graph produced, the criteria are: degree, PageRank, random walk, eccentricity, closeness, betweenness, clustering coefficient, eigenvector, and straightness. The crash intensity is computed for every highway by dividing the number of crashes on that highway by the total number of crashes. The criteria and crash intensities were then normalized, and the correlation between them was calculated to determine the criteria that are not dependent on each other. The proposed hybrid approach works well for regression problems because these effective measures lead to a more desirable output. The R² value for geographically weighted regression was 0.539 using the Gaussian kernel and 0.684 using a tricube kernel. The results showed that the tricube kernel is better for modeling the crash intensity.
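A geographically weighted regression fit at a single location reduces to weighted least squares with distance-decay weights. The Gaussian kernel form below, w = exp(-(d/h)^2), is one common choice; the data layout and function name are assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

def gwr_fit_at_point(X, y, coords, point, bandwidth):
    """Geographically weighted regression at one location: observations are
    weighted by a Gaussian kernel of their distance to the point, then a
    weighted least-squares system is solved for the local coefficients."""
    d = np.linalg.norm(coords - point, axis=1)   # distances to the fit point
    w = np.exp(-(d / bandwidth) ** 2)            # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Repeating the fit at every location yields spatially varying coefficients; swapping the weight function for a tricube kernel changes only the line computing w.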

  13. Inhibitors and modulators of beta- and gamma-secretase.

    PubMed

    Schmidt, Boris; Baumann, Stefanie; Braun, Hannes A; Larbig, Gregor

    2006-01-01

    Most gene mutations associated with Alzheimer's disease point to the metabolism of amyloid precursor protein as a potential cause. The beta- and gamma-secretases are two executioners of amyloid precursor protein processing, which results in amyloid beta. Significant progress has been made in the selective inhibition of both proteases, even in the absence of structural information for gamma-secretase. Several peptidic and non-peptidic leads were identified, and the first drug candidates are in clinical trials. This review focuses on developments since 2003.

  14. The role of antimatter in big-bang cosmology

    NASA Technical Reports Server (NTRS)

    Stecker, F. W.

    1973-01-01

    Big-bang cosmology is discussed with reference to both its strong points and its gaps. Characteristics of a spectral component of redshifted gamma radiation from cosmological matter-antimatter annihilation show a flattening of the gamma-ray spectrum in the vicinity of 1 MeV, an increased gamma-ray flux between 1 and 100 MeV, and a very steep spectrum between 50 and 135 MeV. These data fit the theoretical predictions well in energy and intensity.

  15. SKYSHINEIII. Calculating Effects of Structure Design on Neutron Dose Rates in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lampley, C.M.; Andrews, C.M.; Wells, M.B.

    1988-12-01

    SKYSHINE was designed to aid in the evaluation of the effects of structure geometry on the gamma-ray dose rate at given detector positions outside of a building housing gamma-ray sources. The program considers a rectangular structure enclosed by four walls and a roof. Each of the walls and the roof of the building may be subdivided into up to nine different areas, representing different materials or different thicknesses of the same material for those portions of the wall or roof. Basic sets of iron and concrete slab transmission and reflection data for 6.2 MeV gamma-rays are part of the SKYSHINE block data. These data, as well as parametric air transport data for line-beam sources at a number of energies between 0.6 MeV and 6.2 MeV and ranges to 3750 ft, are used to estimate the various components of the gamma-ray dose rate at positions outside of the building. The gamma-ray source is assumed to be a 6.2 MeV point-isotropic source. SKYSHINE-III provides an increase in versatility over the original SKYSHINE code in that it addresses both neutron and gamma-ray point sources. In addition, the emitted radiation may be characterized by an energy emission spectrum defined by the user. A new SKYSHINE data base is also included.

  16. Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco

    2016-08-01

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% of the extragalactic gamma-ray background between 1.04 and 1.99 GeV, and 81 (+52/-19)% between 50 and 171 GeV. The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
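The broken power law fitted above can be written with the break flux as the pivot, so that the two branches join continuously there; the normalization A is arbitrary in this sketch:

```python
import numpy as np

def broken_power_law(S, S_b, n1, n2, A=1.0):
    """dN/dS as a broken power law: index n1 below the break flux S_b and
    n2 above it, with S_b used as the pivot so the branches are continuous."""
    S = np.asarray(S, dtype=float)
    below = A * (S / S_b) ** (-n1)
    above = A * (S / S_b) ** (-n2)
    return np.where(S < S_b, below, above)
```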

  17. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
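The continuization step can be illustrated in its simplest form: each discrete score point is replaced by a Gaussian bump, yielding a smooth density. The full kernel-equating method additionally applies a linear rescaling so the continuized mean and variance match the discrete ones; that rescaling is omitted in this sketch (the plain mixture preserves the mean but inflates the variance by h²).

```python
import numpy as np

def continuize(scores, probs, h, grid):
    """Gaussian-kernel continuization of a discrete score distribution:
    the continuized density on `grid` is a mixture of normals N(x_j, h^2)
    weighted by the score probabilities p_j."""
    scores = np.asarray(scores, float)[:, None]
    probs = np.asarray(probs, float)[:, None]
    grid = np.asarray(grid, float)[None, :]
    dens = np.exp(-0.5 * ((grid - scores) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return (probs * dens).sum(axis=0)
```

Letting h shrink toward 0 recovers the discrete distribution, while a uniform or logistic kernel (the alternatives compared in the report) simply replaces the Gaussian bump shape.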

  18. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...

  19. Agreement between gamma passing rates using computed tomography in radiotherapy and secondary cancer risk prediction from more advanced dose calculated models

    PubMed Central

    Balosso, Jacques

    2017-01-01

    Background During the past decades, in radiotherapy, dose distributions were calculated using density-correction methods with pencil-beam (type 'a') algorithms. The objectives of this study are to assess and evaluate the impact of the dose distribution shift on the predicted secondary cancer risk (SCR) when using more advanced dose calculation algorithms, point kernel (type 'b'), which account for changes in lateral electron transport. Methods Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types. A gamma index (γ) analysis was used to compare the dose distributions in the lung. The organ equivalent dose (OED) was calculated with three different models: the linear, the linear-exponential, and the plateau dose-response curves. The excess absolute risk ratio (EAR) was also evaluated as EAR = OED type 'b' / OED type 'a'. Results The γ analysis indicated an acceptable dose distribution agreement of 95% with 3%/3 mm. However, the γ-maps displayed dose displacements >1 mm around the healthy lungs. Compared to type 'a', the OED values from type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. Conclusions The shift of dose calculation in radiotherapy, according to the algorithm, can significantly influence the SCR prediction and the plan optimization, since OEDs are calculated from the DVH for a specific treatment. The agreement between dose distribution and SCR prediction depends on the dose-response models and epidemiological data. In addition, a γ passing rate of 3%/3 mm does not reflect the difference, up to 15%, in the SCR predictions resulting from alternative algorithms.
    Considering that modern algorithms are more accurate and show the dose distributions more precisely, but that the prediction of absolute SCR is still very imprecise, only the EAR ratio should be used to rank radiotherapy plans. PMID:28811995
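Under the linear dose-response model, the OED reduces to the volume-weighted mean organ dose, and the EAR ratio used above is simply the ratio of the two OEDs. The dose-volume histogram layout and function names are assumptions of this sketch:

```python
import numpy as np

def oed_linear(doses, volumes):
    """Organ equivalent dose under the linear dose-response model: the
    volume-weighted mean organ dose from a (dose, volume) histogram."""
    volumes = np.asarray(volumes, float)
    return np.sum(np.asarray(doses, float) * volumes) / volumes.sum()

def ear_ratio(dvh_b, dvh_a):
    """EAR ratio as defined in the abstract: OED(type 'b') / OED(type 'a')."""
    return oed_linear(*dvh_b) / oed_linear(*dvh_a)
```

The linear-exponential and plateau models mentioned in the abstract would replace the weighting of each dose bin with a nonlinear response function before averaging.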

  20. Buildup factor and mechanical properties of high-density cement mixed with crumb rubber and prompt gamma ray study

    NASA Astrophysics Data System (ADS)

    Aim-O, P.; Wongsawaeng, D.; Tancharakorn, S.; Sophon, M.

    2017-09-01

    High-density cement mixed with crumb rubber has been studied as a gamma-ray and neutron shielding material, especially for photonuclear reactions that may occur at accelerators where both types of radiation exist. The buildup factors from gamma-ray scattering, prompt and secondary gamma-ray emissions from neutron capture, and mechanical properties were evaluated. For the buildup factor studies, two different geometries were used: narrow beam and broad beam. Prompt Gamma Neutron Activation Analysis (PGNAA) was carried out to determine the prompt and secondary gamma-ray emissions. The compressive strength of the samples was evaluated using a compression testing machine with a central-point-loading crushing test. The results revealed that the addition of crumb rubber increased the buildup factor. Gamma-ray spectra following PGNAA revealed no prompt or secondary gamma-ray emission. Mechanical testing indicated that the compressive strength of the shielding material decreased with increasing volume percentage of crumb rubber.

  1. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  2. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  3. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...

  4. A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals

    NASA Technical Reports Server (NTRS)

    Skelton, R. T.; Mahoney, W. A.

    1993-01-01

    We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.

  5. Three-dimensional waveform sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Marquering, Henk; Nolet, Guust; Dahlen, F. A.

    1998-03-01

    The sensitivity of intermediate-period (~10-100s) seismic waveforms to the lateral heterogeneity of the Earth is computed using an efficient technique based upon surface-wave mode coupling. This formulation yields a general, fully fledged 3-D relationship between data and model without imposing smoothness constraints on the lateral heterogeneity. The calculations are based upon the Born approximation, which yields a linear relation between data and model. The linear relation ensures fast forward calculations and makes the formulation suitable for inversion schemes; however, higher-order effects such as wave-front healing are neglected. By including up to 20 surface-wave modes, we obtain Fréchet, or sensitivity, kernels for waveforms in the time frame that starts at the S arrival and which includes direct and surface-reflected body waves. These 3-D sensitivity kernels provide new insights into seismic-wave propagation, and suggest that there may be stringent limitations on the validity of ray-theoretical interpretations. Even recently developed 2-D formulations, which ignore structure out of the source-receiver plane, differ substantially from our 3-D treatment. We infer that smoothness constraints on heterogeneity, required to justify the use of ray techniques, are unlikely to hold in realistic earth models. This puts the use of ray-theoretical techniques into question for the interpretation of intermediate-period seismic data. 
The computed 3-D sensitivity kernels display a number of phenomena that are counter-intuitive from a ray-geometrical point of view: (1) body waves exhibit significant sensitivity to structure up to 500km away from the source-receiver minor arc; (2) significant near-surface sensitivity above the two turning points of the SS wave is observed; (3) the later part of the SS wave packet is most sensitive to structure away from the source-receiver path; (4) the sensitivity of the higher-frequency part of the fundamental surface-wave mode is wider than for its faster, lower-frequency part; (5) delayed body waves may considerably influence fundamental Rayleigh and Love waveforms. The strong sensitivity of waveforms to crustal structure due to fundamental-mode-to-body-wave scattering precludes the use of phase-velocity filters to model body-wave arrivals. Results from the 3-D formulation suggest that the use of 2-D and 1-D techniques for the interpretation of intermediate-period waveforms should seriously be reconsidered.

  6. Brownian motion of a nano-colloidal particle: the role of the solvent.

    PubMed

    Torres-Carbajal, Alexis; Herrera-Velarde, Salvador; Castañeda-Priego, Ramón

    2015-07-15

    Brownian motion is a feature of colloidal particles immersed in a liquid-like environment. Usually, it can be described by means of the generalised Langevin equation (GLE) within the framework of the Mori theory. In principle, all quantities that appear in the GLE can be calculated from the molecular information of the whole system, i.e., colloids and solvent molecules. In this work, by means of extensive Molecular Dynamics simulations, we study the effects of the microscopic details and the thermodynamic state of the solvent on the movement of a single nano-colloid. In particular, we consider a two-dimensional model system in which the mass and size of the colloid are two and one orders of magnitude, respectively, larger than the ones associated with the solvent molecules. The latter ones interact via a Lennard-Jones-type potential to tune the nature of the solvent, i.e., it can be either repulsive or attractive. We choose the linear momentum of the Brownian particle as the observable of interest in order to fully describe the Brownian motion within the Mori framework. We particularly focus on the colloid diffusion at different solvent densities and two temperature regimes: high and low (near the critical point) temperatures. To reach our goal, we have rewritten the GLE as a Volterra integral equation of the second kind in order to compute the memory kernel in real space. With this kernel, we evaluate the momentum-fluctuating force correlation function, which is of particular relevance since it allows us to establish when the stationarity condition has been reached. Our findings show that even at high temperatures, the details of the attractive interaction potential among solvent molecules induce important changes in the colloid dynamics. Additionally, near the critical point, the dynamical scenario becomes more complex; all the correlation functions decay slowly in an extended time window, however, the memory kernel seems to be only a function of the solvent density. 
Thus, the explicit inclusion of the solvent in the description of Brownian motion allows us to better understand the behaviour of the memory kernel at those thermodynamic states near the critical region without any further approximation. This information is useful to elaborate more realistic descriptions of Brownian motion that take into account the particular details of the host medium.
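
    The inversion described above, rewriting the GLE as a second-kind Volterra equation and solving for the memory kernel on a time grid, can be sketched as follows. This is a minimal illustration, not the authors' code: a synthetic autocorrelation function C(t) is generated from a known exponential kernel K(t) = k*exp(-t/tau) (for which the GLE reduces to a damped-oscillator equation for C), and the kernel is then recovered by a trapezoid-rule recursion on C''(t) = -K(t) C(0) - int_0^t K(s) C'(t-s) ds. All parameter values are illustrative.

```python
import numpy as np

# Known exponential memory kernel K(t) = k*exp(-t/tau); the autocorrelation
# function then solves C'' + C'/tau + k*C = 0 with C(0)=1, C'(0)=0.
k, tau = 4.0, 1.0
dt, N = 1e-3, 3001
t = dt * np.arange(N)
w = np.sqrt(k - 1.0 / (4.0 * tau**2))
C = np.exp(-t / (2 * tau)) * (np.cos(w * t) + np.sin(w * t) / (2 * tau * w))

def memory_kernel(C, dt):
    """Recover K(t) from C(t) by solving the second-kind Volterra equation
    C''(t) = -K(t) C(0) - int_0^t K(s) C'(t-s) ds with the trapezoid rule."""
    Cp = np.gradient(C, dt, edge_order=2)    # C'(t)
    Cpp = np.gradient(Cp, dt, edge_order=2)  # C''(t)
    K = np.zeros_like(C)
    K[0] = -Cpp[0] / C[0]                    # t = 0: integral vanishes
    for n in range(1, len(C)):
        # trapezoid quadrature of the convolution, excluding the K[n] term
        conv = 0.5 * K[0] * Cp[n] + np.dot(K[1:n], Cp[n - 1:0:-1])
        K[n] = (-Cpp[n] - dt * conv) / (C[0] + 0.5 * dt * Cp[0])
    return K

K = memory_kernel(C, dt)   # should reproduce k * exp(-t / tau)
```

    Because the second-kind form is well-posed, the trapezoid recursion is stable; recovering the kernel directly from the first-kind equation would be considerably more error-prone.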

  7. Application of information-theoretic measures to quantitative analysis of immunofluorescent microscope imaging.

    PubMed

    Shutin, Dmitriy; Zlobinskaya, Olga

    2010-02-01

    The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As distance measures, the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models, closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework over simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex responsible for a rare hereditary multi-system disorder, trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
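
    For the Gaussian model, the closed-form Hellinger distance the abstract alludes to is a standard result; a minimal sketch, cross-checked against direct numerical integration of the defining integral (all parameter values illustrative):

```python
import numpy as np

def hellinger_gaussian(m1, s1, m2, s2):
    """Closed-form Hellinger distance between N(m1, s1^2) and N(m2, s2^2):
    H^2 = 1 - sqrt(2*s1*s2/(s1^2+s2^2)) * exp(-(m1-m2)^2/(4*(s1^2+s2^2)))."""
    h2 = 1.0 - (np.sqrt(2 * s1 * s2 / (s1**2 + s2**2))
                * np.exp(-(m1 - m2)**2 / (4 * (s1**2 + s2**2))))
    return np.sqrt(h2)

def pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s)**2) / (s * np.sqrt(2 * np.pi))

# numerical check: H^2 = (1/2) * int (sqrt(p) - sqrt(q))^2 dx
x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]
p, q = pdf(x, 0.0, 1.0), pdf(x, 1.5, 2.0)
h2_num = 0.5 * np.sum((np.sqrt(p) - np.sqrt(q))**2) * dx
```

    The closed form makes the distance a smooth function of the fitted model parameters, which is what allows it to be evaluated without any numerical integration in practice.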

  8. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains, kernels inoculated with two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths most significant for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic, and Mahalanobis statistical discriminant models to differentiate between sterile controls, five concentration levels of OTA contamination in wheat kernels, and five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA-contaminated wheat kernels and between the five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels with correct classification rates above 98%. The non-OTA-producing P. verrucosum inoculated wheat kernels and OTA-contaminated wheat kernels subjected to hyperspectral imaging produced different spectral patterns.
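
    The key-wavelength step above (reshape the hyperspectral cube to a 2-D pixels-by-bands matrix, run PCA, inspect the loadings) can be sketched as follows. The spectra, wavelength grid, and absorption features below are entirely synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
pixels, bands = 500, 60
wavelengths = np.linspace(1000, 1600, bands)   # nm, illustrative NIR grid

# synthetic spectra: two latent absorption features plus sensor noise
base = np.exp(-((wavelengths - 1200) / 40)**2)
feat = np.exp(-((wavelengths - 1450) / 30)**2)
X = (rng.normal(1.0, 0.3, (pixels, 1)) * base
     + rng.normal(0.5, 0.2, (pixels, 1)) * feat
     + rng.normal(0, 0.01, (pixels, bands)))

# PCA on the mean-centred 2-D matrix via SVD
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)

# "key wavelengths": bands with the largest absolute loading on PC1 and PC2
key1 = wavelengths[np.argmax(np.abs(Vt[0]))]
key2 = wavelengths[np.argmax(np.abs(Vt[1]))]
```

    On this synthetic cube the two leading loadings peak near the two planted absorption features, which is the behaviour the key-wavelength selection relies on.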

  9. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can substantially improve FMT reconstruction. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method, adopted from machine learning for pattern analysis, is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, a new system matrix is obtained by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process: we convert the FMT reconstruction problem into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
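
    The forward-model substitution described above (x = K·alpha, so the system matrix becomes A·K and alpha is reconstructed instead of x) can be shown on a toy problem. Everything here is a hypothetical stand-in: the "anatomy" is a 1-D label vector, the sensitivity matrix is random, and a least-squares solve replaces the paper's iterative reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_meas = 40, 5

# hypothetical anatomical value per finite-element node (e.g. from micro-CT):
# nodes 0-9 lie inside a structure, the rest are background
anat = np.zeros(n_nodes)
anat[:10] = 1.0

# Gaussian kernel matrix built from the anatomical values
sigma = 0.1
K = np.exp(-(anat[:, None] - anat[None, :])**2 / (2 * sigma**2))

# true fluorophore concentration follows the anatomy
x_true = np.zeros(n_nodes)
x_true[:10] = 1.0

A = rng.normal(size=(n_meas, n_nodes))   # stand-in sensitivity matrix
y = A @ x_true                           # simulated measurements

# kernelised forward model y = (A K) alpha; solve for alpha, then x = K alpha
alpha, *_ = np.linalg.lstsq(A @ K, y, rcond=None)
x_rec = K @ alpha
```

    Note that only 5 measurements recover all 40 nodal values here: the kernel matrix confines the solution to functions consistent with the anatomy, which is exactly the regularising role the anatomical guidance plays.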

  10. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, meaning that it requires no assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels: normal, Epanechnikov, biweight, and triweight. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can serve as an alternative method for determining who is eligible for a credit loan. For our data, the normal kernel is the most suitable choice for the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
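
    A density-based kernel discriminant rule of the kind discussed above can be sketched as follows: estimate each class's density with a kernel density estimate (here the normal kernel) and assign a point to the class with the higher estimated density. The two-feature "good/bad applicant" data, bandwidth, and class centres are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic 2-feature credit data: "good" vs "bad" applicants (illustrative)
good = rng.normal([1.0, 1.0], 0.8, (300, 2))
bad = rng.normal([-1.0, -1.0], 0.8, (300, 2))

def kde(points, x, h):
    """Kernel density estimate with the normal (Gaussian) kernel, bandwidth h."""
    d2 = np.sum((x[:, None, :] - points[None, :, :])**2, axis=2)
    return np.mean(np.exp(-d2 / (2 * h**2)), axis=1) / (2 * np.pi * h**2)

def classify(x, h=0.5):
    """Kernel discriminant rule: pick the class with the higher estimated
    density (equal priors assumed). Returns 1 for 'good', 0 for 'bad'."""
    return (kde(good, x, h) > kde(bad, x, h)).astype(int)

test_good = rng.normal([1.0, 1.0], 0.8, (200, 2))
test_bad = rng.normal([-1.0, -1.0], 0.8, (200, 2))
sensitivity = np.mean(classify(test_good) == 1)
specificity = np.mean(classify(test_bad) == 0)
```

    Swapping the normal kernel for Epanechnikov, biweight, or triweight only changes the weight function inside `kde`; the discriminant rule itself is unchanged.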

  11. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
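
    The weighted eigenfunction expansion at the heart of the framework above (smoothed signal = sum of exp(-lambda_i t) times the eigenfunction coefficients) can be sketched in one dimension. A ring-graph Laplacian stands in for the Laplace-Beltrami operator on a surface; the signal, noise level, and diffusion time are illustrative:

```python
import numpy as np

# Heat kernel smoothing as a weighted eigenfunction expansion, on a 1-D ring
# graph whose Laplacian eigenvectors stand in for Laplace-Beltrami
# eigenfunctions on a surface mesh.
n = 256
L = (2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0)
     - np.roll(np.eye(n), -1, axis=0))
lam, psi = np.linalg.eigh(L)     # eigenvalues, orthonormal eigenfunctions

rng = np.random.default_rng(3)
theta = 2 * np.pi * np.arange(n) / n
signal = np.cos(theta)           # smooth ground truth
noisy = signal + rng.normal(0, 0.3, n)

def heat_smooth(f, t):
    """f_t = sum_i exp(-lam_i * t) <f, psi_i> psi_i, i.e. kernel regression
    with heat-kernel weights; equivalent to running isotropic diffusion
    for time t, but computed analytically from the spectrum."""
    coeff = psi.T @ f
    return psi @ (np.exp(-lam * t) * coeff)

smoothed = heat_smooth(noisy, t=5.0)
```

    Because the diffusion is solved analytically in the eigenbasis, there is no time-stepping and hence none of the numerical inaccuracy or slow convergence the abstract attributes to PDE-based implementations.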

  12. An integrated method for atherosclerotic carotid plaque segmentation in ultrasound image.

    PubMed

    Qian, Chunjun; Yang, Xiaoping

    2018-01-01

    Carotid artery atherosclerosis is an important cause of stroke. Ultrasound imaging has been widely used in the diagnosis of atherosclerosis, so segmenting atherosclerotic carotid plaque in ultrasound images is an important task: accurate plaque segmentation is helpful for measuring carotid plaque burden. In this paper, we propose and evaluate a novel learning-based integrated framework for plaque segmentation. In our study, four different classification algorithms, along with the auto-context iterative algorithm, were employed to integrate features from ultrasound images, together with the iteratively estimated and refined probability maps, for pixel-wise classification. The four classification algorithms were support vector machine with a linear kernel, support vector machine with a radial basis function kernel, AdaBoost, and random forest. The plaque segmentation was performed on the generated probability map. The performance of the four learning-based plaque segmentation methods was tested on 29 B-mode ultrasound images. The evaluation indices for our proposed methods consisted of sensitivity, specificity, Dice similarity coefficient, overlap index, error of area, absolute error of area, point-to-point distance, and Hausdorff point-to-point distance, along with the area under the ROC curve. The segmentation method integrating random forest and an auto-context model obtained the best results (sensitivity 80.4 ± 8.4%, specificity 96.5 ± 2.0%, Dice similarity coefficient 81.0 ± 4.1%, overlap index 68.3 ± 5.8%, error of area -1.02 ± 18.3%, absolute error of area 14.7 ± 10.9%, point-to-point distance 0.34 ± 0.10 mm, Hausdorff point-to-point distance 1.75 ± 1.02 mm, and area under the ROC curve 0.897), which compare favorably with existing methods. 
The learning-based integrated framework investigated in this study could be useful for atherosclerotic carotid plaque segmentation, which will be helpful for the measurement of carotid plaque burden. Copyright © 2017 Elsevier B.V. All rights reserved.
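
    Two of the evaluation indices listed above, the Dice similarity coefficient and the overlap (Jaccard) index, have simple set-based definitions worth making explicit. A sketch on small illustrative binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Overlap (Jaccard) index: |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# toy ground-truth and segmentation masks: two 6x6 squares offset by 1 pixel
gt = np.zeros((10, 10), bool)
gt[2:8, 2:8] = True      # 36 pixels
seg = np.zeros((10, 10), bool)
seg[3:9, 3:9] = True     # 36 pixels, intersection is 5x5 = 25
```

    The two indices are monotonically related (Dice = 2J/(1+J)), which is why papers often report both without them carrying independent information.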

  13. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry, with potentially devastating consequences for corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were collected with a fluorescence hyperspectral imaging system while corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among groups of kernels with different aflatoxin contamination levels: the peak shifted toward longer wavelengths in the blue region for highly contaminated kernels and toward shorter wavelengths for clean kernels. Highly contaminated kernels also had a lower fluorescence peak magnitude than less contaminated kernels. A general negative correlation was also noted between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), was 0.72 for the multiple linear regression model. A multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and >=100 ng g(-1) (parts per billion), were significantly different from each other at the alpha = 0.01 level. 
Classification accuracy under a two-class schema ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g(-1) was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable to estimating aflatoxin content in individual corn kernels.

  14. Gamma-ray burst theory: Back to the drawing board

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.

    1994-01-01

    Gamma-ray bursts have always been intriguing sources to study in terms of particle acceleration, but not since their discovery two decades ago has the theory of these objects been in such turmoil. Prior to the launch of Compton Gamma-Ray Observatory and observations by Burst and Transient Source Experiment (BATSE), there was strong evidence pointing to magnetized Galactic neutron stars as the sources of gamma-ray bursts. However, since BATSE the observational picture has changed dramatically, requiring much more distant and possibly cosmological sources. I review the history of gamma-ray burst theory from the era of growing consensus for nearby neutron stars to the recent explosion of halo and cosmological models and the impact of the present confusion on the particle acceleration problem.

  15. GRO: Black hole models for gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Ruderman, Malvin

    1995-01-01

    The Burst and Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory (CGRO) has established that the distribution of gamma-ray bursts (GRB's) is isotropic but bounded radially. This finding suggests that the bursts are either cosmological or they originate from an extended Galactic halo. The implied luminosities and the observed variability of the GRB's on time scales as short as one millisecond suggest that they originate from compact objects. We are presently studying black hole models for GRB's. Any such model must produce a non-thermal photon spectrum to agree with the observed properties. For a wide range of burst parameters the assumed bursting source consists of a non-thermal electron-positron-photon plasma of very high density. It seems possible to produce such a plasma in accretion onto black holes. In our on-going work, we are developing the kinetic theory for a non-equilibrium pair plasma. The main new features of our work are as follows: (1) We do not assume the presence of a thermal electron bath. (2) Non-thermal, high-energy pairs are allowed to have an arbitrary concentration and energy distribution. (3) There is no soft photon source in our model; initially all the photons in the plasma are either energetic X-rays or gamma-rays. (4) The initial energy distribution of the pairs as well as photons is arbitrary. (5) We collect the analytical expressions for the kinetic kernels for all relevant processes. And (6) we present a different approach to finding the time-evolution of pair and photon spectra, which is a combination of the kinetic-theory and the non-linear Monte-Carlo schemes. We have developed many Monte-Carlo programs to model various processes, to take into account the time evolution, and to incorporate various physical effects which are unique to non-thermal plasmas. The hydrodynamics of fireballs in GRB's has been studied previously; applying results from kinetic theory will improve our understanding of these systems.

  16. A method for removing arm backscatter from EPID images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Brian W.; Greer, Peter B.; School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, New South Wales 2308

    2013-07-15

    Purpose: To develop a method for removing the support arm backscatter from images acquired using current Varian electronic portal imaging devices (EPIDs). Methods: The effect of arm backscatter on EPID images was modeled using a kernel convolution method. The parameters of the model were optimized by comparing on-arm images to off-arm images. The model was used to develop a method to remove the effect of backscatter from measured EPID images. The performance of the backscatter removal method was tested by comparing backscatter-corrected on-arm images to measured off-arm images for 17 rectangular fields of different sizes and locations on the imager. The method was also tested using on- and off-arm images from 42 intensity modulated radiotherapy (IMRT) fields. Results: Images generated by the backscatter removal method gave consistently better agreement with off-arm images than images without backscatter correction. For the 17 rectangular fields studied, the root mean square difference of in-plane profiles compared to off-arm profiles was reduced from 1.19% (standard deviation 0.59%) on average without backscatter removal to 0.38% (standard deviation 0.18%) when using the backscatter removal method. When comparing to the off-arm images from the 42 IMRT fields, the mean gamma and percentage of pixels with gamma < 1 were improved by the backscatter removal method in all but one of the images studied. The mean gamma value (1%, 1 mm) for the IMRT fields studied was reduced from 0.80 to 0.57 by using the backscatter removal method, while the mean gamma pass rate was increased from 72.2% to 84.6%. Conclusions: A backscatter removal method has been developed to estimate the image acquired by the EPID without any arm backscatter from an image acquired in the presence of arm backscatter. The method has been shown to produce consistently reliable results for a wide range of field sizes and jaw configurations.
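
    The kernel-convolution backscatter model above can be illustrated in one dimension: the measured image is modelled as the primary image plus a broad, low-amplitude kernel convolved with the primary, and the primary is recovered by fixed-point iteration. The profile shape, kernel width, and 20% backscatter fraction are invented for illustration and are not Varian's calibration:

```python
import numpy as np

n = 128
x = np.arange(n)

# idealised 1-D field profile: open field with a small out-of-field floor
primary = np.where((x > 40) & (x < 90), 1.0, 0.05)

# hypothetical broad backscatter kernel carrying 20% of the primary signal
shape = np.exp(-np.abs(x - n // 2) / 15.0)
kern = 0.2 * shape / shape.sum()

def backscatter(p):
    return np.convolve(p, kern, mode="same")

# forward model: measured = primary + kernel (*) primary
measured = primary + backscatter(primary)

# removal: fixed-point iteration p <- measured - kernel (*) p,
# which converges because the kernel sums to less than 1
p_est = measured.copy()
for _ in range(50):
    p_est = measured - backscatter(p_est)
```

    Since the kernel integrates to 0.2, each iteration shrinks the error by at least that factor, so a few dozen iterations recover the primary profile to machine precision in this noise-free sketch.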

  17. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and phylogenetic profiling is one of them. A phylogenetic profile is a representation of a protein that encodes its evolutionary history. In this paper we propose a method for classifying phylogenetic profiles using a supervised machine learning method, support vector machine classification with a radial basis function kernel, to identify functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes, using the functional annotations available in the MIPS database. Our experiments show that the radial basis kernel performs similarly to the polynomial kernel in some functional classes, both outperform the linear and tree kernels, and overall the radial basis kernel outperforms the polynomial, linear, and tree kernels. These results indicate that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene function from phylogenetic profiles.
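
    The idea of applying an RBF kernel to presence/absence bit-vectors can be sketched without an SVM library. Below, a kernel perceptron serves as a lightweight stand-in for the paper's RBF-kernel SVM (both classify via sign of a kernel-weighted sum over training profiles); the toy profiles, the 20-genome setting, and the 10% flip noise are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# toy phylogenetic profiles: presence/absence bits across 20 genomes;
# proteins in the same functional class share a prototype profile plus noise
n_genomes = 20
proto_a = rng.integers(0, 2, n_genomes)
proto_b = 1 - proto_a

def make(proto, m, flip=0.1):
    X = np.tile(proto, (m, 1))
    mask = rng.random(X.shape) < flip
    return np.where(mask, 1 - X, X).astype(float)

X = np.vstack([make(proto_a, 40), make(proto_b, 40)])
y = np.array([1] * 40 + [-1] * 40)

def rbf(U, V, gamma=0.5):
    d2 = ((U[:, None, :] - V[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

# kernel perceptron: on a mistake, increment that sample's dual weight
G = rbf(X, X)
alpha = np.zeros(len(y))
for _ in range(20):
    for i in range(len(y)):
        if np.sign(G[i] @ (alpha * y)) != y[i]:
            alpha[i] += 1.0

Xt = np.vstack([make(proto_a, 25), make(proto_b, 25)])
yt = np.array([1] * 25 + [-1] * 25)
pred = np.sign(rbf(Xt, X) @ (alpha * y))
acc = np.mean(pred == yt)
```

    For binary profiles the squared Euclidean distance equals the Hamming distance, so the RBF kernel decays with the number of genomes in which two profiles disagree, which is the evolutionary-history similarity the method exploits.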

  18. Intraear Compensation of Field Corn, Zea mays, from Simulated and Naturally Occurring Injury by Ear-Feeding Larvae.

    PubMed

    Steckel, S; Stewart, S D

    2015-06-01

    Ear-feeding larvae, such as corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stage. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggested that kernels within the same ear could compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between the number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to erroneous estimates. We propose a more flexible modeling based on a gamma-distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validity of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. 
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution together with the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling opens the way for future investigations, in particular examining the characteristics of pre-processing strategies.
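
    The normal-gamma model above (observed intensity = gamma-distributed signal + normally distributed noise) can be illustrated with a simple moment-matching fit. This is a sketch, not the NormalGamma package's estimator: the noise parameters are taken as known (as if from negative controls), and the gamma parameters follow from E[X] = k*theta + mu and Var[X] = k*theta^2 + sigma^2. All parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# simulate the normal-gamma convolution: X = S + B,
# S ~ Gamma(k, theta), B ~ Normal(mu, sigma^2)
k_true, theta_true = 2.0, 50.0
mu, sigma = 100.0, 10.0      # noise parameters, assumed known from controls
n = 200000
x = rng.gamma(k_true, theta_true, n) + rng.normal(mu, sigma, n)

# moment matching: subtract the known noise moments, then invert
# m1 = k*theta and m2 = k*theta^2 for (k, theta)
m1 = x.mean() - mu
m2 = x.var() - sigma**2
theta_hat = m2 / m1
k_hat = m1**2 / m1 * m1 / m2  # = m1^2 / m2
```

    A background-corrected signal estimate would then be built from the fitted model; maximum-likelihood fitting (as in the paper) is more efficient than these moments but needs the convolution density.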

  20. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  1. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  2. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
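
    The random Fourier features mentioned above approximate the RBF kernel k(x, z) = exp(-gamma ||x - z||^2) by an explicit low-dimensional map z(x) = sqrt(2/D) cos(Wx + b), with W drawn from N(0, 2*gamma) and b uniform on [0, 2*pi); inner products of the features then approximate kernel evaluations, so no kernel matrix is ever formed. A minimal sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)

def rff(X, D, gamma):
    """Random Fourier feature map whose inner products approximate the
    RBF kernel exp(-gamma * ||x - z||^2)."""
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2 * gamma), (d, D))
    b = rng.uniform(0.0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(50, 4))
gamma = 0.5

# exact kernel matrix, for comparison only
d2 = ((X[:, None] - X[None, :])**2).sum(-1)
K_exact = np.exp(-gamma * d2)

Z = rff(X, D=5000, gamma=gamma)
K_approx = Z @ Z.T
err = np.max(np.abs(K_approx - K_exact))
```

    A linear ranking model trained on Z then behaves like a kernelised one, which is how the kernel-matrix bottleneck is avoided; the approximation error shrinks like 1/sqrt(D).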

  3. Recording from two neurons: second-order stimulus reconstruction from spike trains and population coding.

    PubMed

    Fernandes, N M; Pinto, B D L; Almeida, L O B; Slaets, J F W; Köberle, R

    2010-10-01

    We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices by expanding our variables in a convenient set of basis functions. This requires approximating the spike-train four-point functions by combinations of two-point functions, using relations that would hold exactly for Gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution of the second-order kernels to stimulus reconstruction, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.
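
    The structure of a second-order Volterra representation can be shown on a toy identification problem. This is not the paper's basis-function method: here the first- and second-order kernels of a known, noise-free quadratic system with short memory are recovered by plain least squares on lagged inputs, with all values invented:

```python
import numpy as np

rng = np.random.default_rng(9)
M = 3                                 # kernel memory, illustrative
h1 = np.array([1.0, -0.5, 0.2])       # first-order kernel
h2 = np.array([[0.3, 0.1, 0.0],       # symmetric second-order kernel
               [0.1, -0.2, 0.05],
               [0.0, 0.05, 0.1]])

x = rng.normal(size=2000)
N = len(x)
# lagged input matrix: column i holds x[t - i]
X1 = np.column_stack([x[M - 1 - i:N - i] for i in range(M)])

# second-order Volterra output: y = sum_i h1[i] x[t-i]
#                                   + sum_ij h2[i,j] x[t-i] x[t-j]
y = X1 @ h1 + np.einsum('ni,ij,nj->n', X1, h2, X1)

# least-squares identification: linear terms plus upper-triangular products
quad = np.column_stack([X1[:, i] * X1[:, j]
                        for i in range(M) for j in range(i, M)])
D = np.column_stack([X1, quad])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
h1_hat = coef[:M]
```

    The quadratic design matrix already has M(M+1)/2 columns for one input; with many neurons and long memories it grows quickly, which is the matrix-size problem the basis-function expansion in the paper is designed to avoid.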

  4. Gravitational lensing, time delay, and gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Mao, Shude

    1992-01-01

    The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at delta(t) of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median delta(t) of about 1/h months, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.

  5. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  6. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  7. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  8. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels, the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
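    A minimal sketch of the construction, assuming the standard signal-processing form of the Wigner function; the substitution shown is one reading of the abstract, not an equation taken from the paper:

    ```latex
    % Traditional Wigner function of a signal f (Fourier kernel, real frequency u):
    W_f(x, u) = \int f\!\left(x + \tfrac{\xi}{2}\right) f^{*}\!\left(x - \tfrac{\xi}{2}\right) e^{-j 2\pi u \xi} \, d\xi
    % Laplace kernel version: the Fourier kernel is replaced by a Laplace kernel
    % with complex variable s, admitting exponentially decaying (evanescent) waves:
    W_f(x, s) = \int f\!\left(x + \tfrac{\xi}{2}\right) f^{*}\!\left(x - \tfrac{\xi}{2}\right) e^{-s \xi} \, d\xi, \qquad s \in \mathbb{C}
    ```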

  9. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.

  10. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg(-1) to 159 kJ·kg(-1). Many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  11. An experimental assessment of the imaging quality of the low energy gamma-ray telescope ZEBRA

    NASA Technical Reports Server (NTRS)

    Butler, R. C.; Caroli, E.; Dicocco, G.; Natalucci, L.; Spada, G.; Spizzichino, A.; Stephen, J. B.; Carter, J. N.; Charalambous, P. M.; Dean, A. J.

    1985-01-01

    One gamma-ray detection plane of the ZEBRA telescope, consisting of nine position sensitive scintillation crystal bars designed to operate over the spectral range 0.2 to 10 MeV, has been constructed in the laboratory. A series of experimental images has been generated using a scaled down flight pattern mask in conjunction with a diverging gamma-ray beam. Point and extended sources have been imaged in order to assess quantitatively the performance of the system.

  12. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both endosperm and germ sides of kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower with a different spectral shape in 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels into contaminated and healthy based on a 20 ppb threshold utilizing the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for germ side in the fluorescence data analysis. The germ side generally performed better than endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
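    A hedged sketch of this pipeline: PCA compression of averaged spectra followed by K-nearest-neighbors classification against the 20 ppb label threshold. The synthetic spectra below merely stand in for the paper's hyperspectral ROI averages.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    # Synthetic stand-ins: contaminated kernels show lower reflectance,
    # as reported in the abstract. Labels encode the 20 ppb threshold.
    rng = np.random.default_rng(0)
    n, bands = 300, 100
    healthy = rng.normal(1.0, 0.1, (n // 2, bands))
    contaminated = rng.normal(0.8, 0.1, (n // 2, bands))  # lower reflectance
    X = np.vstack([healthy, contaminated])
    y = np.array([0] * (n // 2) + [1] * (n // 2))         # 1 = above 20 ppb

    # Compress with PCA, then classify with K-nearest neighbors.
    scores = PCA(n_components=10).fit_transform(X)
    train = np.arange(0, n, 2)                            # even rows: train
    test = np.arange(1, n, 2)                             # odd rows: test
    knn = KNeighborsClassifier(n_neighbors=3).fit(scores[train], y[train])
    accuracy = knn.score(scores[test], y[test])
    ```

    On well-separated synthetic classes the accuracy is near 1.0; the paper's 92.67% reflects real, overlapping spectra.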

  13. Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme

    PubMed Central

    Warfield, Colleen Y.; Gilchrist, David G.

    1999-01-01

    Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675

  14. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel based techniques depends on the selection of kernel parameters; therefore, suitable parameter selection is an important problem for many kernel based techniques. This article presents a novel technique to learn the kernel parameters in the kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be utilized with any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
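    A hedged sketch of the idea: tune an RBF kernel width with differential evolution by maximizing a simple discrimination score (mean within-class similarity minus mean between-class similarity). The data and the criterion are illustrative, loosely analogous to the KFKT objective rather than taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(1)
    Xa = rng.normal(0.0, 1.0, (40, 5))   # class A samples (toy data)
    Xb = rng.normal(3.0, 1.0, (40, 5))   # class B samples (toy data)

    def rbf(X, Y, gamma):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def neg_score(params):
        # Negative discrimination score: DE minimizes, so negate.
        gamma = params[0]
        within = rbf(Xa, Xa, gamma).mean() + rbf(Xb, Xb, gamma).mean()
        between = 2.0 * rbf(Xa, Xb, gamma).mean()
        return -(within - between)

    result = differential_evolution(neg_score, bounds=[(1e-3, 10.0)], seed=1)
    best_gamma = result.x[0]
    ```

    Unlike grid search with cross-validation, the population-based search needs no kernel-parameter grid and handles continuous bounds directly.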

  15. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
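    For context, a minimal LS-SVM sketch under assumed notation: once a kernel matrix K and regularization parameter gamma are fixed, training reduces to a single linear system, which is why their joint selection (the paper's SDP contribution) dominates the modeling effort. The data and kernel below are illustrative only.

    ```python
    import numpy as np

    def lssvm_train(K, y, gamma):
        # Standard LS-SVM dual system:
        # [ 0   1^T        ] [b    ]   [0]
        # [ 1   K + I/gamma] [alpha] = [y]
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        rhs = np.concatenate([[0.0], y])
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]          # bias b, dual coefficients alpha

    def rbf_kernel(X, Y, sigma=1.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    # Toy separable data in 1-D.
    X = np.array([[0.0], [0.2], [3.0], [3.2]])
    y = np.array([-1.0, -1.0, 1.0, 1.0])
    b, alpha = lssvm_train(rbf_kernel(X, X), y, gamma=10.0)
    pred = np.sign(rbf_kernel(X, X) @ alpha + b)
    ```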

  16. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.

  17. Alpha-crystallins are involved in specific interactions with the murine gamma D/E/F-crystallin-encoding gene.

    PubMed

    Pietrowski, D; Durante, M J; Liebstein, A; Schmitt-John, T; Werner, T; Graw, J

    1994-07-08

    The promoter of the murine gamma E-crystallin (gamma E-Cry) encoding gene (gamma E-cry) was analyzed for specific interactions with lenticular proteins in a gel-retardation assay. A 21-bp fragment immediately downstream of the transcription initiation site (DOTIS) is demonstrated to be responsible for specific interactions with lens extracts. The DOTIS-binding protein(s) accept only the sense DNA strand as target; anti-sense or double-stranded DNA do not interact with these proteins. The DOTIS sequence element is highly conserved among the murine gamma D-, gamma E- and gamma F-cry and is present at comparable positions in the orthologous rat genes. Only a weak or even no protein-binding activity is observed if a few particular bases are changed, as in the rat gamma A-, gamma C- and gamma E-cry elements. DOTIS-binding proteins were found in commercially available bovine alpha-Cry preparations. The essential participation of alpha-Cry in the DNA-binding protein complex was confirmed using alpha-Cry-specific monoclonal antibody. The results reported here point to a novel function of alpha-Cry besides the structural properties in the lens.

  18. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement of iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with sharp kernel, and to make a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS with both medium kernel and sharp kernel applied. Image noise and the stent diameter were investigated. Image noise was measured both in background vessel and in-stent lumen as objective image evaluation. Image noise score and stent score were performed as subjective image evaluation. The CTCA images reconstructed with IRIS were associated with significant noise reduction compared to that of CTCA images reconstructed using FBP technique in both of background vessel and in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% in medium kernel (P

  19. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world application. Based on SRC, Yang et al. [10] devised a SRC steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle the data with highly nonlinear distribution. Kernel sparse representation-based classifier (KSRC) is a non-linear extension of SRC and can remedy the drawback of SRC. KSRC requires the use of a predetermined kernel function and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC only considers the within-class reconstruction residual while ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then we use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low dimension subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  20. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel or its normalized form with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. Then, we evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
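    The kernels named above can be sketched as follows. The TPPK form is the standard symmetrized product from the pairwise-kernel literature; the toy feature vectors are illustrative, not the paper's protein features.

    ```python
    import numpy as np

    def min_kernel(x, y):
        # Min (histogram intersection) kernel on nonnegative features.
        return np.minimum(x, y).sum()

    def normalized_min(x, y):
        # Cosine-style normalization of the Min kernel.
        return min_kernel(x, y) / np.sqrt(min_kernel(x, x) * min_kernel(y, y))

    def tppk(k, a, b, c, d):
        # Tensor Product Pairwise Kernel between pair (a, b) and pair (c, d):
        # symmetrized so the order within each pair does not matter.
        return k(a, c) * k(b, d) + k(a, d) * k(b, c)

    p1 = np.array([1.0, 0.0, 2.0])   # toy protein feature vectors
    p2 = np.array([0.5, 1.0, 1.0])
    p3 = np.array([1.0, 1.0, 0.0])
    p4 = np.array([0.0, 2.0, 1.0])
    val = tppk(normalized_min, p1, p2, p3, p4)
    ```

    A pairwise kernel like TPPK is what lets a standard SVM, which compares single objects, compare candidate *pairs* of proteins instead.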

  1. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    PubMed

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions has not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two different locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variations, respectively. Increase of KW and KT and reduction of KL/KT and KW/KT ratios always resulted in significant higher grain weight. Lines combining the Nanda 2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs definitely help realize the breeding goals.

  2. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu invariant moment contour information with feature point detection, aiming to solve problems of the traditional image stitching algorithm such as the time-consuming feature point extraction process, redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, employing the Hu invariant moments as a similarity measure to extract SIFT feature points in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and achieve seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
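    A hedged sketch of the matching measure: the Hellinger kernel (Bhattacharyya coefficient) on L1-normalized descriptors, which the paper substitutes for Euclidean distance. The random 128-D vectors merely stand in for real SIFT descriptors.

    ```python
    import numpy as np

    def hellinger_kernel(d1, d2):
        # Treat descriptors as discrete distributions and compare them with
        # the Bhattacharyya coefficient; identical inputs score exactly 1.0.
        d1 = d1 / d1.sum()              # L1-normalize
        d2 = d2 / d2.sum()
        return np.sqrt(d1 * d2).sum()

    rng = np.random.default_rng(2)
    desc_a = rng.random(128)            # SIFT descriptors are 128-dimensional
    # A slightly perturbed copy (a "true match") vs an unrelated descriptor.
    desc_b = desc_a + rng.normal(0, 0.01, 128).clip(-desc_a * 0.5, None)
    desc_c = rng.random(128)
    same = hellinger_kernel(desc_a, desc_b)
    diff = hellinger_kernel(desc_a, desc_c)
    ```

    Ranking candidate matches by this kernel (larger is more similar) downweights large single-bin differences, which is the usual motivation for preferring it over Euclidean distance on SIFT-like histograms.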

  4. Adaptive kernel function using line transect sampling

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2018-04-01

    The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is adopted to compare the performance of the proposed estimators with the classical kernel estimators.
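    For context, the classical (non-adaptive) estimator of f(0) can be sketched as follows, assuming a Gaussian kernel with reflection about zero and an illustrative bandwidth rule; the paper's adaptive kernel modifies this baseline.

    ```python
    import numpy as np

    def f0_kernel_estimate(distances, h):
        # Gaussian kernel density evaluated at the origin; the factor 2
        # reflects the one-sided perpendicular distances about x = 0.
        u = distances / h
        return 2.0 * np.exp(-0.5 * u ** 2).sum() / (
            len(distances) * h * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(3)
    d = np.abs(rng.normal(0.0, 1.0, 2000))   # half-normal detection distances
    h = 1.06 * d.std() * len(d) ** (-1 / 5)  # Silverman's rule (illustrative)
    f0 = f0_kernel_estimate(d, h)            # true f(0) = 2/sqrt(2*pi) ~ 0.798
    ```

    The estimate sits slightly below the true value even with reflection, which is the negative boundary bias the abstract refers to.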

  5. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  6. Design and Mechanical Stability Analysis of the Interaction Region for the Inverse Compton Scattering Gamma-Ray Source Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Khizhanok, Andrei

    Development of a compact gamma-ray source of high spectral brilliance and high pulse frequency has been within the scope of Fermi National Accelerator Laboratory for quite some time. The main goal of the project is to develop a setup to support gamma-ray detection tests and gamma-ray spectroscopy. Potential applications include, but are not limited to, nuclear astrophysics, nuclear medicine, and oncology (the 'gamma knife'). The present work covers multiple interconnected stages of development of the interaction region to ensure high levels of structural strength and vibrational resistance. Inverse Compton scattering is a complex phenomenon in which a charged particle transfers part of its energy to a photon. It requires extreme precision, as the interaction point is estimated to be 20 microm. The slightest deflection of the mirrors will reduce the conversion effectiveness by orders of magnitude. For acceptable conversion efficiency, the laser cavity must also have a finesse value >1000, which requires a trade-off between the size, mechanical stability, complexity, and price of the setup. This work focuses on the advantages and weak points of different interaction-region designs, as well as an in-depth description of the analyses performed, including laser cavity amplification and finesse estimates, natural frequency mapping, and harmonic analysis. Structural analysis is required as the interaction must occur under high-vacuum conditions.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kachru, Shamit; Paquette, Natalie M.; Volpato, Roberto

    Here, the simplest string theory compactifications to 3D with 16 supercharges (the heterotic string on $T^7$, and type II strings on $K3 \times T^3$) are related by U-duality, and share a moduli space of vacua parametrized by $O(8,24;\mathbb{Z}) \backslash O(8,24) / (O(8) \times O(24))$. One can think of this as the moduli space of even, self-dual 32-dimensional lattices with signature (8,24). At 24 special points in moduli space, the lattice splits as $\Gamma^{8,0} \oplus \Gamma^{0,24}$. $\Gamma^{0,24}$ can be the Leech lattice or any of 23 Niemeier lattices, while $\Gamma^{8,0}$ is the $E_8$ root lattice. We show that starting from this observation, one can find a precise connection between the Umbral groups and type IIA string theory on K3. This may provide a natural physical starting point for understanding Mathieu and Umbral moonshine. The maximal unbroken subgroups of Umbral groups in 6D (or any other limit) are those obtained by starting at the associated Niemeier point and moving in moduli space while preserving the largest possible subgroup of the Umbral group. To illustrate the action of these symmetries on BPS states, we discuss the computation of certain protected four-derivative terms in the effective field theory, and recover facts about the spectrum and symmetry representations of 1/2-BPS states.

  8. Lifetime Effective Dose Assessment Based on Background Outdoor Gamma Exposure in Chihuahua City, Mexico

    PubMed Central

    Luevano-Gurrola, Sergio; Perez-Tapia, Angelica; Pinedo-Alvarez, Carmelo; Carrillo-Flores, Jorge; Montero-Cabrera, Maria Elena; Renteria-Villalobos, Marusia

    2015-01-01

    Determining ionizing radiation in a geographic area serves to assess its effects on a population’s health. The aim of this study was to evaluate the spatial distribution of the background environmental outdoor gamma dose rates in Chihuahua City. This study also estimated the annual effective dose and the lifetime cancer risks of the population of this city. To determine the outdoor gamma dose rate in air, the annual effective dose and the lifetime cancer risk, 48 sampling points were randomly selected in Chihuahua City. Outdoor gamma dose rate measurements were carried out by using a Geiger-Müller counter. Outdoor gamma dose rates ranged from 113 to 310 nGy·h−1. At the same sites, 48 soil samples were taken to obtain the activity concentrations of 226Ra, 232Th and 40K and to calculate their terrestrial gamma dose rates. Radioisotope activity concentrations were determined by gamma spectrometry. Calculated gamma dose rates ranged from 56 to 193 nGy·h−1. Results indicated that the lifetime effective dose of the inhabitants of Chihuahua City is on average 19.8 mSv, resulting in a lifetime cancer risk of 0.001. In addition, the mean of the activity concentrations in soil were 52, 73 and 1097 Bq·kg−1, for 226Ra, 232Th and 40K, respectively. From the analysis, the spatial distribution of 232Th, 226Ra and 40K is to the north, to the north-center and to the south of city, respectively. In conclusion, the natural background gamma dose received by the inhabitants of Chihuahua City is high and mainly due to the geological characteristics of the zone. From the radiological point of view, this kind of study allows us to identify the importance of manmade environments, which are often highly variable and difficult to characterize. PMID:26437425
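    The dose conversion implied above can be sketched with the conventional UNSCEAR-style coefficients; the occupancy factor, Sv/Gy conversion coefficient, and 70-year lifetime are standard assumptions, not values quoted from the paper.

    ```python
    # Convert an outdoor gamma dose rate in air (nGy/h) to annual and
    # lifetime effective doses (mSv), using conventional coefficients.
    HOURS_PER_YEAR = 8760.0
    OUTDOOR_OCCUPANCY = 0.2          # fraction of time spent outdoors
    SV_PER_GY = 0.7                  # effective dose per absorbed dose in air
    LIFETIME_YEARS = 70.0

    def annual_effective_dose_msv(dose_rate_ngy_per_h):
        nsv = dose_rate_ngy_per_h * HOURS_PER_YEAR * OUTDOOR_OCCUPANCY * SV_PER_GY
        return nsv * 1e-6            # nSv -> mSv

    rate = 230.0                                  # nGy/h, mid-range value
    annual = annual_effective_dose_msv(rate)      # ~0.28 mSv per year
    lifetime = annual * LIFETIME_YEARS            # ~19.7 mSv over 70 years
    ```

    A mid-range rate of about 230 nGy/h reproduces a lifetime dose close to the 19.8 mSv average reported, suggesting a conversion of this conventional form underlies the paper's estimate.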

  9. Lifetime Effective Dose Assessment Based on Background Outdoor Gamma Exposure in Chihuahua City, Mexico.

    PubMed

    Luevano-Gurrola, Sergio; Perez-Tapia, Angelica; Pinedo-Alvarez, Carmelo; Carrillo-Flores, Jorge; Montero-Cabrera, Maria Elena; Renteria-Villalobos, Marusia

    2015-09-30

    Determining ionizing radiation in a geographic area serves to assess its effects on a population's health. The aim of this study was to evaluate the spatial distribution of the background environmental outdoor gamma dose rates in Chihuahua City. This study also estimated the annual effective dose and the lifetime cancer risks of the population of this city. To determine the outdoor gamma dose rate in air, the annual effective dose and the lifetime cancer risk, 48 sampling points were randomly selected in Chihuahua City. Outdoor gamma dose rate measurements were carried out by using a Geiger-Müller counter. Outdoor gamma dose rates ranged from 113 to 310 nGy·h(-1). At the same sites, 48 soil samples were taken to obtain the activity concentrations of (226)Ra, (232)Th and (40)K and to calculate their terrestrial gamma dose rates. Radioisotope activity concentrations were determined by gamma spectrometry. Calculated gamma dose rates ranged from 56 to 193 nGy·h(-1). Results indicated that the lifetime effective dose of the inhabitants of Chihuahua City is on average 19.8 mSv, resulting in a lifetime cancer risk of 0.001. In addition, the mean of the activity concentrations in soil were 52, 73 and 1097 Bq·kg(-1), for (226)Ra, (232)Th and (40)K, respectively. From the analysis, the spatial distribution of (232)Th, (226)Ra and (40)K is to the north, to the north-center and to the south of city, respectively. In conclusion, the natural background gamma dose received by the inhabitants of Chihuahua City is high and mainly due to the geological characteristics of the zone. From the radiological point of view, this kind of study allows us to identify the importance of manmade environments, which are often highly variable and difficult to characterize.

  10. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    PubMed

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentration were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures nor the pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also affected (P < 0.01) allocation of embryo chemical compounds. Negative correlations among embryo oil concentration and those of starch (r = 0.98, P < 0.01) and soluble sugars (r = 0.95, P < 0.05) were found. Coincidentally, embryos with low oil concentration had an increased (P < 0.05-0.10) scutellum cell area occupied by starch granules and fewer oil bodies. The effects of pollen source on both embryo/kernel ratio and allocation of embryo chemicals seem to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.

  11. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall be...

  12. 7 CFR 51.2090 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defect which makes a kernel or piece of kernel unsuitable for human consumption, and includes decay...: Shriveling when the kernel is seriously withered, shrunken, leathery, tough or only partially developed: Provided, that partially developed kernels are not considered seriously damaged if more than one-fourth of...

  13. Anisotropic hydrodynamics with a scalar collisional kernel

    NASA Astrophysics Data System (ADS)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λϕ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  14. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
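The first application, folding labels into a standard kernel, can be sketched with the classical "ideal" label kernel; the mixing weight and the way the label kernel is formed here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Hedged sketch: incorporating labels into a standard kernel via an "ideal"
# label kernel. The ideal kernel T has T[i, j] = 1 when samples i and j share
# a class and 0 otherwise; adding a multiple of T to a base kernel pulls
# same-class points together in feature space while keeping the matrix
# positive semidefinite. The mixing weight `lam` is an assumed free parameter.
def ideal_kernel(labels: np.ndarray) -> np.ndarray:
    return (labels[:, None] == labels[None, :]).astype(float)

def label_regularized_kernel(K: np.ndarray, labels: np.ndarray,
                             lam: float = 1.0) -> np.ndarray:
    return K + lam * ideal_kernel(labels)

# Toy data: two tight clusters, one per class.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, 0, 1, 1])
K = np.exp(-np.square(X - X.T))        # Gaussian base kernel (width assumed)
K_reg = label_regularized_kernel(K, y, lam=0.5)
```

Same-class entries of `K_reg` are uniformly boosted relative to cross-class entries, which is the effect the record describes as making the kernel "more appropriate for learning tasks".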

  15. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  16. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    NASA Astrophysics Data System (ADS)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  17. The site, size, spatial stability, and energetics of an X-ray flare kernel

    NASA Technical Reports Server (NTRS)

    Petrasso, R.; Gerassimenko, M.; Nolte, J.

    1979-01-01

The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise, and somewhat during its peak, are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2 × 10²⁴ erg/s is minimally required.

  18. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
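The noniterative, distance-constraint idea described above can be sketched as follows: feature-space distances to a set of anchor points are converted into input-space distances (an exact conversion for the Gaussian kernel), and the pre-image is then located by solving a linear least-squares system, as in classical MDS. The anchor choice and kernel width below are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of a distance-constraint pre-image. For the Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), the feature-space distance obeys
# ||phi(x) - phi(y)||^2 = 2 - 2 k(x, y), so an input-space distance can be
# recovered as ||x - y||^2 = -2 sigma^2 log k. Localizing a point from known
# distances to anchors is then a linear least-squares problem.
def gaussian_k(a, b, sigma):
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * sigma ** 2))

def preimage(feature_d2, X, sigma):
    """X: (n, d) anchor points; feature_d2: squared feature-space distances."""
    k = 1.0 - feature_d2 / 2.0                   # invert 2 - 2k
    input_d2 = -2.0 * sigma ** 2 * np.log(np.clip(k, 1e-12, None))
    # Linearize ||z - x_i||^2 = input_d2[i] by subtracting the i = 0 equation:
    A = 2.0 * (X[1:] - X[0])
    b = (np.sum(X[1:] ** 2, axis=1) - input_d2[1:]) \
        - (np.sum(X[0] ** 2) - input_d2[0])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# Sanity check: distances generated from a real point should recover it.
sigma = 1.0
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z_true = np.array([0.3, 0.6])
feature_d2 = 2.0 - 2.0 * gaussian_k(z_true, X, sigma)
z_rec = preimage(feature_d2, X, sigma)
```

This captures why the approach is noniterative and involves only linear algebra; the published method additionally restricts the constraints to nearest neighbors, which is omitted here.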

  19. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  20. Development of a kernel function for clinical data.

    PubMed

    Daemen, Anneleen; De Moor, Bart

    2009-01-01

For most diseases and examinations, clinical data such as age, gender and medical history guides clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for this data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function compared to the linear kernel function.
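A minimal sketch of a clinical-style kernel in this spirit: each continuous variable contributes a similarity that decreases linearly with the difference between two patients, normalized by the variable's observed range, and the per-variable similarities are averaged. This normalization is a sketch of the idea, not necessarily the paper's exact formula:

```python
import numpy as np

# Hedged sketch of a clinical-style kernel for continuous variables. Each
# variable j contributes (range_j - |x_ij - x_kj|) / range_j, so two patients
# with identical values get similarity 1 and patients at opposite ends of the
# observed range get 0; the overall kernel averages over variables.
def clinical_kernel(X):
    """X: (n_patients, n_vars) matrix of continuous clinical variables."""
    n, p = X.shape
    K = np.zeros((n, n))
    for j in range(p):
        col = X[:, j]
        spread = col.max() - col.min()
        if spread == 0:                 # constant variable: everyone identical
            K += 1.0
        else:
            K += (spread - np.abs(col[:, None] - col[None, :])) / spread
    return K / p

# Toy data: two identical patients (age, blood pressure) and one extreme case.
X = np.array([[50.0, 120.0], [50.0, 120.0], [80.0, 180.0]])
K = clinical_kernel(X)
```

Unlike a linear kernel, this similarity is bounded in [0, 1] per variable and is insensitive to the differing scales of clinical measurements, which is the disadvantage of the linear kernel the record alludes to.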

  1. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    DOE PAGES

    Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...

    2012-01-01

Large, complex scientific and engineering application codes have a significant investment in computational kernels to implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].

  2. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
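Why the Gaussian scale parameter matters can be seen directly from the kernel matrix; this synthetic illustration (not the paper's edge/interior selection rule) shows the two degenerate extremes that a good scale must avoid:

```python
import numpy as np

# Hedged illustration of Gaussian kernel scale sensitivity: with a very small
# width the kernel matrix degenerates toward the identity (every sample similar
# only to itself), and with a very large width toward the all-ones matrix
# (every sample equally similar to every other). Neither carries class
# structure, so discriminant analysis fails at both extremes. Synthetic data.
def gaussian_kernel_matrix(X, sigma):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

K_small = gaussian_kernel_matrix(X, sigma=1e-3)   # -> identity-like
K_large = gaussian_kernel_matrix(X, sigma=1e3)    # -> all-ones-like
```

Any principled selection method, including the reconstruction-error criterion proposed in the record, is in effect searching for a scale between these two extremes.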

  3. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    NASA Astrophysics Data System (ADS)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier in translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image of a sharp kernel to that of a standard kernel, and evaluates its impact on reducing the variability of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm thickness, and two reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting CT images of the sharp kernel to the standard kernel, with a criterion of minimizing the mean squared error between the input and target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for the B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f was reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique to CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI when patient CT scans were performed with different kernels.

  4. Metabolic network prediction through pairwise rational kernels.

    PubMed

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. 
Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy values are improved while maintaining lower construction and execution times. The power of using kernels is that almost any sort of data can be represented using kernels, so completely disparate types of data can be combined to add power to kernel-based machine learning methods. When we compared our proposal using PRKs with other similar kernels, the execution times were decreased with no compromise of accuracy. We also showed that by combining PRKs with other kernels that include evolutionary information, the accuracy can also be improved. As our proposal can use any type of sequence data, genes do not need to be properly annotated, avoiding error accumulation from incorrect previous annotations.
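The PRKs themselves are built from weighted finite-state transducers; as a simpler stand-in, the classical tensor-product pairwise kernel (Ben-Hur and Noble) shows how any similarity on single entities lifts to a similarity on pairs of entities, which is the structure pairwise SVMs consume:

```python
import numpy as np

# Hedged sketch: the classical tensor-product pairwise kernel, shown as a
# simpler stand-in for the transducer-based PRKs described above. A base
# kernel k on single entities lifts to a kernel on unordered pairs:
#   K((a, b), (c, d)) = k(a, c) k(b, d) + k(a, d) k(b, c)
# The symmetrized sum makes K invariant to the order within each pair.
def pairwise_kernel(k, pair1, pair2):
    (a, b), (c, d) = pair1, pair2
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def rbf(x, y, gamma=1.0):
    """Assumed base kernel on feature vectors (stand-in for a sequence kernel)."""
    return float(np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

# Order within a pair does not change the value:
v = pairwise_kernel(rbf, ([0.0], [1.0]), ([0.2], [0.9]))
w = pairwise_kernel(rbf, ([1.0], [0.0]), ([0.2], [0.9]))
```

In the metabolic-network setting, each pair would be two gene or enzyme sequences and the base kernel a sequence similarity; the PRK construction replaces this explicit product with transducer compositions for speed.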

  5. Major Depression Detection from EEG Signals Using Kernel Eigen-Filter-Bank Common Spatial Patterns.

    PubMed

    Liao, Shih-Cheng; Wu, Chien-Te; Huang, Hao-Chuan; Cheng, Wei-Teng; Liu, Yi-Hung

    2017-06-14

    Major depressive disorder (MDD) has become a leading contributor to the global burden of disease; however, there are currently no reliable biological markers or physiological measurements for efficiently and effectively dissecting the heterogeneity of MDD. Here we propose a novel method based on scalp electroencephalography (EEG) signals and a robust spectral-spatial EEG feature extractor called kernel eigen-filter-bank common spatial pattern (KEFB-CSP). The KEFB-CSP first filters the multi-channel raw EEG signals into a set of frequency sub-bands covering the range from theta to gamma bands, then spatially transforms the EEG signals of each sub-band from the original sensor space to a new space where the new signals (i.e., CSPs) are optimal for the classification between MDD and healthy controls, and finally applies the kernel principal component analysis (kernel PCA) to transform the vector containing the CSPs from all frequency sub-bands to a lower-dimensional feature vector called KEFB-CSP. Twelve patients with MDD and twelve healthy controls participated in this study, and from each participant we collected 54 resting-state EEGs of 6 s length (5 min and 24 s in total). Our results show that the proposed KEFB-CSP outperforms other EEG features including the powers of EEG frequency bands, and fractal dimension, which had been widely applied in previous EEG-based depression detection studies. The results also reveal that the 8 electrodes from the temporal areas gave higher accuracies than other scalp areas. The KEFB-CSP was able to achieve an average EEG classification accuracy of 81.23% in single-trial analysis when only the 8-electrode EEGs of the temporal area and a support vector machine (SVM) classifier were used. We also designed a voting-based leave-one-participant-out procedure to test the participant-independent individual classification accuracy. 
The voting-based results show that a mean classification accuracy of about 80% can be achieved by the KEFB-CSP feature and the SVM classifier with only several trials, and this level of accuracy becomes stable as more trials (i.e., 7 or more trials) are used. These findings therefore suggest that the proposed method has great potential for developing an efficient (requiring only a few 6-s EEG signals from the 8 electrodes over the temporal areas) and effective (~80% classification accuracy) EEG-based brain-computer interface (BCI) system which may, in the future, help psychiatrists provide individualized and effective treatments for MDD patients.
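The CSP step at the core of the KEFB-CSP feature can be sketched with the standard whitening construction: after whitening the composite covariance, one rotation simultaneously diagonalizes both class covariances, so filtered variance is maximal for one class and minimal for the other. The toy covariances below are synthetic; the full pipeline additionally applies a filter bank and kernel PCA, which are omitted here:

```python
import numpy as np

# Hedged sketch of common spatial patterns (CSP). Given per-class channel
# covariances cov_a and cov_b, whiten their sum, then eigendecompose the
# whitened class-a covariance; the resulting filters W satisfy
#   W.T (cov_a + cov_b) W = I   and   W.T cov_a W = diag(mu),
# so each filter's variance ratio between classes is extremal.
def csp_filters(cov_a, cov_b):
    vals, U = np.linalg.eigh(cov_a + cov_b)   # composite covariance
    P = U / np.sqrt(vals)                     # whitening transform (columns)
    S = P.T @ cov_a @ P                       # class-a covariance, whitened
    _, V = np.linalg.eigh(S)                  # eigenvalues sorted ascending
    return P @ V                              # first column favors class b,
                                              # last column favors class a

# Synthetic 4-channel signals: class A is strong on channel 0, class B on 3.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 500)) * np.array([[3.0], [1.0], [1.0], [1.0]])
B = rng.normal(size=(4, 500)) * np.array([[1.0], [1.0], [1.0], [3.0]])
W = csp_filters(np.cov(A), np.cov(B))
```

Log-variances of the filtered signals are the features that, in the record's pipeline, are computed per frequency sub-band and then compressed by kernel PCA.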

  6. Spherical integral transforms of second-order gravitational tensor components onto third-order gravitational tensor components

    NASA Astrophysics Data System (ADS)

    Šprlák, Michal; Novák, Pavel

    2017-02-01

New spherical integral formulas between components of the second- and third-order gravitational tensors are formulated in this article. First, we review the nomenclature and basic properties of the second- and third-order gravitational tensors. Initial points of mathematical derivations, i.e., the second- and third-order differential operators defined in the spherical local North-oriented reference frame and the analytical solutions of the gradiometric boundary-value problem, are also summarized. Secondly, we apply the third-order differential operators to the analytical solutions of the gradiometric boundary-value problem, which gives 30 new integral formulas transforming (1) vertical-vertical, (2) vertical-horizontal and (3) horizontal-horizontal second-order gravitational tensor components onto their third-order counterparts. Using spherical polar coordinates, the related sub-integral kernels can efficiently be decomposed into azimuthal and isotropic parts. Both spectral and closed forms of the isotropic kernels are provided and their limits are investigated. Thirdly, numerical experiments are performed to test the consistency of the new integral transforms and to investigate properties of the sub-integral kernels. The new mathematical apparatus is valid for any harmonic potential field and may be exploited, e.g., when gravitational/magnetic second- and third-order tensor components become available in the future. The new integral formulas also extend the well-known Meissl diagram and enrich the theoretical apparatus of geodesy.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Lin, E-mail: zhanglincsu@163.com; Liu Hengsan, E-mail: lhsj63@sohu.com; He Xinbo, E-mail: xb_he@163.com

The characteristics of rapidly solidified FGH96 superalloy powder and the thermal evolution behavior of carbides and γ′ precipitates within powder particles were investigated. It was observed that the reduction of powder size and the increase of cooling rate transformed the solidification morphologies of atomized powder from mainly dendritic to cellular structure. The secondary dendritic spacing was measured to be 1.02-2.55 μm and the corresponding cooling rates were estimated to be in the range of 1.4 × 10⁴ to 4.7 × 10⁵ K·s⁻¹. An increase in the annealing temperature caused the phase transformation of carbides to evolve from non-equilibrium MC′ carbides to an intermediate transition stage of M₂₃C₆ carbides, and finally to thermodynamically stable MC carbides. The superfine γ′ precipitates were formed at the dendritic boundaries of the rapidly solidified superalloy powder. The coalescence, growth, and homogenization of γ′ precipitates occurred with increasing annealing temperature. With the cooling rate decreasing from 650 °C·K⁻¹ to 5 °C·K⁻¹, the morphological development of γ′ precipitates proceeded from spheroidal to cuboidal and finally to solid-state dendrites. Meanwhile, a shift from dendritic morphology to recrystallized structure was observed between 900 °C and 1050 °C. Moreover, the evolution of carbides and γ′ precipitates was accelerated by the formation of new grain boundaries, which provide a fast diffusion path for atomic elements. Highlights: • Microstructural characteristics of FGH96 superalloy powder were investigated. • The relation between microstructure, particle size, and cooling rate was studied. • Thermal evolution behavior of γ′ and carbides in loose FGH96 powder was studied.

  8. Differential metabolome analysis of field-grown maize kernels in response to drought stress

    USDA-ARS?s Scientific Manuscript database

    Drought stress constrains maize kernel development and can exacerbate aflatoxin contamination. In order to identify drought responsive metabolites and explore pathways involved in kernel responses, a metabolomics analysis was conducted on kernels from a drought tolerant line, Lo964, and a sensitive ...

  9. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  10. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  11. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  12. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  13. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  14. Performance Characteristics of a Kernel-Space Packet Capture Module

    DTIC Science & Technology

    2010-03-01

Defense, or the United States Government. AFIT/GCO/ENG/10-03 PERFORMANCE CHARACTERISTICS OF A KERNEL-SPACE PACKET CAPTURE MODULE THESIS Presented to the... 3.1.2.3 Prototype. The proof of concept for this research is the design, development, and comparative performance analysis of a kernel-level N2d capture... changes to kernel code 5. Can be used for both user-space and kernel-space capture applications in order to control comparative performance analysis to

  15. High-throughput method for ear phenotyping and kernel weight estimation in maize using ear digital imaging.

    PubMed

    Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M

    2018-01-01

Grain yield, ear and kernel attributes can assist in understanding the performance of maize plants under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters are, however, still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode on ImageJ, an open-source software. Kernel weight was estimated using the total kernel number derived from the number of kernels visible on the image and the average kernel size. Data showed good agreement in terms of accuracy and precision between ground-truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that for measured grain weight. Limitations of the method for kernel weight estimation are discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic bases of grain yield determinants.
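The weight-estimation idea, scaling a visible kernel count to a total count and predicting mass from average kernel size, can be sketched as below; every coefficient here (visibility factor, size-to-mass slope) is a hypothetical placeholder that would be fitted to ground-truth measurements, not a value from the study:

```python
# Hedged sketch of image-based kernel weight estimation. A photo shows only
# part of the ear, so the visible kernel count is scaled by an assumed
# visibility factor; per-kernel mass is predicted from average projected
# kernel area through an assumed linear calibration.
def estimate_kernel_weight(visible_kernels: int, avg_kernel_area_mm2: float,
                           visibility_factor: float = 2.0,
                           grams_per_mm2: float = 0.0045) -> float:
    """Estimated total grain weight (g) for one ear."""
    total_kernels = visible_kernels * visibility_factor   # kernels hidden from view
    per_kernel_g = avg_kernel_area_mm2 * grams_per_mm2    # size -> mass calibration
    return total_kernels * per_kernel_g

# Illustrative ear: 250 visible kernels, average projected area 60 mm^2.
w = estimate_kernel_weight(visible_kernels=250, avg_kernel_area_mm2=60.0)
```

In practice both assumed coefficients would be regressed against hand-measured ears, which is where the reported accuracy and heritability figures come from.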

  16. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel that, for a given reduced number of particles, analytically eliminates the error between the kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive the kernel width suffer significantly from the neglect of higher-order moments. Simulations with a kernel width given by least-squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least-squares fixed kernel size.
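
    The core idea of replacing Dirac delta particles with Gaussian kernels can be sketched as a kernel density reconstruction of the concentration field. The domain, particle positions, and kernel width below are illustrative; the record's only guidance is that the width stay below roughly 12% of the domain size:

```python
import numpy as np

# Hedged sketch: reconstructing a 1-D concentration field from Lagrangian
# particles with Gaussian kernels of fixed width h, instead of Dirac deltas.
# All values below are illustrative, not from the paper.

def concentration(x_grid, particles, mass, h):
    """Kernel density estimate of concentration on a 1-D grid.

    h -- fixed Gaussian kernel width (standard deviation)
    """
    d = x_grid[:, None] - particles[None, :]
    w = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return mass * w.sum(axis=1)

L = 1.0                                    # domain size
rng = np.random.default_rng(0)
particles = rng.uniform(0.0, L, size=200)  # particle positions
grid = np.linspace(0.0, L, 101)
c = concentration(grid, particles, mass=1.0 / 200, h=0.05 * L)  # h = 5% of L
# Total mass is conserved up to kernel truncation at the boundaries:
print(c.sum() * (grid[1] - grid[0]))
```

    With Dirac particles the same field would be a sum of spikes; the smooth kernel representation is what allows far fewer particles to emulate imperfect mixing.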

  17. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting kernel eigenvectors by importance in terms of entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
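
    The KECA idea the brief builds on, ranking kernel eigenvectors by entropy contribution rather than by eigenvalue, can be sketched with an RBF kernel. The quadratic Renyi entropy estimate decomposes as V = (1/N^2) 1^T K 1 = (1/N^2) sum_i lambda_i (1^T e_i)^2, so each axis contributes one term. The data and length-scale below are illustrative:

```python
import numpy as np

# Hedged sketch of KECA's entropy-based ranking of kernel eigenvectors.
# Kernel PCA would sort by eigenvalue (variance); KECA sorts by each
# eigenvector's contribution to the Renyi entropy estimate
# V = (1/N^2) * 1^T K 1 = (1/N^2) * sum_i lambda_i * (1^T e_i)^2.

def keca_ranking(X, sigma):
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))           # RBF kernel matrix
    lam, E = np.linalg.eigh(K)                     # eigh: ascending eigenvalues
    contrib = lam * (E.sum(axis=0) ** 2) / n ** 2  # entropy term per eigenvector
    order = np.argsort(contrib)[::-1]              # rank by entropy, not variance
    return contrib[order], order

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
contrib, order = keca_ranking(X, sigma=1.0)
print(contrib[:3])
```

    The entropy ranking need not coincide with the eigenvalue ranking, which is exactly the distinction between KECA and kernel PCA; OKECA additionally rotates the decomposition to compact the entropy into fewer features.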

  18. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-29

    Statistical properties of photon count maps have recently proven to be a new tool for studying the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV, with an index below the break between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of $2.2_{-0.3}^{+0.7}$ in the energy band between 50 and 171 GeV. Upper limits on further possible breaks, as well as the angular power of unresolved sources, are derived. We find that point-source populations probed by this method can explain $83_{-13}^{+7}$% ($81_{-19}^{+52}$%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
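
    The broken power law used to describe dN/dS can be written down directly: an index near 1.95-2.0 below the break flux and a steeper slope above it, with a single power law adequate at the highest energies. The break flux, normalization, and above-break index below are illustrative placeholders, not fitted values from the analysis:

```python
import numpy as np

# Hedged sketch of a broken power-law source-count distribution dN/dS.
# S_b, A, and n1 are hypothetical; n2 ~ 1.97 reflects the quoted range
# (1.95-2.0) for the index below the break.

def dn_ds(S, S_b=1e-9, n1=2.5, n2=1.97, A=1.0):
    """Broken power law: slope -n1 above the break flux S_b, -n2 below.

    Continuous at S = S_b, where both branches equal A.
    """
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b,
                    A * (S / S_b) ** (-n1),
                    A * (S / S_b) ** (-n2))

S = np.logspace(-11, -7, 5)  # fluxes spanning the break
print(dn_ds(S))
```

    Integrating S * dN/dS over the resolved-flux range is what yields the fraction of the extragalactic background attributable to point sources.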

  19. Statistical measurement of the gamma-ray source-count distribution as a function of energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza

    Statistical properties of photon count maps have recently proven to be a new tool for studying the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV, with an index below the break between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of $2.2_{-0.3}^{+0.7}$ in the energy band between 50 and 171 GeV. Upper limits on further possible breaks, as well as the angular power of unresolved sources, are derived. We find that point-source populations probed by this method can explain $83_{-13}^{+7}$% ($81_{-19}^{+52}$%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.

  20. Brain tumor image segmentation using kernel dictionary learning.

    PubMed

    Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H

    2015-08-01

    Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image-processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The preliminary performance obtained is competitive with the state of the art. The discriminative kernel DL approach is seen to reduce the computational burden without much sacrifice in performance.
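
    The kernel trick that makes kernel DL tractable can be illustrated with the coding step: approximating phi(x) by a combination of dictionary atoms in feature space requires only kernel evaluations, never the feature map itself. The sketch below omits the sparsity constraint (the K-SVD part) and substitutes a ridge term, so it shows the flavor of the method, not the paper's algorithm; all data and parameters are illustrative:

```python
import numpy as np

# Hedged sketch of the coding step in kernel dictionary learning.
# min_a ||phi(x) - Phi(D) a||^2 expands to k(x,x) - 2 a.k_Dx + a.K_DD.a,
# so the solution needs only the kernel matrices K_DD and k_Dx.
# Sparsity is replaced by a small ridge term for brevity.

def rbf(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_code(x, D, gamma=0.5, ridge=1e-6):
    """Coefficients a minimizing ||phi(x) - Phi(D) a||^2 + ridge * ||a||^2."""
    K_DD = rbf(D, D, gamma)
    k_Dx = rbf(D, x[None, :], gamma)[:, 0]
    return np.linalg.solve(K_DD + ridge * np.eye(len(D)), k_Dx)

rng = np.random.default_rng(2)
D = rng.normal(size=(10, 4))  # 10 dictionary atoms in input space
x = rng.normal(size=4)        # sample to encode
a = kernel_code(x, D)
# Reconstruction error in feature space, computed purely from kernels:
err = (rbf(x[None], x[None])[0, 0]
       - 2 * a @ rbf(D, x[None])[:, 0]
       + a @ rbf(D, D) @ a)
print(err)
```

    A K-SVD-type algorithm would alternate this coding step (with an explicit sparsity constraint) against updates of the dictionary atoms themselves.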
