7 CFR 51.2113 - Size requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of range in count of whole almond kernels per ounce or in terms of minimum, or minimum and maximum diameter. When a range in count is specified, the whole kernels shall be fairly uniform in size, and the average count per ounce shall be within the range specified. Doubles and broken kernels shall not be used...
Discrete element method as an approach to model the wheat milling process
USDA-ARS?s Scientific Manuscript database
It is a well-known phenomenon that break-release, particle size, and size distribution of wheat milling are functions of machine operational parameters and grain properties. Due to the non-uniformity of characteristics and properties of wheat kernels, the kernel physical and mechanical properties af...
NASA Astrophysics Data System (ADS)
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki
2017-02-01
Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a trade-off between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). The method forms superpixels by clustering image pixels in a six-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels were clustered based on their spatial proximity and optical-feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and retinal pigment epithelium. Hence, a superpixel can be utilized as a local-statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis: since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki
2017-01-01
Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating flexible kernels for local statistics. A superpixel is a cluster of image pixels that is formed by the pixels' spatial and signal value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features. Hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and the optimization methods is evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to preserve tissue structures well, such as layer structures, sclera, vessels, and retinal pigment epithelium. Hence, they are more suitable as local statistics kernels than conventional uniform rectangular kernels. PMID:29082073
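As a rough illustration of the six-dimensional clustering described above, the sketch below runs a SLIC-style k-means over pixel position plus four optical feature channels. The feature array, spatial weight, grid seeding, and iteration count are illustrative assumptions, not the authors' exact algorithm or tuning.

```python
# Minimal sketch of 6-D superpixelization: SLIC-style k-means over
# (z, x) position plus four optical features (scattering, OCT-A,
# birefringence, DOPU stand-ins).
import numpy as np

def superpixels_6d(features, n_seg=100, spatial_weight=1.0, n_iter=10):
    """features: (H, W, 4) array of per-pixel optical features."""
    H, W, _ = features.shape
    zz, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Stack position and optical features into one 6-D vector per pixel.
    X = np.concatenate(
        [spatial_weight * zz[..., None], spatial_weight * xx[..., None], features],
        axis=-1).reshape(-1, 6)
    # Initialize cluster centers on a regular grid (as SLIC does).
    step = max(1, int(np.sqrt(H * W / n_seg)))
    seeds = X.reshape(H, W, 6)[step // 2::step, step // 2::step].reshape(-1, 6).copy()
    for _ in range(n_iter):  # plain Lloyd iterations in the 6-D space
        d = ((X[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(seeds)):
            m = labels == k
            if m.any():
                seeds[k] = X[m].mean(0)
    return labels.reshape(H, W)

img = np.random.rand(64, 64, 4)           # stand-in for JM-OCT features
labels = superpixels_6d(img, n_seg=64)
print(labels.shape, labels.max() + 1)     # per-pixel superpixel index
```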
The impact of volunteer rice infestation on rice yield and grain quality
USDA-ARS?s Scientific Manuscript database
Volunteer rice (Oryza sativa L.) is a crop stand which emerges from shattered seeds of the previous crop. When present at sufficiently high levels, it can potentially affect the commercial market value of cultivated rice products, especially if it produces kernels with quality, uniformity, or size ...
Regularization techniques on least squares non-uniform fast Fourier transform.
Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena
2013-05-01
Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while preserving image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT: truncated singular value decomposition (TSVD), Tikhonov regularization, and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the interpolator size above which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator sizes, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
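To make the regularization comparison concrete, here is a minimal sketch of TSVD and Tikhonov pseudoinverse solves on a synthetic ill-conditioned system; the actual LS_NUFFT system matrix comes from the gridding geometry and is only mimicked here by a near-singular random matrix.

```python
# Hedged sketch: two regularized pseudoinverses for an ill-conditioned
# least-squares problem A w = b, as used to compute interpolator weights.
import numpy as np

def tsvd_solve(A, b, k):
    # keep only the k largest singular values
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

def tikhonov_solve(A, b, lam):
    # damp all singular values via (A^T A + lam I)^-1 A^T b
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))
A[:, -1] = A[:, 0] + 1e-8          # make the system nearly singular
b = rng.standard_normal(40)
print(np.linalg.norm(A @ tsvd_solve(A, b, 39) - b))
print(np.linalg.norm(A @ tikhonov_solve(A, b, 1e-6) - b))
```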
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
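A minimal sketch of the continuization step on a toy discrete score distribution follows. It shows plain Gaussian-kernel smoothing of the CDF; the full KE method additionally rescales scores so that the continuized distribution preserves the discrete mean and variance, which is omitted here for brevity.

```python
# Sketch: continuize a discrete score distribution with a Gaussian kernel.
import numpy as np
from math import erf, sqrt

scores = np.arange(0, 11)                                 # test scores 0..10
probs = np.random.default_rng(1).dirichlet(np.ones(11))   # score probabilities
h = 0.6                                                   # bandwidth

def smoothed_cdf(x, scores, probs, h):
    # mixture of Gaussian CDFs, one per discrete score point
    z = (x - scores) / h
    return float(np.sum(probs * 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))))

print(smoothed_cdf(5.0, scores, probs, h))   # continuous CDF at x = 5
```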
Xu, Daguang; Huang, Yong; Kang, Jin U
2014-06-16
We implemented a graphics processing unit (GPU) accelerated compressive sensing (CS) reconstruction for non-uniform-in-k-space spectral domain optical coherence tomography (SD OCT). The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm, with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has performance comparable to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows lower background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture with three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with a frame size of 2048 (axial) × 1000 (lateral).
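The sketch below illustrates the gridding-based NUFFT idea in one dimension, using a Gaussian convolution kernel as one of the two kernel choices mentioned above; the kernel width, oversampling ratio, and the omitted deapodization step are simplifications of what the paper tunes.

```python
# Toy 1-D gridding NUFFT: convolve non-uniform k-space samples onto an
# oversampled Cartesian grid with a Gaussian kernel, then inverse FFT.
import numpy as np

def grid_nufft_1d(k_pos, k_data, n, osr=2, width=4, sigma=0.5):
    ng = n * osr
    grid = np.zeros(ng, dtype=complex)
    centers = (k_pos + 0.5) * ng          # map k in [-0.5, 0.5) to grid index
    for c, d in zip(centers, k_data):
        lo = int(np.floor(c)) - width // 2
        for i in range(lo, lo + width + 1):   # spread onto nearby grid cells
            grid[i % ng] += d * np.exp(-0.5 * ((i - c) / sigma) ** 2)
    img = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(grid)))
    return img[(ng - n) // 2:(ng + n) // 2]   # crop the oversampled FOV

k = np.sort(np.random.default_rng(2).uniform(-0.5, 0.5, 256))
data = np.exp(-2j * np.pi * k * 10)           # single off-center point source
print(np.abs(grid_nufft_1d(k, data, 128)).argmax())
```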
NASA Astrophysics Data System (ADS)
Challamel, Noël
2018-04-01
The static and dynamic behaviour of a nonlocal bar of finite length is studied in this paper. The nonlocal integral models considered in this paper are strain-based and relative displacement-based nonlocal models; the latter one is also labelled as a peridynamic model. For infinite media, and for sufficiently smooth displacement fields, both integral nonlocal models can be equivalent, assuming some kernel correspondence rules. For infinite media (or finite media with extended reflection rules), it is also shown that Eringen's differential model can be reformulated into a consistent strain-based integral nonlocal model with exponential kernel, or into a relative displacement-based integral nonlocal model with a modified exponential kernel. A finite bar in uniform tension is considered as a paradigmatic static case. The strain-based nonlocal behaviour of this bar in tension is analyzed for different kernels available in the literature. It is shown that the kernel has to fulfil some normalization and end compatibility conditions in order to preserve the uniform strain field associated with this homogeneous stress state. Such a kernel can be built by combining a local and a nonlocal strain measure with compatible boundary conditions, or by extending the domain outside its finite size while preserving some kinematic compatibility conditions. The same results are shown for the nonlocal peridynamic bar where a homogeneous strain field is also analytically obtained in the elastic bar for consistent compatible kinematic boundary conditions at the vicinity of the end conditions. The results are extended to the vibration of a fixed-fixed finite bar where the natural frequencies are calculated for both the strain-based and the peridynamic models.
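For reference, the exponential kernel usually invoked in this context, and its infinite-domain normalization, can be written as below; that this specific kernel family is the one in play is an assumption consistent with the abstract's discussion.

```latex
% Exponential (Eringen-type) kernel with internal length l:
\[
K(x,\xi) = \frac{1}{2l}\, e^{-|x-\xi|/l},
\qquad
\int_{-\infty}^{\infty} K(x,\xi)\, \mathrm{d}\xi = 1 .
\]
% On a finite bar [0, L], the normalization and end-compatibility conditions
% discussed above amount to restoring \int_0^L K(x,\xi) d\xi = 1 for every x,
% so that a homogeneous stress state maps to a uniform strain field.
```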
Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang
2016-01-01
Kernel size is an important component of grain yield in maize breeding programs. To extend our understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width, and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred-kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valiollahzadeh, S; Clark, J; Mawlawi, O
2014-06-15
Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred and noisy PET image to the desired image is described as a linear transform y = Hm + n, where m is the desired image, H is a blur kernel, n is Poisson noise, and y is the blurred image. The approach we follow to recover m involves the sparse representation of y over a learned dictionary, since the image has many repeated patterns, edges, textures, and smooth regions. The recovery is based on an optimization of a cost function having four major terms: an adaptive dictionary learning term, a sparsity term, a regularization term, and an MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ = 5.4 mm; uniform 5×5, σ = 2.9 mm) with additive Poisson noise (blurred). Image recovery was performed once when the kernel type was included in the model optimization and once with the model blinded to kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating PSNR (peak SNR) and normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (uniform) was 28.73 (25.1) and 25.18 (23.4) for DL and PIDSPLIT+, respectively. For blinded deblurring the PSNRs were 25.32 and 22.86 for DL and PIDSPLIT+, respectively. NADPI between baseline and DL, and between baseline and blurred, for the Gaussian kernel was 2.5 and 10.8, respectively. Conclusion: PET image deblurring using dictionary learning appears to be a good approach to restoring image resolution in the presence of Poisson noise. GE Health Care.
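A plausible concrete form of the four-term cost function described in the Methods is written out below; the weights, patch-extraction operators, and regularizer are assumptions for illustration, not taken from the paper.

```latex
% Hedged form of the four-term objective (\lambda, \mu, \nu, patch
% extractors R_i, and regularizer \mathcal{R} are assumptions):
\[
\min_{m,\,D,\,\{\alpha_i\}}\;
\sum_i \lVert R_i m - D\alpha_i \rVert_2^2
\;+\; \lambda \sum_i \lVert \alpha_i \rVert_0
\;+\; \mu\, \mathcal{R}(m)
\;+\; \nu \sum_j \big[(Hm)_j - y_j \log (Hm)_j\big],
\]
% where the last sum is the negative Poisson log-likelihood that MLEM
% minimizes, and variable splitting decouples the terms during optimization.
```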
Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET
NASA Astrophysics Data System (ADS)
Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan
2013-06-01
TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing resolution kernel. Furthermore, although the timing offset errors are often pre-corrected, this correction becomes less accurate over time because, especially in older scanners, the timing offsets are often calibrated only at installation, as the procedure is time-consuming. In this study, we investigate and compare the effects on image quality of local mismatch of timing resolution, when a uniform kernel is applied to systems with local variations in timing resolution, and of uncorrected timing offset errors. A ring-like phantom was acquired on a Philips Gemini TF scanner and timing histograms were obtained from coincidence events to measure timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations; timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. CRC vs. noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction. On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction of CRC of up to 15% and an increase in the measured timing resolution kernel of up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate for uncorrected offset errors.
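To give the timing numbers a spatial meaning, the snippet below converts timing resolution and offset to distance along the line of response via Δx = cΔt/2; the specific values echo the 500 ps median resolution and 350 ps maximum offset simulated above.

```python
# Quick numeric check of what a TOF timing kernel means spatially: a timing
# uncertainty dt maps to a position uncertainty c*dt/2 along the LOR, and a
# timing offset shifts the kernel center by the same factor.
C = 299792458.0  # speed of light, m/s

def tof_mm(dt_ps):
    return C * dt_ps * 1e-12 / 2 * 1e3   # picoseconds -> millimeters

print(tof_mm(500))   # ~75 mm FWHM for 500 ps timing resolution
print(tof_mm(350))   # ~52 mm kernel shift for an uncorrected 350 ps offset
```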
Improved scatter correction using adaptive scatter kernel superposition
NASA Astrophysics Data System (ADS)
Sun, M.; Star-Lack, J. M.
2010-11-01
Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
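As a minimal sketch of the stationary (non-adaptive) SKS step, the code below estimates scatter by a Fourier-space convolution with a broad Gaussian kernel and subtracts it; the kernel shape and amplitude are illustrative, and the thickness-adaptive behavior of ASKS/fASKS is not modeled.

```python
# Sketch of stationary scatter kernel superposition: scatter estimate =
# projection convolved with a broad kernel (done in Fourier space), then
# subtracted from the projection.
import numpy as np

def sks_correct(proj, kernel_sigma=20.0, amplitude=0.15):
    ny, nx = proj.shape
    # build the kernel directly in FFT ordering (centered at index 0)
    y = np.fft.fftfreq(ny)[:, None] * ny
    x = np.fft.fftfreq(nx)[None, :] * nx
    kern = np.exp(-(x**2 + y**2) / (2 * kernel_sigma**2))
    kern *= amplitude / kern.sum()        # kernel integrates to `amplitude`
    scatter = np.real(np.fft.ifft2(np.fft.fft2(proj) * np.fft.fft2(kern)))
    return proj - scatter

proj = np.ones((128, 128))          # stand-in for a flat projection
print(sks_correct(proj).mean())     # ~0.85: a 15% broad scatter term removed
```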
7 CFR 51.1447 - Fairly uniform in color.
Code of Federal Regulations, 2014 CFR
2014-01-01
§ 51.1447 Fairly uniform in color. Fairly uniform in color means that 90 percent or more of the kernels in the lot have skin color within the range of one or two color classifications.
7 CFR 51.1447 - Fairly uniform in color.
Code of Federal Regulations, 2013 CFR
2013-01-01
§ 51.1447 Fairly uniform in color. Fairly uniform in color means that 90 percent or more of the kernels in the lot have skin color within the range of one or two color classifications.
Fine-mapping of qGW4.05, a major QTL for kernel weight and size in maize.
Chen, Lin; Li, Yong-xiang; Li, Chunhui; Wu, Xun; Qin, Weiwei; Li, Xin; Jiao, Fuchao; Zhang, Xiaojing; Zhang, Dengfeng; Shi, Yunsu; Song, Yanchun; Li, Yu; Wang, Tianyu
2016-04-12
Kernel weight and size are important components of grain yield in cereals. Although some information is available concerning the map positions of quantitative trait loci (QTL) for kernel weight and size in maize, little is known about the molecular mechanisms of these QTLs. qGW4.05 is a major QTL that is associated with kernel weight and size in maize. We combined linkage analysis and association mapping to fine-map and identify candidate gene(s) at qGW4.05. QTL qGW4.05 was fine-mapped to a 279.6-kb interval in a segregating population derived from a cross of Huangzaosi with LV28. By combining the results of regional association mapping and linkage analysis, we identified GRMZM2G039934 as a candidate gene responsible for qGW4.05. Candidate gene-based association mapping was conducted using a panel of 184 inbred lines with variable kernel weights and kernel sizes. Six polymorphic sites in the gene GRMZM2G039934 were significantly associated with kernel weight and kernel size. The results of linkage analysis and association mapping revealed that GRMZM2G039934 is the most likely candidate gene for qGW4.05. These results will improve our understanding of the genetic architecture and molecular mechanisms underlying kernel development in maize.
The Conserved and Unique Genetic Architecture of Kernel Size and Weight in Maize and Rice.
Liu, Jie; Huang, Juan; Guo, Huan; Lan, Liu; Wang, Hongze; Xu, Yuancheng; Yang, Xiaohong; Li, Wenqiang; Tong, Hao; Xiao, Yingjie; Pan, Qingchun; Qiao, Feng; Raihan, Mohammad Sharif; Liu, Haijun; Zhang, Xuehai; Yang, Ning; Wang, Xiaqing; Deng, Min; Jin, Minliang; Zhao, Lijun; Luo, Xin; Zhou, Yang; Li, Xiang; Zhan, Wei; Liu, Nannan; Wang, Hong; Chen, Gengshen; Li, Qing; Yan, Jianbing
2017-10-01
Maize (Zea mays) is a major staple crop. Maize kernel size and weight are important contributors to its yield. Here, we measured kernel length, kernel width, kernel thickness, hundred kernel weight, and kernel test weight in 10 recombinant inbred line populations and dissected their genetic architecture using three statistical models. In total, 729 quantitative trait loci (QTLs) were identified, many of which were identified in all three models, including 22 major QTLs that each can explain more than 10% of phenotypic variation. To provide candidate genes for these QTLs, we identified 30 maize genes that are orthologs of 18 rice (Oryza sativa) genes reported to affect rice seed size or weight. Interestingly, 24 of these 30 genes are located in the identified QTLs or within 1 Mb of the significant single-nucleotide polymorphisms. We further confirmed the effects of five genes on maize kernel size/weight in an independent association mapping panel with 540 lines by candidate gene association analysis. Lastly, the function of ZmINCW1, a homolog of rice GRAIN INCOMPLETE FILLING1 that affects seed size and weight, was characterized in detail. ZmINCW1 is close to QTL peaks for kernel size/weight (less than 1 Mb) and contains significant single-nucleotide polymorphisms affecting kernel size/weight in the association panel. Overexpression of this gene can rescue the reduced weight of the Arabidopsis (Arabidopsis thaliana) homozygous mutant line in the AtcwINV2 gene (Arabidopsis ortholog of ZmINCW1). These results indicate that the molecular mechanisms affecting seed development are conserved in maize, rice, and possibly Arabidopsis. © 2017 American Society of Plant Biologists. All Rights Reserved.
7 CFR 51.1447 - Fairly uniform in color.
Code of Federal Regulations, 2011 CFR
2011-01-01
§ 51.1447 Fairly uniform in color. Fairly uniform in color means that 90 percent or more of the kernels in the lot have skin color within the range of one or two color classifications.
7 CFR 51.1447 - Fairly uniform in color.
Code of Federal Regulations, 2012 CFR
2012-01-01
§ 51.1447 Fairly uniform in color. Fairly uniform in color means that 90 percent or more of the kernels in the lot have skin color within the range of one or two color classifications.
7 CFR 51.1447 - Fairly uniform in color.
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 51.1447 Fairly uniform in color. Fairly uniform in color means that 90 percent or more of the kernels in the lot have skin color within the range of one or two color classifications.
SOME ENGINEERING PROPERTIES OF SHELLED AND KERNEL TEA (Camellia sinensis) SEEDS.
Altuntas, Ebubekir; Yildiz, Merve
2017-01-01
Camellia sinensis is the source of tea leaves and is now an economic crop grown around the world. Tea seed oil has been used for cooking in China and other Asian countries for more than a thousand years. Tea is the most widely consumed beverage in the world after water. It is mainly produced in Asia and central Africa and exported throughout the world. Some engineering properties (size dimensions, sphericity, volume, bulk and true densities, friction coefficient, colour characteristics, and mechanical behaviour as rupture force) of shelled and kernel tea (Camellia sinensis) seeds were determined in this study. This research was carried out for shelled and kernel tea seeds. The shelled tea seeds used in this study were obtained from the East Black Sea Tea Cooperative Institution in the city of Rize, Turkey. Shelled and kernel tea seeds were characterized as large and small sizes. The average geometric mean diameters of the shelled tea seeds were 15.8 mm (large size) and 10.7 mm (small size), with seed masses of 1.47 g and 0.49 g, respectively; the average geometric mean diameters of the kernel tea seeds were 11.8 mm (large size) and 8 mm (small size), with seed masses of 0.97 g and 0.31 g, respectively. The sphericity, surface area, and volume values were higher for the large size than for the small size in both the shelled and kernel tea samples. The colour intensity (chroma) of the shelled tea seeds was between 59.31 and 64.22 for the large size, while the chroma values of the kernel tea seeds were between 56.04 and 68.34 for the large size. The rupture force values of kernel tea seeds were higher than those of shelled tea seeds for the large size along the X axis, whereas for large shelled tea seeds the rupture force along the X axis was higher than along the Y axis. The static coefficients of friction of shelled and kernel tea seeds, for both the large and small sizes, had higher values on rubber than on the other friction surfaces. Some engineering properties, such as geometric mean diameter, sphericity, volume, bulk and true densities, coefficient of friction, L*, a*, b* colour characteristics, and rupture force of shelled and kernel tea (Camellia sinensis) seeds, will serve in the design of equipment used in postharvest treatments.
7 CFR 51.2284 - Size classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
...: “Halves”, “Pieces and Halves”, “Pieces” or “Small Pieces”. The size of portions of kernels in the lot... consists of 85 percent or more, by weight, half kernels, and the remainder three-fourths half kernels. (See § 51.2285.) (b) Pieces and halves. Lot consists of 20 percent or more, by weight, half kernels, and the...
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
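A minimal sketch of the Cimmino feasibility iteration used here to solve for source strengths follows; the dose matrix, bounds, and the nonnegativity projection are synthetic stand-ins for the clinical geometry.

```python
# Minimal sketch of the Cimmino feasibility iteration: project the current
# weights onto each dose constraint and average the projections.
import numpy as np

def cimmino(A, lower, upper, n_iter=200):
    m, n = A.shape
    w = np.zeros(n)
    row_norm2 = (A * A).sum(1)
    for _ in range(n_iter):
        dose = A @ w
        # residual toward the violated bound (zero if constraint satisfied)
        r = np.where(dose < lower, lower - dose,
             np.where(dose > upper, upper - dose, 0.0))
        w += (A.T @ (r / row_norm2)) / m    # averaged row projections
        w = np.maximum(w, 0.0)              # source strengths stay nonnegative
    return w

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, (50, 8))          # 50 detector points, 8 sources
w = cimmino(A, lower=np.full(50, 1.0), upper=np.full(50, 1.5))
print((A @ w).min(), (A @ w).max())         # doses driven toward [1.0, 1.5]
```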
USDA-ARS?s Scientific Manuscript database
Experiments with Crop Year (CY) 2014 samples from the Uniform Peanut Performance Trials (UPPT) revealed that color and flavor profile development were related to kernel moisture content (MC) during dry roasting. That work was repeated with CY 2015 UPPT samples with additional replication. Raw MC, ...
A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions
NASA Astrophysics Data System (ADS)
Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.
2017-05-01
Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least-squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least-squares fixed kernel size.
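The sketch below shows the basic kernel idea: evaluating a concentration field from Gaussian-kernel particles of fixed width h, with h capped at the roughly 12%-of-domain guideline reported above. Particle positions and masses are synthetic.

```python
# Sketch: concentration field from Gaussian-kernel particles instead of
# Dirac deltas.
import numpy as np

def concentration(x_grid, particles, mass, h):
    # superpose one Gaussian kernel of width h per particle
    z = (x_grid[:, None] - particles[None, :]) / h
    return mass * np.exp(-0.5 * z**2).sum(1) / (h * np.sqrt(2 * np.pi))

L = 1.0                                    # domain size
h = min(0.05, 0.12 * L)                    # keep kernel below ~12% of domain
parts = np.random.default_rng(4).uniform(0, L, 200)
grid = np.linspace(0, L, 101)
c = concentration(grid, parts, mass=1.0 / 200, h=h)
print(c.sum() * (grid[1] - grid[0]))       # total mass ~1 (edge losses aside)
```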
Cui, Fa; Fan, Xiaoli; Chen, Mei; Zhang, Na; Zhao, Chunhua; Zhang, Wei; Han, Jie; Ji, Jun; Zhao, Xueqiang; Yang, Lijuan; Zhao, Zongwu; Tong, Yiping; Wang, Tao; Li, Junming
2016-03-01
QTLs for kernel characteristics and tolerance to N stress were identified, and the functions of ten known genes with regard to these traits were specified. Kernel size and quality characteristics in wheat (Triticum aestivum L.) ultimately determine the end use of the grain and affect its commodity price, both of which are influenced by the application of nitrogen (N) fertilizer. This study characterized quantitative trait loci (QTLs) for kernel size and quality and examined the responses of these traits to low-N stress using a recombinant inbred line population derived from Kenong 9204 × Jing 411. Phenotypic analyses were conducted in five trials that each included low- and high-N treatments. We identified 109 putative additive QTLs for 11 kernel size and quality characteristics and 49 QTLs for tolerance to N stress, 27 and 14 of which were stable across the tested environments, respectively. These QTLs were distributed across all wheat chromosomes except for chromosomes 3A, 4D, 6D, and 7B. Eleven QTL clusters that simultaneously affected kernel size- and quality-related traits were identified. At nine locations, 25 of the 49 QTLs for N deficiency tolerance coincided with the QTLs for kernel characteristics, indicating their genetic independence. The feasibility of indirect selection of a superior genotype for kernel size and quality under high-N conditions in breeding programs designed for a lower input management system are discussed. In addition, we specified the functions of Glu-A1, Glu-B1, Glu-A3, Glu-B3, TaCwi-A1, TaSus2, TaGS2-D1, PPO-D1, Rht-B1, and Ha with regard to kernel characteristics and the sensitivities of these characteristics to N stress. This study provides useful information for the genetic improvement of wheat kernel size, quality, and resistance to N stress.
The site, size, spatial stability, and energetics of an X-ray flare kernel
NASA Technical Reports Server (NTRS)
Petrasso, R.; Gerassimenko, M.; Nolte, J.
1979-01-01
The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise and somewhat during its peak are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona rather than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2 times 10 to the 24th ergs/sec is minimally required.
2011-08-01
flocs within a radius of 2 flocs' centerline would be intercepted by the settling particle. The curvilinear kernel assumes only smaller particles hit... Aerobic Sediment Slurry. Study 4: Modeling the Impact of Flocculation on the Fate of Organic and Inorganic Particles... suspended particles at the beginning of the free settling period... Figure 4.2: Three fOC distribution trends: small, uniform, and size-variable.
USDA-ARS?s Scientific Manuscript database
Optimization of flour yield and quality is important in the milling industry. The objective of this study was to determine the effect of kernel size and mill type on flour yield and end-use quality. A hard red spring wheat composite sample was segregated, based on kernel size, into large, medium, ...
Inferring planetary obliquity using rotational and orbital photometry
NASA Astrophysics Data System (ADS)
Schwartz, J. C.; Sekowski, C.; Haggard, H. M.; Pallé, E.; Cowan, N. B.
2016-03-01
The obliquity of a terrestrial planet is an important clue about its formation and critical to its climate. Previous studies using simulated photometry of Earth show that continuous observations over most of a planet's orbit can be inverted to infer obliquity. However, few studies of more general planets with arbitrary albedo markings have been made and, in particular, a simple theoretical understanding of why it is possible to extract obliquity from light curves is missing. Reflected light seen by a distant observer is the product of a planet's albedo map, its host star's illumination, and the visibility of different regions. It is useful to treat the product of illumination and visibility as the kernel of a convolution. Time-resolved photometry constrains both the albedo map and the kernel, the latter of which sweeps over the planet due to rotational and orbital motion. The kernel's movement distinguishes prograde from retrograde rotation for planets with non-zero obliquity on inclined orbits. We demonstrate that the kernel's longitudinal width and mean latitude are distinct functions of obliquity and axial orientation. Notably, we find that a planet's spin axis affects the kernel - and hence time-resolved photometry - even if this planet is east-west uniform or spinning rapidly, or if it is north-south uniform. We find that perfect knowledge of the kernel at 2-4 orbital phases is usually sufficient to uniquely determine a planet's spin axis. Surprisingly, we predict that east-west albedo contrast is more useful for constraining obliquity than north-south contrast.
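A short sketch of the kernel construction described above follows: at each surface point, the kernel is the product of the illumination and visibility cosines (clipped at zero). The sub-stellar and sub-observer positions used here are an arbitrary illustrative geometry.

```python
# Sketch of the reflected-light kernel: illumination x visibility over a
# latitude-longitude grid, plus the two summary statistics named above
# (longitudinal width proxy and mean latitude).
import numpy as np

lat = np.linspace(-np.pi / 2, np.pi / 2, 91)[:, None]
lon = np.linspace(-np.pi, np.pi, 181)[None, :]

def cos_zenith(lat0, lon0):
    # cosine of the angle between the local normal and direction (lat0, lon0)
    return (np.sin(lat) * np.sin(lat0)
            + np.cos(lat) * np.cos(lat0) * np.cos(lon - lon0))

illum = np.clip(cos_zenith(0.0, 0.3), 0, None)   # sub-stellar point
visib = np.clip(cos_zenith(0.4, 0.0), 0, None)   # sub-observer point
kernel = illum * visib

mean_lat = (kernel * lat).sum() / kernel.sum()
print(kernel.shape, np.degrees(mean_lat))        # kernel grid, mean latitude
```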
Performance analysis and kernel size study of the Lynx real-time operating system
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Gibson, James S.; Fernquist, Alan R.
1993-01-01
This paper analyzes the Lynx real-time operating system (LynxOS), which has been selected as the operating system for the Space Station Freedom Data Management System (DMS). The features of LynxOS are compared to other Unix-based operating system (OS). The tools for measuring the performance of LynxOS, which include a high-speed digital timer/counter board, a device driver program, and an application program, are analyzed. The timings for interrupt response, process creation and deletion, threads, semaphores, shared memory, and signals are measured. The memory size of the DMS Embedded Data Processor (EDP) is limited. Besides, virtual memory is not suitable for real-time applications because page swap timing may not be deterministic. Therefore, the DMS software, including LynxOS, has to fit in the main memory of an EDP. To reduce the LynxOS kernel size, the following steps are taken: analyzing the factors that influence the kernel size; identifying the modules of LynxOS that may not be needed in an EDP; adjusting the system parameters of LynxOS; reconfiguring the device drivers used in the LynxOS; and analyzing the symbol table. The reductions in kernel disk size, kernel memory size and total kernel size reduction from each step mentioned above are listed and analyzed.
Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.
Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D
2016-04-01
Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel-fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points with maximum DM disappearance of 6.9% at 24 h and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in 2 and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between proportion of kernel fraction passing through the 4.75-mm screen and kernel fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
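As an illustration of the proposed kernel-fraction measurement, the snippet below computes GMPS from dry-sieving data with the usual mass-weighted geometric-mean formula (e.g., ASABE-style); the sieve stack mirrors the apertures listed above, while the retained masses and the pan handling are assumptions.

```python
# Sketch: geometric mean particle size (GMPS) from dry-sieving data.
import numpy as np

apertures = np.array([9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59, 0.30])
# mean size of material retained on each sieve: geometric mean of that
# sieve's opening and the next larger one (pan given a nominal lower bound)
mean_size = np.sqrt(apertures[1:] * apertures[:-1])
mass = np.array([2.0, 5.0, 12.0, 20.0, 25.0, 18.0, 10.0, 8.0])  # g retained

gmps = np.exp((mass * np.log(mean_size)).sum() / mass.sum())
print(f"GMPS = {gmps:.2f} mm")
```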
Kumar, Ajay; Mantovani, E E; Seetan, R; Soltani, A; Echeverry-Solarte, M; Jain, S; Simsek, S; Doehlert, D; Alamri, M S; Elias, E M; Kianian, S F; Mergoum, M
2016-03-01
Wheat kernel shape and size have been under selection since early domestication. Kernel morphology is a major consideration in wheat breeding, as it impacts grain yield and quality. A population of 160 recombinant inbred lines (RIL), developed using an elite genotype (ND 705) and a nonadapted genotype (PI 414566), was extensively phenotyped in replicated field trials and genotyped using the Infinium iSelect 90K assay to gain insight into the genetic architecture of kernel shape and size. A high-density genetic map consisting of 10,172 single nucleotide polymorphism (SNP) markers, with an average marker density of 0.39 cM/marker, identified a total of 29 genomic regions associated with six grain shape and size traits; ~80% of these regions were associated with multiple traits. The analyses showed that kernel length (KL) and width (KW) are genetically independent, while a large number (~59%) of the quantitative trait loci (QTL) for kernel shape traits were in common with genomic regions associated with kernel size traits. The most significant QTL was identified on chromosome 4B, and could be an ortholog of a major rice grain size and shape gene. Major and stable loci were also identified on the homeologous regions of group 5 chromosomes, and in known gene regions on chromosomes 6A and 7A. Both parental genotypes contributed equivalent positive QTL alleles, suggesting that the nonadapted germplasm has great potential for enhancing the gene pool for grain shape and size. This study provides new knowledge on the genetic dissection of kernel morphology, with much higher resolution, which may aid further improvement in wheat yield and quality using genomic tools. Copyright © 2016 Crop Science Society of America.
NASA Astrophysics Data System (ADS)
Hunt, R. D.; Silva, G. W. C. M.; Lindemer, T. B.; Anderson, K. K.; Collins, J. L.
2012-08-01
The US Department of Energy continues to use the internal gelation process in its preparation of tristructural isotropic coated fuel particles. The focus of this work is to develop uranium fuel kernels with adequately dispersed silicon carbide (SiC) nanoparticles, high crush strengths, uniform particle diameter, and good sphericity. During irradiation to high burnup, the SiC in the uranium kernels will serve as getters for excess oxygen and help control the oxygen potential in order to minimize the potential for kernel migration. The hardness of SiC required modifications to the gelation system that was used to make uranium kernels. Suitable processing conditions and potential equipment changes were identified so that the SiC could be homogeneously dispersed in gel spheres. Finally, dilute hydrogen rather than argon should be used to sinter the uranium kernels with SiC.
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
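A minimal sketch of a fixed-kernel home range estimate follows: a 2-D Gaussian KDE over relocation points and the area inside the 95% volume contour. It uses scipy's default bandwidth rather than the LSCV smoothing the authors recommend.

```python
# Sketch: fixed-kernel utilization distribution and 95% home range area.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
pts = rng.normal(0, 1, size=(2, 60))            # 60 relocations (x; y rows)
kde = gaussian_kde(pts)                          # default bandwidth, not LSCV

xs = np.linspace(-4, 4, 200)
cell = (xs[1] - xs[0]) ** 2                      # area of one grid cell
gx, gy = np.meshgrid(xs, xs)
dens = kde(np.vstack([gx.ravel(), gy.ravel()]))

# find the density level whose superlevel set encloses 95% of the volume
order = np.sort(dens)[::-1]
cum = np.cumsum(order) / order.sum()
level = order[np.searchsorted(cum, 0.95)]
print("95% home range area:", (dens >= level).sum() * cell)
```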
Searching for efficient Markov chain Monte Carlo proposal kernels
Yang, Ziheng; Rodríguez, Carlos E.
2013-01-01
Markov chain Monte Carlo (MCMC) or the Metropolis–Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
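A sketch of the Bactrian proposal inside a toy Metropolis sampler is given below; the two-humped mixture has mean 0 and variance 1 by construction, and m = 0.95 follows the typical setting reported for such kernels.

```python
# Sketch: Bactrian proposal (mixture of two normals at +/- m) in a toy
# Metropolis run targeting a standard normal.
import numpy as np

def bactrian_step(x, scale, rng, m=0.95):
    sign = rng.choice([-1.0, 1.0])
    z = sign * m + rng.normal(0, np.sqrt(1 - m * m))  # mean 0, variance 1
    return x + scale * z     # symmetric proposal: Hastings ratio is 1

rng = np.random.default_rng(6)
x, chain = 0.0, []
for _ in range(20000):
    y = bactrian_step(x, scale=2.0, rng=rng)
    if np.log(rng.uniform()) < 0.5 * (x * x - y * y):  # N(0,1) log-ratio
        x = y
    chain.append(x)
print(np.mean(chain), np.var(chain))   # ~0 and ~1
```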
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2014 CFR
2014-01-01
§ 51.2559 Size classifications. (a) The size of pistachio kernels may be specified in connection with the grade in accordance with one of the following size classifications. (1) Jumbo Whole Kernels: 80 percent...
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2013 CFR
2013-01-01
§ 51.2559 Size classifications. (a) The size of pistachio kernels may be specified in connection with the grade in accordance with one of the following size classifications. (1) Jumbo Whole Kernels: 80 percent...
A new iterative scheme for solving the discrete Smoluchowski equation
NASA Astrophysics Data System (ADS)
Smith, Alastair J.; Wells, Clive G.; Kraft, Markus
2018-01-01
This paper introduces a new iterative scheme for solving the discrete Smoluchowski equation and explores the numerical convergence properties of the method for a range of kernels admitting analytical solutions, in addition to some more physically realistic kernels typically used in kinetics applications. The solver is extended to spatially dependent problems with non-uniform velocities and its performance investigated in detail.
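For context, one explicit time step of the discrete Smoluchowski equation with a constant coagulation kernel (one of the kernels admitting analytical solutions) can be sketched as follows; the time step and size cutoff are arbitrary, and this is not the paper's iterative scheme itself.

```python
# Sketch: explicit Euler step for the discrete Smoluchowski equation with a
# constant kernel K. n[i] is the number density of clusters of size i+1.
import numpy as np

def smoluchowski_step(n, K, dt):
    N = len(n)
    gain = np.zeros(N)
    for k in range(1, N):                 # size k+1 formed by merging i + j
        for i in range(k):
            gain[k] += 0.5 * K * n[i] * n[k - 1 - i]
    loss = K * n * n.sum()                # loss by merging with anything
    return n + dt * (gain - loss)

n = np.zeros(64)
n[0] = 1.0                                # monodisperse initial condition
for _ in range(100):
    n = smoluchowski_step(n, K=1.0, dt=0.01)
print(n[:5], n.sum())                     # total number density decays
```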
Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M
2018-01-01
Grain yield and ear and kernel attributes can help in understanding the performance of the maize plant under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters, however, are still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode on ImageJ, an open-source software package. Kernel weight was estimated using the total kernel number, derived from the number of kernels visible on the image, and the average kernel size. The data showed good agreement in terms of accuracy and precision between ground-truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that of measured grain weight. Limitations of the method for kernel weight estimation are discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic bases of grain yield determinants.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance, and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
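The two discretization methods compared above can be illustrated in one dimension: evaluating a Gaussian kernel at cell centers versus integrating it exactly over each cell with the error function. Small σ shows the divergence reported in the abstract.

```python
# Sketch: cell-center vs cell-integrated discretization of a 1-D Gaussian
# dispersal kernel on a unit grid.
import numpy as np
from math import erf, sqrt, pi, exp

def cell_center(i, sigma, dx=1.0):
    # density evaluated at the cell center, times cell width
    x = i * dx
    return dx * exp(-0.5 * (x / sigma) ** 2) / (sigma * sqrt(2 * pi))

def cell_integrated(i, sigma, dx=1.0):
    # exact integral of the density over the cell [i-1/2, i+1/2]
    a, b = (i - 0.5) * dx, (i + 0.5) * dx
    Phi = lambda t: 0.5 * (1 + erf(t / (sigma * sqrt(2))))
    return Phi(b) - Phi(a)

for sigma in (0.12, 0.5, 2.0):            # small kernels show the discrepancy
    cc = sum(cell_center(i, sigma) for i in range(-10, 11))
    ci = sum(cell_integrated(i, sigma) for i in range(-10, 11))
    print(f"sigma={sigma}: center sum={cc:.4f}, integrated sum={ci:.4f}")
```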
USDA-ARS?s Scientific Manuscript database
Wheat kernel shape and size has been under selection since early domestication. Kernel morphology is a major consideration in wheat breeding, as it impacts grain yield and quality. A population of 160 recombinant inbred lines (RIL), developed using an elite (ND 705) and a nonadapted genotype (PI 414...
Increasing the Size of a Piece of Popcorn
NASA Astrophysics Data System (ADS)
Quinn, Paul; Hong, Daniel C.; Both, Joseph
2003-03-01
Popcorn is an extremely popular snack food in the world today. Thermodynamics can be used to analyze how popcorn is produced. By treating the popping mechanism of the corn as a thermodynamic expansion, a method of increasing the volume or size of a kernel of popcorn can be studied. By lowering the pressure surrounding the unpopped kernel, one can use a thermodynamic argument to show that the expanded volume of the kernel when it pops must increase. In this project, a variety of experiments are run to test the validity of this theory. The results show that there is a significant increase in the average kernel size when the pressure of the surroundings is reduced.
Increasing the size of a piece of popcorn
NASA Astrophysics Data System (ADS)
Quinn, Paul V.; Hong, Daniel C.; Both, J. A.
2005-08-01
Popcorn is an extremely popular snack food in the world today. Thermodynamics can be used to analyze how popcorn is produced. By treating the popping mechanism of the corn as a thermodynamic expansion, a method of increasing the volume or size of a kernel of popcorn can be studied. By lowering the pressure surrounding the unpopped kernel, one can use a thermodynamic argument to show that the expanded volume of the kernel when it pops must increase. In this project, a variety of experiments are run to test the qualitative validity of this theory. The results show that there is a significant increase in the average kernel size when the pressure of the surroundings is reduced.
Anato, F M; Sinzogan, A A C; Offenberg, J; Adandonon, A; Wargui, R B; Deguenon, J M; Ayelo, P M; Vayssières, J-F; Kossou, D K
2017-06-01
Weaver ants, Oecophylla spp., are known to positively affect cashew, Anacardium occidentale L., raw nut yield, but their effects on the kernels have not been reported. We compared nut size and the proportion of marketable kernels between raw nuts collected from trees with and without ants. Raw nuts collected from trees with weaver ants were 2.9% larger than nuts from control trees (i.e., without weaver ants), leading to a 14% higher proportion of marketable kernels. On trees with ants, the kernel:raw nut ratio of nuts damaged by formic acid was 4.8% lower compared with nondamaged nuts from the same trees. Weaver ants provided three benefits to cashew production: increasing yields, yielding larger nuts, and producing greater proportions of marketable kernel mass. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats
USDA-ARS?s Scientific Manuscript database
Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...
Weighted Feature Gaussian Kernel SVM for Emotion Recognition
Jia, Qingxuan
2016-01-01
Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good correct-classification rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
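A sketch of the kernel construction this abstract describes, under the assumption that the feature vector is a concatenation of equal-length subregion blocks; the weights stand in for the subregion recognition rates:

import numpy as np
from sklearn.svm import SVC

def make_weighted_gaussian_kernel(weights, block, sigma=1.0):
    """Gram-matrix callable for sklearn's SVC.

    weights : one weight per facial subregion (e.g. its recognition rate)
    block   : number of features per subregion; equal-length concatenated
              blocks are an assumption of this sketch, not necessarily the
              paper's feature layout
    """
    w = np.repeat(np.asarray(weights, float), block)
    def kernel(X, Y):
        # weighted squared distance: sum_i w_i (x_i - y_i)^2
        Xw, Yw = X * np.sqrt(w), Y * np.sqrt(w)
        d2 = ((Xw**2).sum(1)[:, None] + (Yw**2).sum(1)[None, :]
              - 2.0 * Xw @ Yw.T)
        return np.exp(-d2 / (2.0 * sigma**2))
    return kernel

rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 12)), rng.integers(0, 2, 60)  # toy data
clf = SVC(kernel=make_weighted_gaussian_kernel([0.9, 0.6, 0.8], block=4))
clf.fit(X, y)
print(clf.score(X, y))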
Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.
Tanaka, W; Mantese, A I; Maddonni, G A
2009-08-01
Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentration were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures, nor pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also affected (P < 0.01) allocation of embryo chemical compounds. Negative correlations among embryo oil concentration and those of starch (r = 0.98, P < 0.01) and soluble sugars (r = 0.95, P < 0.05) were found. Coincidentally, embryos with low oil concentration had an increased (P < 0.05-0.10) scutellum cell area occupied by starch granules and fewer oil bodies. The effects of pollen source on both embryo/kernel ratio and allocation of embryo chemicals seem to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.
Security Tagged Architecture Co-Design (STACD)
2015-09-01
components have access to all other system components whether they need it or not. Microkernels [8, 9, 10] seek to reduce the kernel size to improve... does not provide the fine-grained control to allow for formal verification. Microkernels reduce the size of the kernel enough to allow for a formal... verification of the kernel. Tanenbaum [14] documents many of the security virtues of microkernels and argues that... [remainder lost to a protection-ring diagram: Ring 1, Ring 2, Ring 3]
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
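A sketch of a spheroidal kernel of this kind, using a Gaussian profile with independent axial and perpendicular smoothing lengths (production SPH codes typically use compactly supported spline kernels instead):

import numpy as np

def spheroidal_gaussian_kernel(dx, dy, dz, h_perp, h_axis):
    """Gaussian-type SPH kernel with independent smoothing lengths.

    h_axis is the smoothing length along the deformation (z) axis and
    h_perp the length in the perpendicular plane, giving the kernel a
    spheroidal shape; this is a Gaussian stand-in for the paper's kernel.
    """
    norm = np.pi**1.5 * h_axis * h_perp**2   # 3-D Gaussian normalization
    q2 = (dx**2 + dy**2) / h_perp**2 + dz**2 / h_axis**2
    return np.exp(-q2) / norm

# Contracting h_axis sharpens resolution along the collapse direction
print(spheroidal_gaussian_kernel(0.1, 0.0, 0.1, h_perp=1.0, h_axis=1.0))
print(spheroidal_gaussian_kernel(0.1, 0.0, 0.1, h_perp=1.0, h_axis=0.2))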
Effects of milling on functional properties of rice flour.
Kadan, R S; Bryant, R J; Miller, J A
2008-05-01
A commercial long-grain rice flour (CRF) and the flours made by using a pin mill and the Udy mill from the same batch of broken second-head white long-grain rice were evaluated for their particle size and functional properties. The purpose of this study was to compare the commercial rice flour milling method with the pin and Udy milling methods used in our laboratory and pilot plant. The results showed that pin-milled flour had a more uniform particle size than the other two milled flours. The chalky kernels found in broken white milled rice were pulverized more into fines in both the Udy-milled flour and the CRF than in the pin-milled flour. The excessive amount of fines in flours affected their functional properties, for example, the water solubility index (WSI), and their potential use in novel foods such as rice bread (RB). The RB made from CRF collapsed more than loaves made from pin-milled Cypress long-grain flours.
Relationship of source and sink in determining kernel composition of maize
Seebauer, Juliann R.; Singletary, George W.; Krumpelman, Paulette M.; Ruffo, Matías L.; Below, Frederick E.
2010-01-01
The relative role of the maternal source and the filial sink in controlling the composition of maize (Zea mays L.) kernels is unclear and may be influenced by the genotype and the N supply. The objective of this study was to determine the influence of assimilate supply from the vegetative source and utilization of assimilates by the grain sink on the final composition of maize kernels. Intermated B73×Mo17 recombinant inbred lines (IBM RILs) which displayed contrasting concentrations of endosperm starch were grown in the field with deficient or sufficient N, and the source supply altered by ear truncation (45% reduction) at 15 d after pollination (DAP). The assimilate supply into the kernels was determined at 19 DAP using the agar trap technique, and the final kernel composition was measured. The influence of N supply and kernel ear position on final kernel composition was also determined for a commercial hybrid. Concentrations of kernel protein and starch could be altered by genotype or the N supply, but remained fairly constant along the length of the ear. Ear truncation also produced a range of variation in endosperm starch and protein concentrations. The C/N ratio of the assimilate supply at 19 DAP was directly related to the final kernel composition, with an inverse relationship between the concentrations of starch and protein in the mature endosperm. The accumulation of kernel starch and protein in maize is uniform along the ear, yet adaptable within genotypic limits, suggesting that kernel composition is source limited in maize. PMID:19917600
Steckel, S; Stewart, S D
2015-06-01
Ear-feeding larvae, such as corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced, stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stage. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggested that kernels within the same ear could compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The effect of relatedness and pack size on territory overlap in African wild dogs.
Jackson, Craig R; Groom, Rosemary J; Jordan, Neil R; McNutt, J Weldon
2017-01-01
Spacing patterns mediate competitive interactions between conspecifics, ultimately increasing fitness. The degree of territorial overlap between neighbouring African wild dog (Lycaon pictus) packs varies greatly, yet the role of factors potentially affecting the degree of overlap, such as relatedness and pack size, remains unclear. We used movement data from 21 wild dog packs to calculate the extent of territory overlap (20 dyads). On average, unrelated neighbouring packs had low levels of overlap restricted to the peripheral regions of their 95% utilisation kernels. Related neighbours had significantly greater levels of peripheral overlap. Only one unrelated dyad included overlap between 75%-75% kernels, but no 50%-50% kernels overlapped. However, eight of 12 related dyads overlapped between their respective 75% kernels and six between the frequented 50% kernels. Overlap between these more frequented kernels confers a heightened likelihood of encounter, as the mean utilisation intensity per unit area within the 50% kernels was 4.93 times greater than in the 95% kernels, and 2.34 times greater than in the 75% kernels. Related packs spent significantly more time in their 95% kernel overlap zones than did unrelated packs. Pack size appeared to have little effect on overlap between related dyads, yet among unrelated neighbours larger packs tended to overlap more onto smaller packs' territories. However, the true effect is unclear given that the model's confidence intervals overlapped zero. Evidence suggests that costly intraspecific aggression is greatly reduced between related packs. Consequently, the tendency for dispersing individuals to establish territories alongside relatives, where intensively utilised portions of ranges regularly overlap, may extend kin selection and inclusive fitness benefits from the intra-pack to inter-pack level. This natural spacing system can affect survival parameters and the carrying capacity of protected areas, having important management implications for intensively managed populations of this endangered species.
Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2016-09-01
An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization-enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method combining a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations.
Liu, Zhengbin; Garcia, Arturo; McMullen, Michael D; Flint-Garcia, Sherry A
2016-08-09
Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits. Copyright © 2016 Liu et al.
Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations
Liu, Zhengbin; Garcia, Arturo; McMullen, Michael D.; Flint-Garcia, Sherry A.
2016-01-01
Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits. PMID:27317774
Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.
2013-01-01
Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
Increasing the Size of Microwave Popcorn
NASA Astrophysics Data System (ADS)
Smoyer, Justin
2005-03-01
Each year Americans consume approximately 17 billion quarts of popcorn. Since the 1940s, microwaves have been the heating source of choice for most. By treating the popping mechanism as a thermodynamic system, it has been shown mathematically and experimentally that reducing the pressure surrounding the unpopped kernels results in an increased volume of the popped kernels [Quinn et al., http://xxx.lanl.gov/abs/cond-mat/0409434 v1 2004]. In this project an alternate method of popping with the microwave was used to further test and confirm this hypothesis. Numerous experimental trials were run to test the validity of the theory. The results show that there is a significant increase in the average kernel size as well as a reduction in the number of unpopped kernels.
Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair
2018-02-01
Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres including contributions of both short- and long-lived contaminant radionuclides while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time postremoval from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions. It is on the order of 0.4-2.8% at a radial distance of 5.43 mm as the microsphere diameter increases from 10 to 50 μm during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the microspheres based on weighted activities. The shapes of the absorbed dose kernels are dominated at short times postactivation by the contributions of 70Ga and 72Ga. Following decay of the short-lived contaminants, the absorbed dose kernel is effectively that of 90Y. After approximately 1000 h postactivation, the contributions of 85Sr and 89Sr become increasingly dominant, though the absorbed dose rate around the beads drops by roughly four orders of magnitude. The introduction of high atomic number elements for the purpose of increasing radiopacity necessarily leads to the production of radionuclides other than 90Y in the microspheres. Most of the radionuclides in this study are short-lived and are likely not of any significant concern for this therapeutic agent. The presence of small quantities of longer lived radionuclides will change the shape of the absorbed dose kernel around a microsphere at long time points postadministration when activity levels are significantly reduced. © 2017 American Association of Physicists in Medicine.
Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters
Torres-Huitzil, Cesar
2013-01-01
Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
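For reference, a software sketch of the van Herk/Gil-Werman recurrence the architecture implements; the constant number of comparisons per sample, independent of k, is what makes the hardware mapping efficient:

import numpy as np

def running_max_hgw(a, k):
    """van Herk/Gil-Werman running maximum over windows of length k.

    Two passes of within-block prefix/suffix maxima give a cost per sample
    that is constant instead of growing with k; each window maximum is the
    max of one suffix value and one prefix value.
    """
    a = np.asarray(a)
    n = len(a)
    pad = (-n) % k
    ap = np.concatenate([a, np.full(pad, a.min())])  # pad to a block multiple
    blocks = ap.reshape(-1, k)
    pre = np.maximum.accumulate(blocks, axis=1).ravel()
    suf = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()
    return np.maximum(suf[:n - k + 1], pre[k - 1:n])

a = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
print(running_max_hgw(a, 3))  # -> [4 4 5 9 9 9 6 6]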
Uniform Decay for Solutions of an Axially Moving Viscoelastic Beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelleche, Abdelkarim, E-mail: kellecheabdelkarim@gmail.com; Tatar, Nasser-eddine, E-mail: tatarn@Kfupm.edu.sa
2017-06-15
The paper deals with an axially moving viscoelastic structure modeled as an Euler–Bernoulli beam. The aim is to suppress the transversal displacement (transversal vibrations) that occur during the axial motion of the beam. It is assumed that the beam is moving with a constant axial speed and it is subject to a nonlinear force at the right boundary. We prove that when the axial speed of the beam is smaller than a critical value, the dissipation produced by the viscoelastic material is sufficient to suppress the transversal vibrations. It is shown that the rate of decay of the energy depends on the kernel which arises in the viscoelastic term. We consider a general kernel and notice that solutions cannot decay faster than the kernel.
Winter home-range characteristics of American Marten (Martes americana) in Northern Wisconsin
Joseph B. Dumyahn; Patrick A. Zollner
2007-01-01
We estimated home-range size for American marten (Martes americana) in northern Wisconsin during the winter months of 2001-2004, and compared the proportion of cover-type selection categories (highly used, neutral, and avoided) among home ranges (95% fixed-kernel), core areas (50% fixed-kernel), and the study area. Average winter home-range size was 3....
Travel-time sensitivity kernels in long-range propagation.
Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A
2009-11-01
Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.
1980-12-01
Communications Corporation, Palo Alto, CA (March 1978). [Walter et al. 74] Walter, K.G. et al., "Primitive Models for Computer Security", ESD-TR-... discussion is followed by a presentation of the Kernel primitive operations upon these objects. All Kernel objects shall be referenced by a common... set of sizes. All process segments, regardless of domain, shall be manipulated by the same set of Kernel segment primitives. User domain segments
CompareSVM: supervised, Support Vector Machine (SVM) inference of gene regularity networks.
Gillani, Zeeshan; Akash, Muhammad Sajid Hamid; Rahaman, M D Matiur; Chen, Ming
2014-11-30
Prediction of gene regulatory networks (GRN) from expression data is a challenging task. Many methods have been developed to address this challenge, ranging from supervised to unsupervised approaches. Most of the promising methods are based on the support vector machine (SVM). There is a need for a comprehensive analysis of the prediction accuracy of the supervised SVM method using different kernels under different biological experimental conditions and network sizes. We developed a tool (CompareSVM) based on SVM to compare different kernel methods for the inference of GRN. Using CompareSVM, we investigated and evaluated different SVM kernel methods on simulated microarray datasets of different sizes in detail. The results obtained from CompareSVM showed that the accuracy of an inference method depends upon the nature of the experimental condition and the size of the network. For small networks (<200 nodes) and on average (over all network sizes), the SVM Gaussian kernel outperformed all the other inference methods on knockout, knockdown, and multifactorial datasets. For networks with a large number of nodes (~500), the choice of inference method depends upon the nature of the experimental condition. CompareSVM is available at http://bis.zju.edu.cn/CompareSVM/ .
Multineuron spike train analysis with R-convolution linear combination kernel.
Tezuka, Taro
2018-06-01
A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
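A minimal instance of the linear combination subclass described above, with nonnegative weights and a simple pairwise-Gaussian spike train kernel standing in for the single-neuron base kernel (the paper's framework admits any single-neuron kernel):

import numpy as np

def single_neuron_kernel(s, t, tau=0.05):
    """A simple PSD spike train kernel: pairwise Gaussian over spike times."""
    if len(s) == 0 or len(t) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s), np.asarray(t))
    return np.exp(-d**2 / (2 * tau**2)).sum()

def linear_combination_kernel(X, Y, weights):
    """Multineuron kernel as a weighted sum of per-neuron kernels.

    X, Y    : lists of spike-time arrays, one per neuron
    weights : combination weights, kept nonnegative so the sum stays
              positive semidefinite
    """
    return sum(w * single_neuron_kernel(x, y)
               for w, x, y in zip(weights, X, Y))

trainA = [np.array([0.01, 0.12, 0.30]), np.array([0.05, 0.22])]
trainB = [np.array([0.02, 0.11]), np.array([0.20, 0.25, 0.40])]
print(linear_combination_kernel(trainA, trainB, weights=[1.0, 0.5]))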
Considering causal genes in the genetic dissection of kernel traits in common wheat.
Mohler, Volker; Albrecht, Theresa; Castell, Adelheid; Diethelm, Manuela; Schweizer, Günther; Hartl, Lorenz
2016-11-01
Genetic factors controlling thousand-kernel weight (TKW) were characterized for their association with other seed traits, including kernel width, kernel length, ratio of kernel width to kernel length (KW/KL), kernel area, and spike number per m² (SN). For this purpose, a genetic map was established utilizing a doubled haploid population derived from a cross between German winter wheat cultivars Pamier and Format. Association studies in a diversity panel of elite cultivars supplemented genetic analysis of kernel traits. In both populations, genomic signatures of 13 candidate genes for TKW and kernel size were analyzed. Major quantitative trait loci (QTL) for TKW were identified on chromosomes 1B, 2A, 2D, and 4D, and their locations coincided with major QTL for kernel size traits, supporting the common belief that TKW is a function of other kernel traits. The QTL on chromosome 2A was associated with TKW candidate gene TaCwi-A1 and the QTL on chromosome 4D was associated with dwarfing gene Rht-D1. A minor QTL for TKW on chromosome 6B coincided with TaGW2-6B. The QTL for kernel dimensions that did not affect TKW were detected on eight chromosomes. A major QTL for KW/KL located at the distal tip of chromosome arm 5AS is being reported for the first time. TaSus1-7A and TaSAP-A1, closely linked to each other on chromosome 7A, could be related to a minor QTL for KW/KL. Genetic analysis of SN confirmed its negative correlation with TKW in this cross. In the diversity panel, TaSus1-7A was associated with TKW. Compared to the Pamier/Format bi-parental population where TaCwi-A1a was associated with higher TKW, the same allele reduced grain yield in the diversity panel, suggesting opposite effects of TaCwi-A1 on these two traits.
Many Molecular Properties from One Kernel in Chemical Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
We introduce property-independent kernels for machine learning modeling of arbitrarily many molecular properties. The kernels encode molecular structures for training sets of varying size, as well as similarity measures sufficiently diffuse in chemical space to sample over all training molecules. Given corresponding molecular reference properties, they enable the instantaneous generation of ML models which can systematically be improved through the addition of more data. This idea is exemplified for single-kernel-based modeling of internal energy, enthalpy, free energy, heat capacity, polarizability, electronic spread, zero-point vibrational energy, energies of frontier orbitals, HOMO-LUMO gap, and the highest fundamental vibrational wavenumber. Models of these properties are trained and tested using 112,000 organic molecules of similar size. The resulting models are discussed, as well as the kernels' use for generating and using other property models.
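A sketch of the "one kernel, many properties" idea: a single Gram matrix over the training set is factorized once and reused for an independent kernel ridge regression per property. The descriptors, targets, and hyperparameters below are random stand-ins, not the paper's representation:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))   # stand-in molecular descriptors
Y = rng.normal(size=(200, 5))    # stand-in reference values, 5 properties
sigma, lam = 5.0, 1e-6

sq = (X**2).sum(1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
L = np.linalg.cholesky(K + lam * np.eye(len(X)))   # factorize once

# One pair of cheap triangular solves per property; the kernel itself
# is property-independent
alphas = [np.linalg.solve(L.T, np.linalg.solve(L, Y[:, j]))
          for j in range(Y.shape[1])]
print(np.shape(alphas))   # (5, 200): one weight vector per property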
NASA Technical Reports Server (NTRS)
Erdol, R.; Erdogan, F.
1976-01-01
The elastostatic axisymmetric problem for a long thick-walled cylinder containing a ring-shaped internal or edge crack is considered. Using the standard transform technique, the problem is formulated in terms of an integral equation whose dominant part has a simple Cauchy kernel for the internal crack and a generalized Cauchy kernel for the edge crack. As examples, the uniform axial load and the steady-state thermal stress problems have been solved and the related stress intensity factors have been calculated. Among other findings, the results show that in a cylinder under uniform axial stress containing an internal crack, the stress intensity factor at the inner tip is always greater than that at the outer tip for equal net ligament thicknesses, and that in a cylinder with an edge crack under a state of thermal stress, the stress intensity factor is a decreasing function of the crack depth, tending to zero as the crack depth approaches the wall thickness.
Gradient-based adaptation of general Gaussian kernels.
Glasmachers, Tobias; Igel, Christian
2005-10-01
Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
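A sketch of the parameterization described above; the exponential map guarantees a symmetric positive-definite metric for any symmetric parameter matrix, and fixing its trace fixes the determinant of the metric, which is one way to hold the overall kernel size constant:

import numpy as np
from scipy.linalg import expm

def general_gaussian_kernel(A):
    """k(x, y) = exp(-(x - y)^T M (x - y)) with M = expm(A), A symmetric.

    Parameterizing M through the matrix exponential keeps it symmetric
    positive definite for any symmetric A, so gradient steps on A can
    adapt scaling and rotation freely; since det(expm(A)) = exp(trace(A)),
    restricting A to a constant-trace subspace fixes det(M).
    """
    M = expm(A)
    def k(x, y):
        d = np.asarray(x) - np.asarray(y)
        return float(np.exp(-d @ M @ d))
    return k

A = np.array([[0.2, 0.1],
              [0.1, -0.2]])   # symmetric, trace 0 -> det(M) = 1
k = general_gaussian_kernel(A)
print(k([0.0, 0.0], [1.0, 0.5]))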
The nonuniformity of antibody distribution in the kidney and its influence on dosimetry.
Flynn, Aiden A; Pedley, R Barbara; Green, Alan J; Dearling, Jason L; El-Emir, Ethaar; Boxer, Geoffrey M; Boden, Robert; Begent, Richard H J
2003-02-01
The therapeutic efficacy of radiolabeled antibody fragments can be limited by nephrotoxicity, particularly when the kidney is the major route of extraction from the circulation. Conventional dose estimates in kidney assume uniform dose deposition, but we have shown increased antibody localization in the cortex after glomerular filtration. The purpose of this study was to measure the radioactivity in cortex relative to medulla for a range of antibodies and to assess the validity of the assumption of uniformity of dose deposition in the whole kidney and in the cortex for these antibodies with a range of radionuclides. Storage phosphor plate technology (radioluminography) was used to acquire images of the distributions of a range of antibodies of various sizes, labeled with 125I, in kidney sections. This allowed the calculation of the antibody concentration in the cortex relative to the medulla. Beta-particle point dose kernels were then used to generate the dose-rate distributions from 14C, 131I, 186Re, 32P and 90Y. The correlation between the actual dose-rate distribution and the corresponding distribution calculated assuming uniform antibody distribution throughout the kidney was used to test the validity of estimating dose by assuming uniformity in the kidney and in the cortex. There was a strong inverse relationship between the ratio of the radioactivity in the cortex relative to that in the medulla and the antibody size. The nonuniformity of dose deposition was greatest with the smallest antibody fragments but became more uniform as the range of the emissions from the radionuclide increased. Furthermore, there was a strong correlation between the actual dose-rate distribution and the distribution when assuming a uniform source in the kidney for intact antibodies along with medium- to long-range radionuclides, but there was no correlation for small antibody fragments with any radioisotope or for short-range radionuclides with any antibody. However, when the cortex was separated from the whole kidney, the correlation between the actual dose-rate distribution and the assumed dose-rate distribution, if the source was uniform, increased significantly. During radioimmunotherapy, the extent of nonuniformity of dose deposition in the kidney depends on the properties of the antibody and radionuclide. For dosimetry estimates, the cortex should be taken as a separate source region when the radiopharmaceutical is small enough to be filtered by the glomerulus.
NASA Astrophysics Data System (ADS)
Donlon, Kevan; Ninkov, Zoran; Baum, Stefi
2016-08-01
Interpixel capacitance (IPC) is a deterministic electronic coupling by which signal generated in one pixel is measured in neighboring pixels. Examination of dark frames from test NIRcam arrays corroborates earlier results and simulations illustrating a signal-dependent coupling. When the signal on an individual pixel is larger, the fractional coupling to nearest neighbors is smaller than when the signal is lower. Frames from test arrays indicate a drop in average coupling from approximately 1.0% at low signals down to approximately 0.65% at high signals, depending on the particular array in question. The photometric ramifications of this non-uniformity are not fully understood. This non-uniformity introduces a non-linearity into the current mathematical model for IPC coupling. IPC coupling has been mathematically formalized as convolution by a blur kernel. Signal dependence requires that the blur kernel be locally defined as a function of signal intensity. Through application of a signal-dependent coupling kernel, the IPC coupling can be modeled computationally. This method allows for simultaneous knowledge of the intrinsic parameters of the image scene, the result of applying a constant IPC, and the result of a signal-dependent IPC. In the age of sub-pixel precision in astronomy, these effects must be properly understood and accounted for in order for the data to accurately represent the object of observation. Implementation of this method is done through Python-scripted processing of images. The introduction of IPC into simulated frames is accomplished through convolution of the image with a blur kernel whose parameters are themselves locally defined functions of the image. These techniques can be used to enhance the data processing pipeline for NIRcam.
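A sketch of convolution with a locally signal-dependent IPC kernel, interpolating linearly between the ~1.0% low-signal and ~0.65% high-signal couplings quoted above; the linear form and the periodic edge handling of np.roll are simplifications of this sketch, not the paper's pipeline:

import numpy as np

def apply_signal_dependent_ipc(img, a_low=0.010, a_high=0.0065, s_max=None):
    """Apply a signal-dependent IPC coupling to an image.

    Each pixel sends a fraction alpha(signal) of its signal to each of its
    four nearest neighbors, with alpha interpolated linearly from a_low at
    zero signal to a_high at full well (an assumed functional form).
    """
    img = img.astype(float)
    s_max = img.max() if s_max is None else s_max
    alpha = a_low + (a_high - a_low) * np.clip(img / s_max, 0, 1)
    leaked = alpha * img
    out = img - 4 * leaked            # signal lost to the four neighbors
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        out += np.roll(leaked, shift, axis=axis)   # wraps at edges
    return out

frame = np.zeros((5, 5)); frame[2, 2] = 1e4   # a bright isolated pixel
print(apply_signal_dependent_ipc(frame)[1:4, 1:4].round(1))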
Quantitative comparison of noise texture across CT scanners from different manufacturers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solomon, Justin B.; Christianson, Olav; Samei, Ehsan
2012-10-15
Purpose: To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). Methods: The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI), and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625/0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared and kernel pairs that minimized the RMSD and |PFD| were identified. Results: The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm² (0.002 mm⁻¹) to 0.29 mm² (0.74 mm⁻¹). The GE kernels "Soft," "Standard," "Chest," and "Lung" closely matched the Siemens kernels "B35f," "B43f," "B41f," and "B80f" (RMSD < 0.05 mm², |PFD| < 0.02 mm⁻¹, respectively). The GE "Bone," "Bone+," and "Edge" kernels all matched most closely with the Siemens "B75f" kernel but with sizeable RMSD and |PFD| values up to 0.18 mm² and 0.41 mm⁻¹, respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. Conclusions: It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners could be achieved varies and is limited by the kernels available on each scanner.
Quantitative comparison of noise texture across CT scanners from different manufacturers.
Solomon, Justin B; Christianson, Olav; Samei, Ehsan
2012-10-01
To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI), and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625/0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared and kernel pairs that minimized the RMSD and |PFD| were identified. The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm² (0.002 mm⁻¹) to 0.29 mm² (0.74 mm⁻¹). The GE kernels "Soft," "Standard," "Chest," and "Lung" closely matched the Siemens kernels "B35f," "B43f," "B41f," and "B80f" (RMSD < 0.05 mm², |PFD| < 0.02 mm⁻¹, respectively). The GE "Bone," "Bone+," and "Edge" kernels all matched most closely with the Siemens "B75f" kernel but with sizeable RMSD and |PFD| values up to 0.18 mm² and 0.41 mm⁻¹, respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners could be achieved varies and is limited by the kernels available on each scanner.
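A sketch of the two comparison metrics on normalized one-dimensional NPS curves; the two toy spectra below merely mimic a smooth and a sharp reconstruction kernel and are not measured data:

import numpy as np

def nps_rmsd_pfd(f, nps_a, nps_b):
    """RMSD and peak-frequency difference between two 1-D NPS curves.

    Each NPS is normalized first (uniform frequency spacing assumed) so
    that only texture (shape), not noise magnitude, is compared, mirroring
    the kernel-matching procedure described above.
    """
    a = nps_a / nps_a.sum()
    b = nps_b / nps_b.sum()
    rmsd = np.sqrt(np.mean((a - b) ** 2))
    pfd = f[np.argmax(a)] - f[np.argmax(b)]
    return rmsd, pfd

f = np.linspace(0, 1.2, 121)              # spatial frequency, mm^-1
soft = f * np.exp(-(f / 0.25) ** 2)       # toy low-pass (smooth-kernel) NPS
sharp = f * np.exp(-(f / 0.55) ** 2)      # toy sharper-kernel NPS
print(nps_rmsd_pfd(f, soft, sharp))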
Adaptive kernel regression for freehand 3D ultrasound reconstruction
NASA Astrophysics Data System (ADS)
Alshalalfah, Abdel-Latif; Daoud, Mohammad I.; Al-Najar, Mahasen
2017-03-01
Freehand three-dimensional (3D) ultrasound imaging enables low-cost and flexible 3D scanning of arbitrary-shaped organs, where the operator can freely move a two-dimensional (2D) ultrasound probe to acquire a sequence of tracked cross-sectional images of the anatomy. Often, the acquired 2D ultrasound images are irregularly and sparsely distributed in the 3D space. Several 3D reconstruction algorithms have been proposed to synthesize 3D ultrasound volumes based on the acquired 2D images. A challenging task during the reconstruction process is to preserve the texture patterns in the synthesized volume and ensure that all gaps in the volume are correctly filled. This paper presents an adaptive kernel regression algorithm that can effectively reconstruct high-quality freehand 3D ultrasound volumes. The algorithm employs a kernel regression model that enables nonparametric interpolation of the voxel gray-level values. The kernel size of the regression model is adaptively adjusted based on the characteristics of the voxel that is being interpolated. In particular, when the algorithm is employed to interpolate a voxel located in a region with dense ultrasound data samples, the size of the kernel is reduced to preserve the texture patterns. On the other hand, the size of the kernel is increased in areas that include large gaps to enable effective gap filling. The performance of the proposed algorithm was compared with seven previous interpolation approaches by synthesizing freehand 3D ultrasound volumes of a benign breast tumor. The experimental results show that the proposed algorithm outperforms the other interpolation approaches.
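A minimal sketch of the adaptive idea: a Nadaraya-Watson interpolator whose Gaussian kernel is enlarged only where samples are sparse. The doubling rule and thresholds are illustrative assumptions, not the paper's exact adaptation criterion:

import numpy as np

def adaptive_kernel_interpolate(points, values, query, h0=1.0, min_pts=8):
    """Nadaraya-Watson interpolation with a locally adapted Gaussian kernel.

    The bandwidth starts at h0 and is doubled until at least min_pts samples
    lie within 2h of the query voxel, so dense regions keep a small kernel
    (preserving texture) while gaps get a large one (ensuring filling).
    """
    d = np.linalg.norm(points - query, axis=1)
    h = h0
    while (d < 2 * h).sum() < min_pts and h < 64 * h0:
        h *= 2.0
    w = np.exp(-(d / h) ** 2)
    return (w @ values) / w.sum()

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(500, 3))     # scattered 3D sample positions
vals = np.sin(pts[:, 0]) + 0.1 * rng.normal(size=500)
print(adaptive_kernel_interpolate(pts, vals, np.array([5.0, 5.0, 5.0])))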
Bhattacharya, Abhishek; Dunson, David B.
2012-01-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
Kernel Tuning and Nonuniform Influence on Optical and Electrochemical Gaps of Bimetal Nanoclusters.
He, Lizhong; Yuan, Jinyun; Xia, Nan; Liao, Lingwen; Liu, Xu; Gan, Zibao; Wang, Chengming; Yang, Jinlong; Wu, Zhikun
2018-03-14
Fine tuning nanoparticles with atomic precision is exciting and challenging and is critical for tuning their properties, understanding the structure-property correlation and determining the practical applications of nanoparticles. Some ultrasmall thiolated metal nanoparticles (metal nanoclusters) have been shown to be precisely doped, and even the protecting staple metal atom could be precisely reduced. However, the precise addition or reduction of kernel atoms while the other metal atoms in the nanocluster remain the same had not been achieved until now, to the best of our knowledge. Here, by carefully selecting a protecting ligand with adequate steric hindrance, we synthesized a novel nanocluster in which the kernel can be regarded as that formed by the addition of two silver atoms to both ends of the Pt@Ag12 icosahedral kernel of the Ag24Pt(SR)18 (SR: thiolate) nanocluster, as revealed by single crystal X-ray crystallography. Interestingly, compared with the previously reported Ag24Pt(SR)18 nanocluster, the as-obtained novel bimetal nanocluster exhibits a similar absorption but a different electrochemical gap. One possible explanation for this result is that the kernel tuning does not essentially change the electronic structure, but obviously influences the charge on the Pt@Ag12 kernel, as demonstrated by natural population analysis, thus possibly resulting in the large electrochemical gap difference between the two nanoclusters. This work not only provides a novel strategy to tune metal nanoclusters but also reveals that kernel changes do not necessarily alter the optical and electrochemical gaps in a uniform manner, which has important implications for the structure-property correlation of nanoparticles.
Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization
NASA Astrophysics Data System (ADS)
Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin
2017-02-01
To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured by using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b., and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both the dielectric constant and loss factor of almond kernels increased with increasing temperature and moisture content, with larger increases at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R² greater than 0.967. Penetration depth of the electromagnetic wave into samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). The temperature profiles of RF-heated almond kernels at three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential to be used practically for pasteurization of almond kernels with acceptable heating uniformity.
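The penetration depth trend reported above follows from the standard dielectric-heating formula; a small sketch with illustrative (not measured) permittivity values shows the frequency dependence:

import numpy as np

C = 299792458.0  # speed of light, m/s

def penetration_depth(freq_hz, eps_prime, eps_loss):
    """Standard power penetration depth for dielectric heating (m):
    dp = c / (2*pi*f * sqrt(2*eps' * (sqrt(1 + (eps''/eps')^2) - 1)))
    """
    root = np.sqrt(1.0 + (eps_loss / eps_prime) ** 2) - 1.0
    return C / (2 * np.pi * freq_hz * np.sqrt(2 * eps_prime * root))

# Illustrative permittivities only: the RF frequency (27 MHz) penetrates
# far deeper than the MW frequency (915 MHz), matching the reported trend
for f in (27e6, 915e6):
    print(f / 1e6, "MHz:", round(penetration_depth(f, 5.0, 1.0), 3), "m")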
Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin
2017-02-10
To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured by using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% to 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both dielectric constant and loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both dielectric constant and loss factor of almond kernels increased with increasing temperature and moisture content, and largely enhanced at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content with R 2 greater than 0.967. Penetration depth of electromagnetic wave into samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). The temperature profiles of RF heated almond kernels under three moisture levels were made using experiment and computer simulation based on measured dielectric properties. Based on the result of this study, RF treatment has potential to be practically used for pasteurization of almond kernels with acceptable heating uniformity.
Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.
Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit
2018-02-13
Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
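A toy sketch of one day of the source/sink-limited allocation with remobilization described above; the pool sizes, demands, and remobilization fraction are invented for illustration and do not reproduce the GREENLAB-Maize-Kernel formulation:

import numpy as np

def fill_kernels_one_day(supply, demand, reserves, remob_frac=0.2):
    """One day of source/sink-limited kernel filling.

    demand   : potential growth of each kernel today (its sink demand), g
    supply   : assimilate currently available from the source organs, g
    reserves : remobilizable non-structural carbohydrate pool, g
    If supply falls short of total demand, part of the reserve pool is
    remobilized before the remaining shortfall is shared proportionally.
    """
    total = demand.sum()
    if supply < total:
        remob = min(remob_frac * reserves, total - supply)
        reserves -= remob
        supply += remob
    growth = demand * min(1.0, supply / total)
    return growth, reserves

demand = np.array([0.012, 0.010, 0.006])   # three kernels, apical one smaller
growth, reserves = fill_kernels_one_day(0.02, demand, reserves=0.05)
print(growth.round(4), round(reserves, 4))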
Modeling end-use quality in U. S. soft wheat germplasm
USDA-ARS?s Scientific Manuscript database
End-use quality in soft wheat (Triticum aestivum L.) can be assessed by a wide array of measurements, generally categorized into grain, milling, and baking characteristics. Samples were obtained from four regional nurseries. Selected parameters included: test weight, kernel hardness, kernel size, ke...
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg(-1) to 159 kJ kg(-1). Many significant correlations (p < 0.05) were found between the physical properties of wheat kernels and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernels.
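The closing sentence suggests a multiple linear regression from kernel properties to average particle size. A minimal sketch follows; the predictor set (hardness index, vitreousness, ash content) matches properties named in the abstract, but every number is fabricated for illustration.

```python
import numpy as np

# Hypothetical kernel properties: hardness index, vitreousness (%), ash (%).
X = np.array([
    [62.0, 85.0, 1.7],
    [48.0, 60.0, 1.5],
    [75.0, 92.0, 1.9],
    [55.0, 70.0, 1.6],
    [68.0, 88.0, 1.8],
])
y = np.array([0.42, 0.55, 0.38, 0.50, 0.40])  # average particle size (mm)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ beta
print(beta)                           # intercept and slope per property
print(np.corrcoef(pred, y)[0, 1])     # in-sample correlation
```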
Muhitch, M. J.; Felker, F. C.; Taliercio, E. W.; Chourey, P. S.
1995-01-01
The pedicel (basal maternal tissue) of maize (Zea mays L.) kernels contains a physically and kinetically unique form of glutamine synthetase (GSp1) that is involved in the conversion of transport forms of nitrogen into glutamine for uptake by the developing endosperm (M.J. Muhitch [1989] Plant Physiol 91: 868-875). A monoclonal antibody has been raised against this kernel-specific GS that does not cross-react either with a second GS isozyme found in the pedicel or with the GS isozymes from the embryo, roots, or leaves. When used as a probe for tissue printing, the antibody labeled the pedicel tissue uniformly and also labeled some of the pericarp surrounding the lower endosperm. Silver-enhanced immunogold staining of whole-kernel paraffin sections revealed the presence of GSp1 in both the vascular tissue that terminates in the pedicel and the pedicel parenchyma cells, which are located between the vascular tissue and the basal endosperm transfer cells. Light staining of the subaleurone was also noted. The tissue-specific localization of GSp1 within the pedicel is consistent with its role in the metabolism of nitrogenous transport compounds as they are unloaded from the phloem. PMID:12228400
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.
Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis
2017-10-16
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
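The SVR-with-Mercer-kernel setup above can be sketched with a precomputed spatio-temporal kernel. The separable exponential form and the length scales below are assumptions standing in for the estimated autocorrelation kernel, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(0, 10, 80),     # position along river (km)
                     rng.uniform(0, 365, 80)])   # day of year
y = np.sin(X[:, 0]) + 0.01 * X[:, 1] + rng.normal(0, 0.1, 80)

def st_kernel(A, B, ls=2.0, lt=30.0):
    """Separable exponential kernel in space and time (assumed form)."""
    ds = np.abs(A[:, [0]] - B[:, 0])
    dt = np.abs(A[:, [1]] - B[:, 1])
    return np.exp(-ds / ls - dt / lt)

model = SVR(kernel="precomputed", C=10.0).fit(st_kernel(X, X), y)
y_hat = model.predict(st_kernel(X, X))           # interpolate on the samples
print(np.mean(np.abs(y_hat - y)))                # mean absolute error
```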
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods
Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.
2017-01-01
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer’s kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333
ASIC-based architecture for the real-time computation of 2D convolution with large kernel size
NASA Astrophysics Data System (ADS)
Shao, Rui; Zhong, Sheng; Yan, Luxin
2015-12-01
Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. To improve the efficiency of on-chip storage resources and to reduce off-chip bandwidth, a data-reuse cache structure is proposed: multi-block SPRAM caches image stripes, and an on-chip ping-pong scheme takes full advantage of data reuse in the convolution calculation, yielding a new ASIC data-scheduling scheme and overall architecture. Experimental results show that the architecture achieves real-time convolution with kernels up to 40×32, improves the utilization of on-chip memory bandwidth and on-chip memory resources, satisfies the conditions for maximizing data throughput, and reduces the required off-chip memory bandwidth.
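In software terms, the data-reuse idea is to hold only the K image rows needed for the current output row in a small rolling buffer, so each pixel crosses the "off-chip" boundary once. The sketch below illustrates that scheme in NumPy; the SPRAM banking and ping-pong buffering are hardware details it does not model.

```python
import numpy as np

def conv2d_rowbuffer(image, kernel):
    """2-D convolution keeping only K padded rows 'on chip' at a time."""
    K = kernel.shape[0]                  # assume a square K x K kernel, K odd
    H, W = image.shape
    pad = K // 2
    padded = np.pad(image, pad)
    out = np.zeros_like(image, dtype=float)
    buf = padded[:K].copy()              # rolling buffer of K rows
    for r in range(H):
        for c in range(W):
            out[r, c] = np.sum(buf[:, c:c + K] * kernel)
        if r + K < padded.shape[0]:      # fetch exactly one new row per step
            buf = np.vstack([buf[1:], padded[r + K]])
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
print(conv2d_rowbuffer(img, np.ones((3, 3)) / 9.0))
```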
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2559 Size classifications. (a) The size of pistachio kernels may be specified in connection with the grade in accordance with one of...
Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies
Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike
2017-01-01
The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300
Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.
Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin
2017-01-01
The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.
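The boosting loop can be sketched in a simplified, least-squares form: each pathway contributes a genetic-similarity kernel, a kernel ridge base-learner is fitted to the current residuals for every pathway, and the pathway whose fit most reduces the loss is added with a small step size. The published method uses LKMT kernels and likelihood-based boosting for case-control outcomes, so the code below is only a structural stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_pathways = 100, 5
# Hypothetical genetic-similarity kernels, one per pathway (PSD by construction).
Ks = [(lambda G: G @ G.T / G.shape[1])(rng.normal(size=(n, 20)))
      for _ in range(n_pathways)]
y = Ks[2] @ rng.normal(size=n) * 0.1 + rng.normal(0, 0.5, n)  # pathway 2 causal

f = np.zeros(n)
nu, lam, selected = 0.1, 1.0, []
for step in range(20):
    resid = y - f
    fits = [K @ np.linalg.solve(K + lam * np.eye(n), resid) for K in Ks]
    j = int(np.argmin([np.sum((resid - fit) ** 2) for fit in fits]))
    f += nu * fits[j]                     # add the best base-learner, damped
    selected.append(j)
print(set(selected))                      # pathways picked across iterations
```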
GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.
Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin
2017-07-01
Volume reconstruction plays an important role in improving reconstructed volumetric image quality for freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of a programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill the remaining empty voxels. However, traditional pixel-nearest-neighbor hole-filling fails to reconstruct volumes with high image quality. In contrast, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound. The experimental results show that improved image quality, with speckle reduction and detail preservation, can be obtained with a kernel window size of [Formula: see text] and a kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and the volume with 50 million voxels in our experiment can be reconstructed within 10 seconds.
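A CPU sketch of the kernel-regression estimate at the core of the reconstruction: a zeroth-order (Nadaraya-Watson) estimator over scattered samples with a Gaussian kernel. The bandwidth of 1.0 follows the abstract; the window size (elided above as a formula) and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, size=(500, 3))       # scattered sample positions (mm)
vals = np.cos(pts[:, 0]) + rng.normal(0, 0.05, 500)

def nw_estimate(voxel, pts, vals, window=2.5, h=1.0):
    """Nadaraya-Watson estimate at one voxel; window size is an assumption."""
    d2 = np.sum((pts - voxel) ** 2, axis=1)
    mask = d2 < window ** 2                   # restrict to the kernel window
    w = np.exp(-d2[mask] / (2 * h ** 2))      # Gaussian weights, bandwidth h
    return np.sum(w * vals[mask]) / np.sum(w)

print(nw_estimate(np.array([5.0, 5.0, 5.0]), pts, vals))
```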
Quantum kernel applications in medicinal chemistry.
Huang, Lulu; Massa, Lou
2012-07-01
Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
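The combination rule at the heart of the KEM is usually written as follows, with E_ab the energy of the double kernel formed from kernels a and b, E_a the single-kernel energy, and n the number of kernels; this is the commonly published form, reproduced here for orientation rather than quoted from this paper.

```latex
% Kernel energy method (KEM) combination rule (standard published form):
% double-kernel energies minus the over-counted single-kernel energies.
E_{\mathrm{total}} \approx \sum_{a=1}^{n-1} \sum_{b=a+1}^{n} E_{ab}
  \;-\; (n-2) \sum_{a=1}^{n} E_{a}
```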
Aggressiveness of loose kernel smut isolate from Johnson grass on sorghum line BTx643
USDA-ARS?s Scientific Manuscript database
An isolate of loose kernel smut obtained from Johnson grass was inoculated onto six BTx643 sorghum plants in the greenhouse to determine its aggressiveness. All the BTx643 sorghum plants inoculated with the Johnson grass isolate were infected. Mean size of the teliospores from the Johnson grass, i...
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance-size applications covering binary classification, multi-class problems, and regression show that RKELM can reach a level of generalization performance competitive with SVM/LS-SVM at only a fraction of the computational effort.
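A rough sketch of the RKELM recipe under stated assumptions: draw a random subset of training points as mapping samples, build the reduced RBF kernel matrix, and solve one regularized least-squares problem, with no iterative support-vector selection. The regularized form below is a common KELM choice; the paper's exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 5))
y = np.sign(X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=1000))

m = 100                                    # randomly selected mapping samples
S = X[rng.choice(len(X), m, replace=False)]

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, S)                              # n x m reduced kernel matrix
beta = np.linalg.solve(K.T @ K + np.eye(m) / 10.0, K.T @ y)  # ridge solve
print(np.mean(np.sign(K @ beta) == y))     # training accuracy
```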
Lesion contrast and detection using sonoelastographic shear velocity imaging: preliminary results
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Parker, Kevin J.
2007-03-01
This paper assesses lesion contrast and detection using sonoelastographic shear velocity imaging. Shear wave interference patterns, termed crawling waves, for a two-phase medium were simulated assuming plane wave conditions. Shear velocity estimates were computed using a spatial autocorrelation algorithm that operates in the direction of shear wave propagation for a given kernel size. Contrast was determined by analyzing the shear velocity estimate transition between media. Experimental results were obtained using heterogeneous phantoms with spherical inclusions (5 or 10 mm in diameter) characterized by elevated shear velocities. Two vibration sources were applied to opposing phantom edges and scanned (orthogonal to shear wave propagation) with an ultrasound scanner equipped for sonoelastography. Demodulated data were saved and transferred to an external computer for processing shear velocity images. Simulation results demonstrate that the shear velocity transition between contrasting media is governed by both estimator kernel size and source vibration frequency. Experimental results from phantoms further indicate that decreasing the estimator kernel size produces a corresponding decrease in the shear velocity estimate transition between background and inclusion material, albeit with an increase in estimator noise. Overall, the results demonstrate the ability to generate high-contrast shear velocity images using sonoelastographic techniques and to detect millimeter-sized lesions.
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all other methods. However, for smaller sample sizes (25 to 100), either biased cross-validation or Silverman's rule of thumb is recommended, and for larger sample sizes the adaptive kernel estimator with SJDP is recommended. Information is provided about implementing the recommendations in the R computing language.
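Two of the rules of thumb compared above are simple enough to state concretely. This sketch computes the normal rule of thumb and Silverman's rule on a bimodal sample and feeds each bandwidth to a Gaussian KDE; the Sheather-Jones plug-in requires a dedicated solver and is omitted here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
n, sigma = len(x), x.std(ddof=1)
iqr = np.subtract(*np.percentile(x, [75, 25]))

h_normal = 1.06 * sigma * n ** (-1 / 5)                      # normal rule of thumb
h_silverman = 0.9 * min(sigma, iqr / 1.34) * n ** (-1 / 5)   # Silverman's rule

grid = np.linspace(-4, 4, 200)
for name, h in [("normal", h_normal), ("silverman", h_silverman)]:
    kde = gaussian_kde(x, bw_method=h / sigma)  # scipy scales by the data std
    print(name, round(h, 3), round(float(kde(grid).max()), 3))
```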
Comparisons of geoid models over Alaska computed with different Stokes' kernel modifications
NASA Astrophysics Data System (ADS)
Li, X.; Wang, Y.
2011-01-01
Various Stokes kernel modification methods have been developed over the years. The goal of this paper is to test the most commonly used Stokes kernel modifications numerically, using Alaska as a test area and EGM08 as a reference model. The tests show that some methods are more sensitive than others to the integration cap size. For instance, using the methods of Vaníček and Kleusberg or Featherstone et al. with kernel modification at degree 60, the geoid decreases by 30 cm (on average) when the cap size increases from 1° to 25°. The corresponding changes for the methods of Wong and Gore and of Heck and Grüninger are only at the 1 cm level. At high modification degrees, above 360, the methods of Vaníček and Kleusberg and of Featherstone et al. become unstable because of numerical problems in the modification coefficients; similar conclusions were reported by Featherstone (2003). In contrast, the methods of Wong and Gore and of Heck and Grüninger and the least-squares spectral combination are stable at any modification degree, though they do not provide as good a fit as the best case of the Molodenskii-type methods at the GPS/leveling benchmarks. However, certain tests for choosing the cap size and modification degree have to be performed in advance to avoid abrupt mean geoid changes if the latter methods are applied.
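For orientation, the Wong and Gore type of modification discussed above simply removes the low-degree terms of the spherical Stokes kernel up to the modification degree L. The standard textbook form is sketched below; it is included as background and is not quoted from the paper.

```latex
% Spherical Stokes kernel and its Wong-Gore modification to degree L.
S(\psi) = \sum_{n=2}^{\infty} \frac{2n+1}{n-1}\, P_n(\cos\psi), \qquad
S^{\mathrm{WG}}(\psi) = S(\psi) - \sum_{n=2}^{L} \frac{2n+1}{n-1}\, P_n(\cos\psi)
```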
Kandianis, Catherine B.; Michenfelder, Abigail S.; Simmons, Susan J.; Grusak, Michael A.; Stapleton, Ann E.
2013-01-01
The improvement of grain nutrient profiles for essential minerals and vitamins through breeding strategies is a target important for agricultural regions where nutrient poor crops like maize contribute a large proportion of the daily caloric intake. Kernel iron concentration in maize exhibits a broad range. However, the magnitude of genotype by environment (GxE) effects on this trait reduces the efficacy and predictability of selection programs, particularly when challenged with abiotic stress such as water and nitrogen limitations. Selection has also been limited by an inverse correlation between kernel iron concentration and the yield component of kernel size in target environments. Using 25 maize inbred lines for which extensive genome sequence data is publicly available, we evaluated the response of kernel iron density and kernel mass to water and nitrogen limitation in a managed field stress experiment using a factorial design. To further understand GxE interactions we used partition analysis to characterize response of kernel iron and weight to abiotic stressors among all genotypes, and observed two patterns: one characterized by higher kernel iron concentrations in control over stress conditions, and another with higher kernel iron concentration under drought and combined stress conditions. Breeding efforts for this nutritional trait could exploit these complementary responses through combinations of favorable allelic variation from these already well-characterized genetic stocks. PMID:24363659
USDA-ARS?s Scientific Manuscript database
Wheat powdery mildew is an economically important disease in cool and humid environments. Powdery mildew causes yield losses as high as 48 percent through a reduction in tiller survival, kernels per head, and kernel size. Race-specific host resistance is the most consistent, environmentally fri...
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul
2007-01-01
The purpose of this study was to empirically evaluate the impact of loglinear presmoothing accuracy on equating bias and variability across chained and post-stratification equating methods, kernel and percentile-rank continuization methods, and sample sizes. The results of evaluating presmoothing on equating accuracy generally agreed with those of…
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties spatially according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
7 CFR 51.2284 - Size classification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classification. 51.2284 Section 51.2284...) Size Requirements § 51.2284 Size classification. The following classifications are provided to describe... of kernels in the lot shall conform to the requirements of the specified classification as defined...
7 CFR 51.2284 - Size classification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Size classification. 51.2284 Section 51.2284...) Size Requirements § 51.2284 Size classification. The following classifications are provided to describe... of kernels in the lot shall conform to the requirements of the specified classification as defined...
Spectral degree of polarization uniformity for polarization-sensitive OCT
NASA Astrophysics Data System (ADS)
Baumann, Bernhard; Zotter, Stefan; Pircher, Michael; Götzinger, Erich; Rauscher, Sabine; Glösmann, Martin; Lammer, Jan; Schmidt-Erfurth, Ursula; Gröger, Marion; Hitzenberger, Christoph K.
2015-12-01
Depolarization of light can be measured by polarization-sensitive optical coherence tomography (PS-OCT) and has been used to improve tissue discrimination as well as segmentation of pigmented structures. Most approaches to depolarization assessment for PS-OCT - such as the degree of polarization uniformity (DOPU) - rely on measuring the uniformity of polarization states using spatial evaluation kernels. In this article, we present a different approach which exploits the spectral dimension. We introduce the spectral DOPU for the pixelwise analysis of polarization state variations between sub-bands of the broadband light source spectrum. Alongside a comparison with conventional spatial and temporal DOPU algorithms, we demonstrate imaging in the healthy human retina, and apply the technique for contrasting hard exudates in diabetic retinopathy and investigating the pigment epithelium of the rat iris.
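As background, the conventional spatial-kernel DOPU that the spectral variant generalizes is usually computed by averaging the normalized Stokes components over the evaluation kernel and recombining them; the standard form from the PS-OCT literature (not necessarily this paper's notation) is:

```latex
% Conventional DOPU over a spatial kernel: angle brackets denote the kernel
% average of the per-pixel normalized Stokes components.
\mathrm{DOPU} = \sqrt{\langle Q \rangle^{2} + \langle U \rangle^{2} + \langle V \rangle^{2}}
```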
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low-quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability is affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling, and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to higher predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
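One concrete way to realize "uniform coverage of the genetic space" is farthest-point sampling in a genotype PCA space, sketched below. The authors' actual sampling protocol may differ, and the marker matrix here is random.

```python
import numpy as np

rng = np.random.default_rng(9)
markers = rng.integers(0, 3, size=(500, 200)).astype(float)  # 500 genotypes

# Project genotypes onto the leading principal components.
Xc = markers - markers.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = U[:, :5] * s[:5]

def farthest_point_sample(P, k):
    """Greedily pick k points, each maximizing distance to those chosen."""
    idx = [0]
    d = np.linalg.norm(P - P[0], axis=1)
    for _ in range(k - 1):
        idx.append(int(d.argmax()))
        d = np.minimum(d, np.linalg.norm(P - P[idx[-1]], axis=1))
    return idx

training_set = farthest_point_sample(pcs, 50)
print(len(set(training_set)))   # 50 distinct, well-spread genotypes
```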
Indetermination of particle sizing by laser diffraction in the anomalous size ranges
NASA Astrophysics Data System (ADS)
Pan, Linchao; Ge, Baozhen; Zhang, Fugen
2017-09-01
The laser diffraction method is widely used to measure particle size distributions. It is generally accepted that, with increasing particle size, the scattering angle becomes smaller and the main peak of the scattered energy distribution in laser diffraction instruments shifts to smaller angles. This principle forms the foundation of the laser diffraction method. However, it is not entirely correct for non-absorbing particles in certain size ranges, which are called anomalous size ranges. Here, we derive analytical formulae for the bounds of the anomalous size ranges and discuss the influence of the width of the size segments on the signature of the Mie scattering kernel. This anomalous signature of the Mie scattering kernel results in an indetermination of the particle size distribution when it is measured by laser diffraction instruments in the anomalous size ranges. Using the singular-value decomposition method, we explain in detail how this indetermination arises and then validate its existence with inversion simulations.
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.2559 Section 51.2559... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2559 Size classifications. (a... the following size classifications. (1) Jumbo Whole Kernels: 80 percent or more by weight shall be...
7 CFR 51.2559 - Size classifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classifications. 51.2559 Section 51.2559... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2559 Size classifications. (a... the following size classifications. (1) Jumbo Whole Kernels: 80 percent or more by weight shall be...
NASA Astrophysics Data System (ADS)
Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei
2018-05-01
A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
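The Tikhonov step with a second-order difference matrix can be sketched directly. The forward matrix A below is a generic smoothing kernel standing in for the multi-wavelength Mie kernel, and the regularization weight is an assumption.

```python
import numpy as np

n = 60
x = np.linspace(0.1, 10, n)                      # particle diameter grid (um)
true_psd = np.exp(-0.5 * ((x - 3) / 0.8) ** 2)   # illustrative PSD

# Stand-in forward model: broad response kernels in place of Mie kernels.
A = np.exp(-0.5 * ((x[None, :] - x[:, None]) / 1.5) ** 2)
b = A @ true_psd + np.random.default_rng(6).normal(0, 1e-3, n)

# Second-order difference matrix L of shape (n-2, n).
L = np.zeros((n - 2, n))
for i in range(n - 2):
    L[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 1e-3                                       # regularization weight (assumed)
psd = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
print(np.max(np.abs(psd - true_psd)))            # reconstruction error
```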
NASA Astrophysics Data System (ADS)
Chang, Jessie S. L.; Chan, Y. S.; Law, M. C.; Leo, C. P.
2017-07-01
The implementation of microwave technology in palm oil processing offers numerous advantages: besides eliminating polluted palm oil mill effluent, it also reduces energy consumption, processing time, and space. However, microwave exposure can damage a material's microstructure, which affects fruit quality through its physical structure, including texture and appearance. In this work, empty fruit bunches, mesocarp, and kernels were microwave-dried and their respective microstructures examined. The microwave pretreatments were conducted at 100 W and 200 W, and the microstructures of both treated and untreated samples were evaluated using a scanning electron microscope. The micrographs demonstrated that microwaves do not significantly affect the kernel and mesocarp, but a noticeable change was found in the empty fruit bunches, where the sizes of the granular starch were reduced and a small portion of the silica bodies were disrupted. From the experimental data, microwave irradiation was shown to be most efficiently applied to empty fruit bunches, followed by mesocarp and kernel, as significant weight loss and size reduction were observed after the microwave treatments. The current work showed that microwave treatment did not change the physical surfaces of the samples, although sample shrinkage was observed.
Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms. PMID:25258726
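In the spirit of the uniformity measure described above, a minimal sketch: treat the ring of exhaust-gas temperatures as a normalized distribution and compute its Shannon entropy, which is maximal when the temperatures are uniform. The normalization used here is an assumption, not the paper's exact generalization.

```python
import numpy as np

def exhaust_entropy(temps):
    """Shannon entropy of the normalized exhaust-temperature distribution."""
    p = np.asarray(temps, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p))

healthy = [610, 612, 608, 611, 609, 613]   # deg C, illustrative readings
faulty = [610, 612, 540, 611, 609, 613]    # one cold sector in the gas path
print(exhaust_entropy(healthy), np.log(6)) # close to the uniform maximum ln(6)
print(exhaust_entropy(faulty))             # lower entropy: less uniform
```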
Leveraging the Cloud for Integrated Network Experimentation
2014-03-01
kernel settings, or any of the low-level subcomponents. 3. Scalable Solutions: Businesses can build scalable solutions for their clients, ranging from...values. These values can assume several distributions that include normal, Pareto, uniform, exponential and Poisson, among others [21]. Additionally, D...communication, the web client establishes a connection to the server before traffic begins to flow. Web servers do not initiate connections to clients in
Optimisation of quantitative lung SPECT applied to mild COPD: a software phantom simulation study.
Norberg, Pernilla; Olsson, Anna; Alm Carlsson, Gudrun; Sandborg, Michael; Gustafsson, Agnetha
2015-01-01
The amount of inhomogeneity in a (99m)Tc Technegas single-photon emission computed tomography (SPECT) lung image, caused by reduced ventilation in lung regions affected by chronic obstructive pulmonary disease (COPD), is correlated with disease advancement. A quantitative analysis method measuring these inhomogeneities, the CVT method, was proposed in earlier work. Detecting mild COPD is a difficult task and requires optimised parameter values. In this work, the CVT method was optimised with respect to the parameter values of acquisition, reconstruction, and analysis. The ordered subset expectation maximisation (OSEM) algorithm was used for reconstructing the lung SPECT images. As a first step towards clinical application of the CVT method in detecting mild COPD, this study was based on simulated SPECT images of an advanced anthropomorphic lung software phantom including respiratory and cardiac motion, where the mild COPD lung had an overall ventilation reduction of 5%. The best separation between healthy and mild COPD lung images, as determined using the CVT measure of ventilation inhomogeneity and 125 MBq of (99m)Tc, was obtained using a low-energy high-resolution (LEHR) collimator and a power-6 Butterworth post-filter with a cutoff frequency of 0.6 to 0.7 cm(-1). Sixty-four reconstruction updates and a small kernel size should be used when the whole lung is analysed; for the reduced lung, a greater number of updates and a larger kernel size are needed. A LEHR collimator and 125 MBq of (99m)Tc, together with an optimal combination of cutoff frequency, number of updates, and kernel size, gave the best result. Suboptimal selection of the cutoff frequency, number of updates, or kernel size will reduce the imaging system's ability to detect mild COPD in the lung phantom.
Rapid scatter estimation for CBCT using the Boltzmann transport equation
NASA Astrophysics Data System (ADS)
Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh
2014-03-01
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity, and CT number accuracy. One means of estimating and correcting for detected scatter is an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8% and significantly improved the visualization of low-contrast objects. In total, 24 projections were simulated, with an AcurosCTS execution time of 22 sec/projection on an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
NASA Astrophysics Data System (ADS)
Feng, Guang; Li, Hengjian; Dong, Jiwen; Chen, Xi; Yang, Huiru
2018-04-01
In this paper, we propose joint and collaborative representation with Volterra kernel convolution features (JCRVK) for face recognition. First, the candidate face images are divided into sub-blocks of equal size. Features are extracted from the blocks using two-dimensional Volterra kernel discriminant analysis, which can better capture discriminative information from different faces. Next, the proposed joint and collaborative representation is employed to optimize and classify the local Volterra kernel features (JCR-VK) individually. JCR-VK is very efficient because its implementation depends only on matrix multiplication. Finally, recognition is completed using the majority voting principle. Extensive experiments on the Extended Yale B and AR face databases show that the proposed approach can outperform other recently presented similar dictionary algorithms in recognition accuracy.
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
An automatic optimum kernel-size selection technique for edge enhancement
Chavez, Pat S.; Bauer, Brian P.
1982-01-01
Edge enhancement is a technique that can be considered, to a first order, a correction for the modulation transfer function of an imaging system. Digital imaging systems sample a continuous function at discrete intervals so that high-frequency information cannot be recorded at the same precision as lower frequency data. Because of this, fine detail or edge information in digital images is lost. Spatial filtering techniques can be used to enhance the fine detail information that does exist in the digital image, but the filter size is dependent on the type of area being processed. A technique has been developed by the authors that uses the horizontal first difference to automatically select the optimum kernel-size that should be used to enhance the edges that are contained in the image.
Spectral analysis of pair-correlation bandwidth: application to cell biology images.
Binder, Benjamin J; Simpson, Matthew J
2015-02-01
Images from cell biology experiments often indicate the presence of cell clustering, which can provide insight into the mechanisms driving the collective cell behaviour. Pair-correlation functions provide quantitative information about the presence, or absence, of clustering in a spatial distribution of cells. This is because the pair-correlation function describes the ratio of the abundance of pairs of cells, separated by a particular distance, relative to a randomly distributed reference population. Pair-correlation functions are often presented as a kernel density estimate where the frequency of pairs of objects are grouped using a particular bandwidth (or bin width), Δ>0. The choice of bandwidth has a dramatic impact: choosing Δ too large produces a pair-correlation function that contains insufficient information, whereas choosing Δ too small produces a pair-correlation signal dominated by fluctuations. Presently, there is little guidance available regarding how to make an objective choice of Δ. We present a new technique to choose Δ by analysing the power spectrum of the discrete Fourier transform of the pair-correlation function. Using synthetic simulation data, we confirm that our approach allows us to objectively choose Δ such that the appropriately binned pair-correlation function captures known features in uniform and clustered synthetic images. We also apply our technique to images from two different cell biology assays. The first assay corresponds to an approximately uniform distribution of cells, while the second assay involves a time series of images of a cell population which forms aggregates over time. The appropriately binned pair-correlation function allows us to make quantitative inferences about the average aggregate size, as well as quantifying how the average aggregate size changes with time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacko, M; Aldoohan, S
Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD uses vendor-specific methods and phantoms, typically by subjectively observing the smallest object at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD from a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal Hounsfield Units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and tissue/organ type strongly influenced the background noise characteristics and, therefore, the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze any scanner's LCD performance. Furthermore, the method can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical CT images.
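A minimal sketch of the statistical LCD idea under stated assumptions: sample the mean HU of n randomly placed ROIs of one virtual object size in a uniform image, then report the smallest HU offset distinguishable from background at 95% confidence via the Student's t quantile. The paper's exact sampling scheme may differ.

```python
import numpy as np
from scipy.stats import t

def lcd_hu(roi_means, confidence=0.95):
    """Smallest HU difference detectable at the given confidence level."""
    roi_means = np.asarray(roi_means, dtype=float)
    n = roi_means.size
    s = roi_means.std(ddof=1)
    return t.ppf(confidence, n - 1) * s / np.sqrt(n)

rng = np.random.default_rng(7)
# Hypothetical mean HU of 50 randomly placed ROIs for one object size.
roi_means = rng.normal(0.0, 4.0, 50)
print(lcd_hu(roi_means))
```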
Speeding Up the Bilateral Filter: A Joint Acceleration Way.
Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng
2016-06-01
Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy or efficiency problems, previous algorithm designers typically adopted only one of them when assembling fast implementations, because of the difficulty of combining them. Hence, no joint exploitation of these techniques has been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques (kernel truncation, best N-term approximation, and the previous 2D box filtering, dimension promotion, and shiftability property), we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon one another's strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing run-time efficiency.
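For contrast with the constant-time designs discussed above, here is the brute-force bilateral filter whose cost grows with the kernel radius; Gaussian spatial and range kernels are assumed.

```python
import numpy as np

def bilateral_brute_force(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """O(radius^2) work per pixel: the baseline constant-time methods avoid."""
    H, W = img.shape
    out = np.zeros_like(img)
    ax = np.arange(-radius, radius + 1)
    dy, dx = np.meshgrid(ax, ax, indexing="ij")
    spatial = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge")
    for r in range(H):
        for c in range(W):
            patch = pad[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            w = spatial * np.exp(-(patch - img[r, c])**2 / (2 * sigma_r**2))
            out[r, c] = np.sum(w * patch) / np.sum(w)
    return out

img = np.random.default_rng(10).random((32, 32))
print(bilateral_brute_force(img).shape)
```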
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM addresses classification and regression problems using linear or nonlinear kernels. However, SVM has a weakness: it is difficult to determine optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick uses various kernel functions, such as the linear, polynomial, radial basis function (RBF), and sigmoid kernels. Each function has parameters that affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is applied to search for optimal parameter values, thereby improving the best classification accuracy of the SVM. Data were taken from the UCI machine learning repository (Australian Credit Approval). The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved over the baseline kernels (linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, sigmoid: 78.70%). However, for larger datasets this method is impractical because of its long run time.
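A toy version of the GA-over-SVM-parameters loop: evolve (log2 C, log2 γ) against cross-validated accuracy. The population size, mutation scale, search bounds, and the synthetic dataset are illustrative assumptions; the paper used the Australian Credit Approval data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X, y = make_classification(n_samples=300, n_features=14, random_state=0)

def fitness(ind):
    """Cross-validated accuracy of an RBF SVM at (log2 C, log2 gamma)."""
    C, gamma = 2.0 ** ind[0], 2.0 ** ind[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = rng.uniform([-5, -15], [15, 3], size=(12, 2))    # log2-space bounds
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]             # truncation selection
    children = parents[rng.integers(0, 6, 6)] + rng.normal(0, 0.5, (6, 2))
    pop = np.vstack([parents, children])               # elitism + mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("C = %.3g, gamma = %.3g" % (2.0 ** best[0], 2.0 ** best[1]))
```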
NASA Astrophysics Data System (ADS)
Wang, Y.
2012-01-01
The authors thank Professor Sjöberg for his interest in our paper. The main goal of the paper was to test kernel modification methods used in geoid computations. Our tests found that Vaníček/Kleusberg's and Featherstone's methods fit the GPS/leveling data best in the relative sense at various cap sizes. At the same time, we also pointed out that their methods are unstable and that the mean values change from decimeters to meters just by changing the cap size. By contrast, modifications of the Wong and Gore type (including the spectral combination and the method of Heck and Grüninger) are stable and insensitive to the truncation degree and cap size. This feature is especially useful when we know the accuracy of the gravity field in different frequency bands. For instance, it is advisable to truncate Stokes' kernel at a degree up to which the satellite model is believed to be more accurate than the surface data. The method of the Wong and Gore type does this job quite well. In contrast, the low degrees of Stokes' kernel are modified by Molodensky's coefficients
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small samples sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
NASA Technical Reports Server (NTRS)
Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noël, Stefan
2011-01-01
The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The kernel linearity is tested. Errors in the kernel due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to different climate parameter changes in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both model and data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean compared to observations are about 0.001, and the sampling error is likely a major component.
Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P
2017-01-01
Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
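The sliding-window Fourier idea behind the first algorithm is easy to sketch: within each window along the cob axis, the dominant spatial frequency of the intensity profile gives the average kernel pitch. The sketch below is illustrative only and not the authors' pipeline; the window length, step, and pixel scale are assumed parameters.

```python
import numpy as np

def kernel_pitch(profile, window=256, step=64, px_per_mm=10.0):
    """Estimate average kernel spacing (mm) along a cob-axis intensity
    profile via a sliding-window Fourier transform.

    profile: 1-D array of image intensity sampled along the cob axis
             (assumed at least `window` samples long).
    """
    pitches = []
    for start in range(0, len(profile) - window + 1, step):
        seg = profile[start:start + window]
        seg = seg - seg.mean()                     # remove the DC component
        spec = np.abs(np.fft.rfft(seg * np.hanning(window)))
        freqs = np.fft.rfftfreq(window, d=1.0)     # cycles per pixel
        k = spec[1:].argmax() + 1                  # dominant nonzero frequency
        pitches.append(1.0 / freqs[k] / px_per_mm) # period in pixels -> mm
    return float(np.median(pitches))
```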
Single image super-resolution based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia
2018-03-01
We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers with kernel sizes of 5×5, 3×3, and 1×1. In the proposed network, we use residual learning and combine convolution kernels of different sizes within the same layer. Experimental results show that the proposed method outperforms existing methods on benchmark images in both reconstruction quality metrics and visual quality.
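The two ingredients named in the abstract, residual learning and mixed kernel sizes within one layer, can be sketched in PyTorch as follows. The channel counts and depth are illustrative assumptions, not the authors' exact five-layer configuration.

```python
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    """Concatenate 5x5, 3x3, and 1x1 convolutions of the same input."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c = c_out // 3
        self.k5 = nn.Conv2d(c_in, c, 5, padding=2)
        self.k3 = nn.Conv2d(c_in, c, 3, padding=1)
        self.k1 = nn.Conv2d(c_in, c_out - 2 * c, 1)

    def forward(self, x):
        return torch.cat([self.k5(x), self.k3(x), self.k1(x)], dim=1)

class ResidualSR(nn.Module):
    """Takes a bicubic-upsampled LR image; the network learns the residual."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            MultiKernelBlock(1, channels), nn.ReLU(inplace=True),
            MultiKernelBlock(channels, channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # residual learning
```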
Double Ramp Loss Based Reject Option Classifier
2015-05-22
choose 10% of these points uniformly at random and flip their labels. 2. Ionosphere Dataset [2]: This dataset describes the problem of discriminating... good versus bad radar returns based on whether they send back useful information about the ionosphere. There are 34 variables and 351 observations. 3... [table fragment: results on the Ionosphere dataset (nonlinear classifiers using an RBF kernel for both approaches), comparing LDR (C = 2, γ = 0.125) against LDH (C = 16, γ = 0.125) on risk, rejection rate, and accuracy on unrejected points; truncated]
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby making it suitable for kernel function implementation. By varying bias... cost function/constraint variables are generated based on an inverse transform of the CDF. In Fig. 5, F⁻¹(u) for a uniformly distributed random number u ∈ [0, 1]... extracts random samples of x distributed with CDF F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate the inverse
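The sampling step described above is standard inverse transform sampling: uniform random numbers u ∈ [0, 1] are pushed through F⁻¹ to obtain samples with CDF F. Below is a minimal software analogue of what the circuit computes, with an exponential CDF as a stand-in example.

```python
import numpy as np

def inverse_transform_samples(inv_cdf, n, rng=None):
    """Draw n samples of X with CDF F by evaluating F^-1(u), u ~ U[0, 1]."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(0.0, 1.0, size=n)
    return inv_cdf(u)

# Example target: exponential distribution, F(x) = 1 - exp(-lam * x),
# so F^-1(u) = -ln(1 - u) / lam.
lam = 2.0
x = inverse_transform_samples(lambda u: -np.log1p(-u) / lam, 10_000)
print(x.mean())   # should be close to 1/lam = 0.5
```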
Agile convolutional neural network for pulmonary nodule classification using CT images.
Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei
2018-04-01
Distinguishing benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to overcome the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed by combining the layer settings of LeNet with the parameter settings of AlexNet. A dataset of 743 CT image nodule samples is built from the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. By adjusting the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initialization. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that the proposed CNN framework and the parameter optimization strategy are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
Extraction process of palm kernel cake as a source of mannan for feed additive on poultry diet
NASA Astrophysics Data System (ADS)
Tafsin, M.; Hanafi, N. D.; Yusraini, E.
2017-05-01
Palm kernel cake (PKC) is a by-product of palm kernel oil extraction and is available in large quantities in Indonesia. The inclusion of PKC in poultry diets is limited by nutritional problems such as its anti-nutritional properties (mannan). On the other hand, mannan-containing polysaccharides play various biological roles, particularly in enhancing the immune response and controlling pathogens in poultry. The research objective was to establish an extraction process for PKC; the work was conducted at the Animal Nutrition and Feed Science Laboratory, Agricultural Faculty, University of Sumatera Utara. Various extraction methods were used in this experiment, including fraction analysis with seven sieve sizes, followed by water and acetic acid extraction. The results indicated that PKC had different particle sizes according to sieve size and was dominated by the 850 µm particle size. The analysis of sugar content indicated that each particle size behaved differently under hot water extraction. Particles of 180-850 µm had a higher sugar content than coarse PKC (2000-3000 µm). The total sugar recovered varied between 0.9% and 3.2% of the PKC extracted. Grinding followed by hot water extraction (100-120 °C, 1 h) increased the total sugar content relative to the previous treatments, reaching 8% of the PKC extracted. The use of acetic acid decreased the total sugar extracted from PKC. It is concluded that treatment at high temperature (110-120 °C) for 1 h gives the highest yield of sugar extracted from PKC.
A DSP-based neural network non-uniformity correction algorithm for IRFPA
NASA Astrophysics Data System (ADS)
Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu
2009-07-01
An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm. A design of a DSP-based NUC development platform for IRFPA is described. The DSP hardware platform has low power consumption, with a 32-bit fixed-point DSP, the TMS320DM643, as the kernel processor. The dependability and expansibility of the software are improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration parameter update runs at a lower task priority than video input and output in DSP/BIOS, so that updating the calibration parameters does not affect the video streams. The work flow of the system and the strategy for real-time operation are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
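Neural-network SBNUC algorithms of this family typically apply a per-pixel gain/offset correction and adapt it with an LMS rule against a local spatial mean. The sketch below shows that idea only; the learning rate, window size, and use of a box mean as the desired output are assumptions, not the algorithm as implemented on the DSP platform.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sbnuc_lms(frames, lr=0.01, win=3):
    """Scene-based non-uniformity correction with an LMS update.

    frames: sequence of 2-D IRFPA frames; returns the corrected frames
    plus the final per-pixel gain and offset estimates.
    """
    g = np.ones_like(frames[0], dtype=float)    # per-pixel gain
    o = np.zeros_like(frames[0], dtype=float)   # per-pixel offset
    corrected = []
    for raw in frames:
        y = g * raw + o                         # corrected frame
        target = uniform_filter(y, size=win)    # local mean = desired output
        err = y - target                        # fixed-pattern residual
        g -= lr * err * raw                     # LMS gradient steps
        o -= lr * err
        corrected.append(y)
    return corrected, g, o
```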
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1976-01-01
The theory, results and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the method is the kernel function. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks. For cases with imbedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
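The kernel estimate of the conditional non-missingness probability that drives the weighting can be illustrated with a Nadaraya-Watson smoother; the Gaussian kernel and fixed bandwidth below are illustrative choices, and the article's full survival-function estimator is not reproduced.

```python
import numpy as np

def nonmissing_prob(x_grid, x, observed, bandwidth):
    """Nadaraya-Watson estimate of P(censoring indicator non-missing | X = x).

    x: covariate values (e.g., observed times); observed: 0/1 indicator of
    whether the censoring indicator is non-missing for each subject.
    """
    x_grid = np.atleast_1d(x_grid).astype(float)
    u = (x_grid[:, None] - np.asarray(x, float)[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)                     # Gaussian kernel weights
    return (w * observed).sum(axis=1) / w.sum(axis=1)

# Inverse-probability-of-non-missingness weights for the complete cases
# are then w_i = 1 / pi_hat(X_i) wherever the indicator is observed.
```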
Effects of Japanese beetle (Coleoptera: Scarabaeidae) and silk clipping in field corn.
Steckel, Sandy; Stewart, S D; Tindall, K V
2013-10-01
Japanese beetle (Popillia japonica Newman) is an emerging silk-feeding insect found in fields in the lower Corn Belt and Midsouthern United States. Studies were conducted in 2010 and 2011 to evaluate how silk clipping in corn affects pollination and yield parameters. Manually clipping silks once daily had modest effects on yield parameters. Sustained clipping by either manually clipping silks three times per day or by caging Japanese beetles onto ears affected total kernel weight if it occurred during early silking (R1 growth stage). Manually clipping silks three times per day for the first 5 d of silking affected the number of kernels per ear, total kernel weight, and the weight of individual kernels. Caged beetles fed on silks and, depending on the number of beetles caged per ear, reduced the number of kernels per ear. Caging eight beetles per ear significantly reduced total kernel weight compared with noninfested ears. Drought stress before anthesis appeared to magnify the impact of silk clipping by Japanese beetles. There was evidence of some compensation for reduced pollination by increasing the size of pollinated kernels within the ear. Our results showed that it requires sustained silk clipping during the first week of silking to have substantial impacts on pollination and yield parameters, at least under good growing conditions. Some states recommend treating for Japanese beetle when three Japanese beetles per ear are found, silks are clipped to < 13 mm, and pollination is < 50% complete, and that recommendation appears to be adequate.
Size and moisture distribution characteristics of walnuts and their components
USDA-ARS?s Scientific Manuscript database
The objective of this study was to determine the size characteristics and moisture content (MC) distributions of individual walnuts and their components, including hulls, shells and kernels under different harvest conditions. Measurements were carried out for three walnut varieties, Tulare, Howard a...
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
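A concrete form of low-rank kernel approximation with natural out-of-sample extension is the Nyström method; whether the letter uses exactly this construction is an assumption, but the sketch conveys why the approach scales: only kernel values against m landmark points are ever needed, including for new points.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix between row-vector datasets a (n, d) and b (m, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(x, m=100, gamma=1.0, rng=None):
    """Rank-m approximation K ~= C @ W_pinv @ C.T of the n x n kernel
    matrix, built from m randomly chosen landmark points."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(x), size=m, replace=False)
    C = rbf(x, x[idx], gamma)       # n x m: kernel against landmarks only
    W = C[idx]                      # m x m landmark block
    return C, np.linalg.pinv(W)

# A new point x* only needs rbf(x*, landmarks), which is what gives the
# natural out-of-sample generalization mentioned in the abstract.
```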
Vokoun, Jason C.; Rabeni, Charles F.
2005-01-01
Flathead catfish Pylodictis olivaris were radio-tracked in the Grand River and Cuivre River, Missouri, from late July until they moved to overwintering habitats in late October. Fish moved within a definable area, and although occasional long-distance movements occurred, the fish typically returned to the previously occupied area. Seasonal home range was calculated with the use of kernel density estimation, which can be interpreted as a probabilistic utilization distribution that documents the internal structure of the estimate by delineating portions of the range that were used a specified percentage of the time. A traditional linear range also was reported. Most flathead catfish (89%) had one 50% kernel-estimated core area, whereas 11% of the fish split their time between two core areas. Core areas were typically in the middle of the 90% kernel-estimated home range (58%), although several had core areas in upstream (26%) and downstream (16%) portions of the home range. Home-range size did not differ based on river, sex, or size and was highly variable among individuals. The median 95% kernel estimate was 1,085 m (range, 70–69,090 m) for all fish. The median 50% kernel-estimated core area was 135 m (10–2,260 m). The median linear range was 3,510 m (150–50,400 m). Fish pairs with core areas in the same and neighboring pools had static joint space use values of up to 49% (area of intersection index), indicating substantial overlap and use of the same area. However, all fish pairs had low dynamic joint space use values (<0.07; coefficient of association), indicating that fish pairs were temporally segregated, rarely occurring in the same location at the same time.
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
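A minimal version of the fixed-kernel home-range calculation used throughout these telemetry studies is sketched below: fit a 2-D KDE to the location fixes and find the density threshold whose contour encloses a chosen isopleth (e.g., 95%) of the utilization distribution. The plug-in bandwidth of gaussian_kde is an assumption; home-range software typically uses least-squares cross-validation or reference bandwidths instead.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_home_range_threshold(locs, isopleth=0.95):
    """Density threshold whose contour encloses `isopleth` of the
    utilization distribution, estimated at the observed locations.

    locs: (n, 2) array of x/y positions (fixes).
    """
    kde = gaussian_kde(locs.T)
    dens = kde(locs.T)                   # density at each fix
    order = np.sort(dens)                # low to high
    # Drop the lowest-density (1 - isopleth) fraction of fixes.
    k = int(np.floor((1.0 - isopleth) * len(dens)))
    return kde, order[k]

# Home-range area can then be estimated by evaluating kde on a grid and
# summing the area of cells whose density exceeds the threshold.
```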
Morris, Craig F; Beecher, Brian S
2012-07-01
Kernel vitreosity is an important trait of wheat grain, but its developmental control is not completely known. We developed back-cross seven (BC7) near-isogenic lines in the soft white spring wheat cultivar Alpowa that lack the distal portion of the chromosome 5D short arm. From the final back-cross, 46 BC7F2 plants were isolated. These plants exhibited a complete and perfect association between kernel vitreosity (i.e. vitreous, non-vitreous or mixed) and Single Kernel Characterization System (SKCS) hardness. Observed segregation of 10:28:7 fit a 1:2:1 Chi-square. BC7F2 plants classified as heterozygous for both SKCS hardness and kernel vitreosity (n = 29) were selected; from each, a single vitreous and a single non-vitreous kernel were selected, grown to maturity, and subjected to SKCS analysis. The resultant phenotypic ratios were, from non-vitreous kernels, 23:6:0, and from vitreous kernels, 0:1:28, soft:heterozygous:hard, respectively. Three of these BC7F2 heterozygous plants were selected and 40 kernels each drawn at random, grown to maturity and subjected to SKCS analysis. Phenotypic segregation ratios were 7:27:6, 11:20:9, and 3:28:9, soft:heterozygous:hard. Chi-square analysis supported a 1:2:1 segregation for one plant but not the other two, in which cases the two homozygous classes were under-represented. Twenty-two paired BC7F2:F3 full sibs were compared for kernel hardness, weight, size, density and protein content. SKCS hardness index differed markedly: 29.4 for the lines with a complete 5DS, and 88.6 for the lines possessing the deletion. The soft non-vitreous kernels were on average significantly heavier, by nearly 20%, and were slightly larger. Density and protein contents were similar, however. The results provide strong genetic evidence that gene(s) on distal 5DS control not only kernel hardness but also the manner in which the endosperm develops, viz. whether it is vitreous or non-vitreous.
7 CFR 52.1007 - Uniformity of size.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Uniformity of size. 52.1007 Section 52.1007... PROCESSED FOOD PRODUCTS 1 United States Standards for Grades of Dates Factors of Quality § 52.1007... uniform in size may be given a score of 9 or 10 points. “Practically uniform in size” means that not more...
Efficient nonparametric n-body force fields from machine learning
NASA Astrophysics Data System (ADS)
Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro
2018-05-01
We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.
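The flavor of the lowest-order kernels in this series can be conveyed with a two-body example that compares two atomic environments through their interatomic distance sets; the squared-exponential form echoes the "universal approximator" kernel mentioned above, but this is a simplified sketch, not the authors' symmetrized kernels.

```python
import numpy as np

def pair_distances(pos):
    """All interatomic distances in one configuration: (n, 3) -> (n*(n-1)/2,)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    return d[iu]

def two_body_kernel(pos_a, pos_b, sigma=0.5):
    """2-body GP kernel: sum of squared exponentials over all pairs of
    interatomic distances drawn from the two configurations."""
    ra, rb = pair_distances(pos_a), pair_distances(pos_b)
    diff = ra[:, None] - rb[None, :]
    return np.exp(-0.5 * (diff / sigma) ** 2).sum()
```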
Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu
2014-03-20
A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To enhance this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid the interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and the outer images, respectively, the horizontal gradients in the expected panoramic image are estimated based on the scattered neighboring pixels mapped from the outer, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
NASA Astrophysics Data System (ADS)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10⁻⁷ relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
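The time-partitioning idea can be made concrete for a 1-D slab using the classical plane-sheet series (as in Crank's treatment of diffusion): a square-root early-time expression and an exponential late-time series are switched at t_d0. The leading-term forms and the switchover value t_d0 = 0.2 below are illustrative assumptions; the paper's optimized switchover times and two-term high-accuracy variants are not reproduced.

```python
import numpy as np

def slab_uptake(td, td0=0.2, terms=2):
    """Fractional mass uptake M(t)/M_inf for a plane sheet,
    with td = D*t / l^2 and l the half-thickness (Crank's series)."""
    td = np.asarray(td, dtype=float)
    early = 2.0 * np.sqrt(td / np.pi)          # leading early-time term
    n = np.arange(terms)[:, None]
    late = 1.0 - (8.0 / ((2 * n + 1) ** 2 * np.pi ** 2)
                  * np.exp(-(2 * n + 1) ** 2 * np.pi ** 2 * td / 4.0)).sum(0)
    return np.where(td < td0, early, late)

# At td = 0.2 the two leading-term branches agree to about 0.1%,
# which is the rationale for partitioning time at td0.
print(slab_uptake([0.05, 0.2, 1.0]))
```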
LoCoH: Non-parametric kernel methods for constructing home ranges and utilization distributions
Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.
2007-01-01
Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from the k−1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
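A compact sketch of the k-LoCoH construction described above: build a convex hull from each point and its k−1 nearest neighbors, sort hulls by area, and union them from smallest upward until the desired fraction of points is covered. This follows the published outline only; the isopleth bookkeeping in the authors' software is more careful, and the coverage test used here is a simplification.

```python
import numpy as np
from scipy.spatial import cKDTree
from shapely.geometry import MultiPoint, Point
from shapely.ops import unary_union

def k_locoh(points, k=10, isopleth=0.95):
    """Union of local convex hulls covering `isopleth` of the points.

    points: (n, 2) array of fixes; returns a shapely geometry whose
    .area attribute gives the home-range estimate.
    """
    tree = cKDTree(points)
    hulls = []
    for p in points:
        _, idx = tree.query(p, k=k)              # point plus k-1 neighbors
        hulls.append(MultiPoint(points[idx]).convex_hull)
    hulls.sort(key=lambda h: h.area)             # smallest hulls first
    target = int(np.ceil(isopleth * len(points)))
    pts = [Point(xy) for xy in points]
    merged = hulls[0]
    for h in hulls[1:]:
        if sum(merged.covers(q) for q in pts) >= target:
            break
        merged = unary_union([merged, h])
    return merged
```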
Data-Driven Hierarchical Structure Kernel for Multiscale Part-Based Object Recognition
Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Zheng, Yuan F.
2017-01-01
Detecting generic object categories in images and videos is a fundamental issue in computer vision. However, it faces challenges from inter- and intraclass diversity, as well as distortions caused by viewpoints, poses, deformations, and so on. To address object variations, this paper constructs a structure kernel and proposes a multiscale part-based model incorporating the discriminative power of kernels. The structure kernel measures the resemblance of part-based objects in three aspects: 1) a global similarity term to measure the resemblance of the global visual appearance of relevant objects; 2) a part similarity term to measure the resemblance of the visual appearance of distinctive parts; and 3) a spatial similarity term to measure the resemblance of the spatial layout of parts. In essence, the deformation of parts in the structure kernel is penalized in a multiscale space with respect to horizontal displacement, vertical displacement, and scale difference. Part similarities are combined with different weights, which are optimized efficiently to maximize the intraclass similarities and minimize the interclass similarities by the normalized stochastic gradient ascent algorithm. In addition, the parameters of the structure kernel are learned during the training process with regard to the distribution of the data in a more discriminative way. With flexible part sizes on scale and displacement, the model is more robust to intraclass variations, poses, and viewpoints. Theoretical analysis and experimental evaluations demonstrate that the proposed multiscale part-based representation model with the structure kernel exhibits accurate and robust performance and outperforms state-of-the-art object classification approaches. PMID:24808345
Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-05-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
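The Log-Euclidean kernel underlying both sparse-coding stages maps SPD region-covariance matrices into a flat space via the matrix logarithm and applies a Gaussian kernel there. A minimal version, with an assumed bandwidth and the usual regularization to keep the matrices SPD:

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_kernel(X, Y, sigma=1.0, eps=1e-6):
    """Gaussian kernel between SPD covariance matrices X and Y,
    computed in the Log-Euclidean metric."""
    n = X.shape[0]
    lx = logm(X + eps * np.eye(n))     # regularize, then matrix log
    ly = logm(Y + eps * np.eye(n))
    d = np.linalg.norm(lx - ly, ord="fro")
    return np.exp(-d**2 / (2.0 * sigma**2))

# Region covariances regularized this way are SPD, so the kernel can be
# plugged directly into a kernelized sparse-coding solver.
```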
Sung, Kristine K; Goff, H Douglas
2010-04-01
The development of a structural fat network in ice cream as influenced by the solid:liquid fat ratio at the time of freezing/whipping was investigated. The solid fat content was varied with blends of a hard fraction of palm kernel oil (PKO) and high-oleic sunflower oil ranging from 40% to 100% PKO. Fat globule size and adsorbed protein levels in mix and overrun, fat destabilization, meltdown resistance, and air bubble size in ice cream were measured. It was found that blends comprising 60% to 80% solid fat produced the highest rates of fat destabilization that could be described as partial coalescence (as opposed to coalescence), lowest rates of meltdown, and smallest air bubble sizes. Lower levels of solid fat produced fat destabilization that was better characterized as coalescence, leading to loss of structural integrity, whereas higher levels of solid fat led to lower levels of fat network formation and thus also to reduced structural integrity. Blends of highly saturated palm kernel oil and monounsaturated high-oleic sunflower oil were used to modify the solid:liquid ratio of fat blends used for ice cream manufacture. Blends that contained 60% to 80% solid fat at freezing/whipping temperatures produced optimal structures leading to low rates of meltdown. This provides a useful reference for manufacturers to help in the selection of appropriate fat blends for nondairy-fat ice cream.
NASA Astrophysics Data System (ADS)
Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2018-04-01
In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process image data stored in RRAM arrays. The proposed image-storage architecture offers better speed and device-consumption efficiency than the previous kernel-storage architecture. We further improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel-storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a more than 67-fold speed boost; and 3) 71.4% energy saving.
Fábián, Attila; Jäger, Katalin; Rakszegi, Mariann; Barnabás, Beáta
2011-04-01
The aim of the present work was to reveal the histological alterations triggered in developing wheat kernels by soil drought stress during early seed development, resulting in yield losses at harvest. For this purpose, observations were made on the effect of drought stress, applied in a controlled environment from the 5th to the 9th day after pollination, on the kernel morphology, starch content and grain yield of the drought-sensitive Cappelle Desprez and drought-tolerant Plainsman V winter wheat (Triticum aestivum L.) varieties. As a consequence of water withdrawal, there was a decrease in the size of the embryos and the number of A-type starch granules deposited in the endosperm, while the development of aleurone cells and the degradation of the cell layers surrounding the ovule were significantly accelerated in both genotypes. In addition, the number of B-type starch granules per cell was significantly reduced. Drought stress affected the rate of grain filling, shortened the grain-filling and ripening period, and severely reduced the yield. With respect to the recovery of vegetative tissues, seed set and yield, the drought-tolerant Plainsman V responded significantly better to drought stress than Cappelle Desprez. The reduction in the size of the mature embryos was significantly greater in the sensitive genotype. Compared to Plainsman V, the endosperm cells of Cappelle Desprez accumulated significantly fewer B-type starch granules. In stressed kernels of the tolerant genotype, the accumulation of protein bodies occurred significantly earlier than in the sensitive variety.
NASA Astrophysics Data System (ADS)
Uddin, Salah; Mohamad, Mahathir; Khalid, Kamil; Abdulhammed, Mohammed; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini
2018-04-01
In this paper, the flow of blood mixed with magnetic particles, subjected to a uniform transverse magnetic field and a pressure gradient in an axisymmetric circular cylinder, is studied using a recently proposed fractional derivative without singular kernel. The governing equations are fractional partial differential equations derived from the Caputo-Fabrizio time-fractional derivative (NFD_t). The current result agrees well with that obtained using the previous Caputo fractional derivative with singular kernel (UFD_t).
QTL Analysis of Kernel-Related Traits in Maize Using an Immortalized F2 Population
Hu, Yanmin; Li, Weihua; Fu, Zhiyuan; Ding, Dong; Li, Haochuan; Qiao, Mengmeng; Tang, Jihua
2014-01-01
Kernel size and weight are important determinants of grain yield in maize. In this study, multivariate conditional and unconditional quantitative trait loci (QTL) and digenic epistatic analyses were utilized to elucidate the genetic basis of these kernel-related traits. Five kernel-related traits, including kernel weight (KW), volume (KV), length (KL), thickness (KT), and width (KWI), were collected from an immortalized F2 (IF2) maize population comprising 243 crosses performed at two separate locations over a span of two years. A total of 54 unconditional main QTL for these five kernel-related traits were identified, many of which were clustered in chromosomal bins 6.04–6.06, 7.02–7.03, and 10.06–10.07. In addition, qKL3, qKWI6, qKV10a, qKV10b, qKW10a, and qKW7a were detected across multiple environments. Sixteen main QTL were identified for KW conditioned on the other four kernel traits (KL, KWI, KT, and KV). Thirteen main QTL were identified for KV conditioned on three kernel-shape traits. Conditional mapping analysis revealed that KWI and KV had the strongest influence on KW at the individual QTL level, followed by KT, and then KL; KV was most strongly influenced by KT, followed by KWI, and was least impacted by KL. Digenic epistatic analysis identified 18 digenic interactions involving 34 loci over the entire genome. However, only a small proportion of them were identical to the main QTL we detected. Additionally, conditional digenic epistatic analysis revealed that the digenic epistasis for KW and KV were entirely determined by their constituent traits. The main QTL identified in this study for determining kernel-related traits with high broad-sense heritability may play important roles during kernel development. Furthermore, digenic interactions were shown to exert relatively large effects on KL (the highest AA and DD effects were 4.6% and 6.7%, respectively) and KT (the highest AA effects were 4.3%). PMID:24586932
Defining space use and movements of Canada lynx with global positioning system telemetry
Burdett, C.L.; Moen, R.A.; Niemi, G.J.; Mech, L.D.
2007-01-01
Space use and movements of Canada lynx (Lynx canadensis) are difficult to study with very-high-frequency radiocollars. We deployed global positioning system (GPS) collars on 11 lynx in Minnesota to study their seasonal space-use patterns. We estimated home ranges with minimum-convex-polygon and fixed-kernel methods and estimated core areas with area/probability curves. Fixed-kernel home ranges of males (range = 29-522 km²) were significantly larger than those of females (range = 5-95 km²) annually and during the denning season. Some male lynx increased movements during March, the month most influenced by breeding activity. Lynx core areas were predicted by the 60% fixed-kernel isopleth in most seasons. The mean core-area size of males (range = 6-190 km²) was significantly larger than that of females (range = 1-19 km²) annually and during denning. Most female lynx were reproductive animals with reduced movements, whereas males often ranged widely between Minnesota and Ontario. Sensitivity analyses examining the effect of location frequency on home-range size suggest that the home-range sizes of breeding females are less sensitive to sample size than those of males. Longer periods between locations decreased home-range and core-area overlap relative to the home range estimated from daily locations. GPS collars improve our understanding of space use and movements by lynx by increasing the spatial extent and temporal frequency of monitoring and allowing home ranges to be estimated over short periods that are relevant to life-history characteristics. © 2007 American Society of Mammalogists.
Chen, Yumin; Fritz, Ronald D; Kock, Lindsay; Garg, Dinesh; Davis, R Mark; Kasturi, Prabhakar
2018-02-01
A step-wise, 'test-all-positive-gluten' analytical methodology has been developed and verified to assess kernel-based gluten contamination (i.e., wheat, barley, and rye kernels) during gluten-free (GF) oat production. It targets GF claim compliance at the serving-size level (a pouch, or approximately 40-50 g). Oat groats are collected from GF oat production following a robust attribute-based sampling plan, then split into 75-g subsamples and ground. The R-Biopharm R5 sandwich ELISA R7001 is used for analysis of all of the first 15-g portions of the ground samples. A >20 ppm result disqualifies the production lot, while a >5 to <20 ppm result triggers complete analysis of the remaining 60 g of ground sample, analyzed in 15-g portions. If all five 15-g test results are <20 ppm and their average is <10.67 ppm (since a 20 ppm contaminant in 40 g of oats would dilute to 10.67 ppm in 75 g), the lot is passed. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
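The lot-disposition logic quoted above reduces to a small decision rule. The sketch below encodes it directly; the boundary handling at exactly 5 and 20 ppm is an assumption where the text gives only strict inequalities.

```python
def lot_disposition(first_ppm, remaining_ppm=()):
    """Decide a GF-oat production lot from R5 ELISA results (ppm gluten)
    on 15-g portions of one 75-g ground subsample."""
    if first_ppm > 20:
        return "reject"                  # >20 ppm disqualifies the lot
    if first_ppm <= 5:
        return "pass"                    # no trigger for further testing
    # >5 to <20 ppm: analyze the remaining 60 g as four 15-g portions.
    results = [first_ppm, *remaining_ppm]
    if not all(r < 20 for r in results):
        return "reject"
    # A 20-ppm contaminant in a 40-g serving dilutes to
    # 20 * 40 / 75 = 10.67 ppm in the 75-g subsample.
    mean = sum(results) / len(results)
    return "pass" if mean < 10.67 else "reject"

print(lot_disposition(8.0, (3.0, 12.0, 6.0, 2.0)))   # -> "pass" (mean 6.2)
```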
7 CFR 51.1437 - Size classifications for halves.
Code of Federal Regulations, 2010 CFR
2010-01-01
... weight of half-kernels after all pieces, particles and dust, shell, center wall, and foreign material..., particles, and dust. In order to allow for variations incident to proper sizing and handling, not more than 15 percent, by weight, of any lot may consist of pieces, particles, and dust: Provided, That not more...
7 CFR 51.1437 - Size classifications for halves.
Code of Federal Regulations, 2012 CFR
2012-01-01
... weight of half-kernels after all pieces, particles and dust, shell, center wall, and foreign material..., particles, and dust. In order to allow for variations incident to proper sizing and handling, not more than 15 percent, by weight, of any lot may consist of pieces, particles, and dust: Provided, That not more...
7 CFR 51.1437 - Size classifications for halves.
Code of Federal Regulations, 2011 CFR
2011-01-01
... weight of half-kernels after all pieces, particles and dust, shell, center wall, and foreign material..., particles, and dust. In order to allow for variations incident to proper sizing and handling, not more than 15 percent, by weight, of any lot may consist of pieces, particles, and dust: Provided, That not more...
TaGS5-3A, a grain size gene selected during wheat improvement for larger kernel and yield.
Ma, Lin; Li, Tian; Hao, Chenyang; Wang, Yuquan; Chen, Xinhong; Zhang, Xueyong
2016-05-01
Grain size is a dominant component of grain weight in cereals. Earlier studies have shown that OsGS5 plays a major role in regulating both grain size and weight in rice via promotion of cell division. In this study, we isolated TaGS5 homoeologues in wheat and mapped them on chromosomes 3A, 3B and 3D. Temporal and spatial expression analysis showed that TaGS5 homoeologues were preferentially expressed in young spikes and developing grains. Two alleles of TaGS5-3A, TaGS5-3A-T and TaGS5-3A-G, were identified in wheat accessions, and a functional marker was developed to discriminate them. Association analysis revealed that TaGS5-3A-T was significantly correlated with larger grain size and higher thousand kernel weight. Biochemical assays showed that TaGS5-3A-T possesses higher enzymatic activity than TaGS5-3A-G. Transgenic rice lines overexpressing TaGS5-3A-T also exhibited larger grain size and higher thousand kernel weight than TaGS5-3A-G lines, and the transcript levels of cell cycle-related genes in TaGS5-3A-T lines were higher than those in TaGS5-3A-G lines. Furthermore, systematic evolution analysis in diploid, tetraploid and hexaploid wheat showed that TaGS5-3A underwent strong artificial selection during wheat polyploidization events, and the frequency changes of the two alleles demonstrated that TaGS5-3A-T was favoured in modern wheat cultivars worldwide. These results suggest that TaGS5-3A is a positive regulator of grain size and that its favoured allele TaGS5-3A-T has considerable potential for application in high-yield wheat breeding. © 2015 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
Code of Federal Regulations, 2011 CFR
2011-01-01
... rubbed off with the fingers; (c) Gum, when a film of shiny, resinous appearing substance affects an area... the kernel is excessively thin for its size, or when materially withered, shrunken, leathery, tough or...
SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Tian, Z; Song, T
Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to dose calculation accuracy and hence to plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MatLab to facilitate commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted using the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the beamlets calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6MV, 15MV and 6MVFFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm² and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate commissioning, providing sufficient beamlet dose calculation accuracy for IMRT optimization.
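The first commissioning step, fitting the kernel profile parameters to a reference broad-beam penumbra with Levenberg-Marquardt, can be sketched as below. The error-function profile model and its two parameters are assumptions for illustration, since the abstract does not give the kernel's functional form.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def penumbra_model(x, params, field_edge=5.0):
    """Broad-beam lateral dose profile modeled as a difference of error
    functions; sigma is the effective kernel width, d0 the dose scale."""
    sigma, d0 = params
    s = np.sqrt(2.0) * sigma
    return 0.5 * d0 * (erf((x + field_edge) / s) - erf((x - field_edge) / s))

def fit_kernel_width(x, measured, field_edge=5.0):
    """Levenberg-Marquardt fit of the profile parameters to measured data."""
    res = least_squares(
        lambda p: penumbra_model(x, p, field_edge) - measured,
        x0=[0.3, 1.0],       # initial guesses: sigma ~3 mm, unit dose scale
        method="lm")         # Levenberg-Marquardt, as named in the abstract
    return res.x             # fitted (sigma, d0)
```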
Prediction of heterotrimeric protein complexes by two-phase learning using neighboring kernels
2014-01-01
Background: Protein complexes play important roles in biological systems such as gene regulatory networks and metabolic pathways. Most methods for predicting protein complexes try to find complexes of size greater than three. It is known, however, that complexes of smaller sizes make up a large fraction of all complexes in several species. In our previous work, we developed a method with several feature space mappings and the domain composition kernel for prediction of heterodimeric protein complexes, which outperforms existing methods. Results: We propose methods for prediction of heterotrimeric protein complexes by extending techniques in the previous work, on the basis of the idea that most heterotrimeric protein complexes are not likely to share the same protein with each other. We make use of the discriminant function in support vector machines (SVMs) and design novel feature space mappings for the second phase. As the second classifier, we examine SVMs and relevance vector machines (RVMs). We perform 10-fold cross-validation computational experiments. The results suggest that our proposed two-phase methods and SVM with the extended features outperform the existing method NWE, which was reported to outperform other existing methods such as MCL, MCODE, DPClus, CMC, COACH, RRW, and PPSampler for prediction of heterotrimeric protein complexes. Conclusions: We propose two-phase prediction methods with the extended features, the domain composition kernel, SVMs, and RVMs. The two-phase method with the extended features and the domain composition kernel using SVM as the second classifier is particularly useful for prediction of heterotrimeric protein complexes. PMID:24564744
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Followill, D; Howell, R
2015-06-15
Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented in a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.
Effects of study area size on home range estimates of common bottlenose dolphins Tursiops truncatus.
Nekolny, Samantha R; Denny, Matthew; Biedenbach, George; Howells, Elisabeth M; Mazzoil, Marilyn; Durden, Wendy N; Moreland, Lydia; David Lambert, J; Gibson, Quincy A
2017-12-01
Knowledge of an animal's home range is a crucial component in making informed management decisions. However, many home range studies are limited by study area size, and therefore may underestimate the size of the home range. In many cases, individuals have been shown to travel outside of the study area and utilize a larger area than estimated by the study design. In this study, data collected by multiple research groups studying bottlenose dolphins on the east coast of Florida were combined to determine how home range estimates increased with increasing study area size. Home range analyses utilized photo-identification data collected from 6 study areas throughout the St Johns River (SJR; Jacksonville, FL, USA) and adjacent waterways, extending a total of 253 km to the southern end of Mosquito Lagoon in the Indian River Lagoon Estuarine System. Univariate kernel density estimates (KDEs) were computed for individuals with 10 or more sightings (n = 20). Kernels were calculated for the primary study area (SJR) first, then additional kernels were calculated by combining the SJR and the next adjacent waterway; this continued in an additive fashion until all study areas were included. The 95% and 50% KDEs calculated for the SJR alone ranged from 21 to 35 km and 4 to 19 km, respectively. The 95% and 50% KDEs calculated for all combined study areas ranged from 116 to 217 km and 9 to 70 km, respectively. This study illustrates the degree to which home range may be underestimated by the use of limited study areas and demonstrates the benefits of conducting collaborative science.
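For readers who want to reproduce the flavor of this analysis, a minimal sketch of a univariate KDE home-range length follows; the function names and the toy sighting positions are ours, not the study's, and the isopleth length is computed by accumulating the densest grid cells up to each probability level.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_range_lengths(positions_km, levels=(0.95, 0.50), grid_pts=2048):
    """Length of the smallest region holding each probability level of a
    1-D KDE over along-waterway sighting positions (a linear home range)."""
    kde = gaussian_kde(positions_km)
    x = np.linspace(positions_km.min() - 20, positions_km.max() + 20, grid_pts)
    density = kde(x)
    dx = x[1] - x[0]
    order = np.argsort(density)[::-1]            # densest cells first
    cum = np.cumsum(density[order]) * dx
    return {lev: (np.searchsorted(cum, lev) + 1) * dx for lev in levels}

sightings = np.random.normal(40.0, 10.0, 25)     # toy positions along the river, km
print(kde_range_lengths(sightings))              # e.g. {0.95: ..., 0.5: ...}
```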
Kakakhel, M B; Jirasek, A; Johnston, H; Kairn, T; Trapp, J V
2017-03-01
This study evaluated the feasibility of combining the 'zero-scan' (ZS) X-ray computed tomography (CT) based polymer gel dosimeter (PGD) readout with adaptive mean (AM) filtering to improve the signal-to-noise ratio (SNR), and compared these results with available average scan (AS) X-ray CT readout techniques. NIPAM PGDs were manufactured, irradiated with 6 MV photons, CT imaged, and processed in Matlab. An AM filter with two iterations and kernel sizes of 3 × 3 and 5 × 5 pixels was used in two scenarios: (a) the CT images were subjected to AM filtering (pre-processing) and then employed to generate AS and ZS gel images, and (b) the AS and ZS images were first reconstructed from the CT images and AM filtering was then carried out (post-processing). SNR was computed in a 30 × 30 pixel ROI for the different pre- and post-processing cases. Results showed that the ZS technique combined with AM filtering improved the SNR. Using the previously recommended 25 images for reconstruction, the ZS pre-processed protocol can give an increase of 44% and 80% in SNR for 3 × 3 and 5 × 5 kernel sizes, respectively. However, post-processing with both techniques and filter sizes introduced blur and a reduction in spatial resolution. Based on this work, the ZS method may be combined with pre-processed AM filtering using an appropriate kernel size to produce a large increase in the SNR of the reconstructed PGD images.
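The abstract does not spell out the AM filter, so the following is a minimal sketch of one common adaptive-mean formulation (a Lee-style filter that smooths flat regions and spares high-variance detail), together with an ROI-based SNR; the noise-variance estimate and the two iterations mirror the setup described above but are otherwise assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_mean_filter(img, kernel=3, iterations=2):
    """Lee-style adaptive mean: blend each pixel toward the local mean by a
    weight that shrinks where local variance is close to the noise floor."""
    out = img.astype(float)
    for _ in range(iterations):
        mean = uniform_filter(out, size=kernel)
        var = uniform_filter(out ** 2, size=kernel) - mean ** 2
        noise = np.median(var)                  # crude noise-variance estimate
        weight = np.clip(1.0 - noise / np.maximum(var, 1e-12), 0.0, 1.0)
        out = mean + weight * (out - mean)
    return out

def roi_snr(img, r0, c0, size=30):
    """SNR (mean/std) inside a size x size ROI, as used above."""
    patch = img[r0:r0 + size, c0:c0 + size]
    return patch.mean() / patch.std()
```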
Coalescence of repelling colloidal droplets: a route to monodisperse populations.
Roger, Kevin; Botet, Robert; Cabane, Bernard
2013-05-14
Populations of droplets or particles dispersed in a liquid may evolve through Brownian collisions, aggregation, and coalescence. We have found a set of conditions under which these populations evolve spontaneously toward a narrow size distribution. The experimental system consists of poly(methyl methacrylate) (PMMA) nanodroplets dispersed in a solvent (acetone) + nonsolvent (water) mixture. These droplets carry electrical charges, located on the ionic end groups of the macromolecules. We used time-resolved small angle X-ray scattering to determine their size distribution. We find that the droplets grow through coalescence events: the average radius ⟨R⟩ increases logarithmically with elapsed time while the relative width σR/⟨R⟩ of the distribution decreases as the inverse square root of ⟨R⟩. We interpret this evolution as resulting from coalescence events that are hindered by ionic repulsions between droplets. We generalize this evolution through a simulation of the Smoluchowski kinetic equation, with a kernel that takes into account the interactions between droplets. In the case of vanishing or attractive interactions, all droplet encounters lead to coalescence. The corresponding kernel leads to the well-known "self-preserving" particle distribution of the coalescence process, where σR/⟨R⟩ increases to a plateau value. However, for droplets that interact through long-range ionic repulsions, "large + small" droplet encounters are more successful at coalescence than "large + large" encounters. We show that the corresponding kernel leads to a particular scaling of the droplet-size distribution, known as the "second-scaling law" in the theory of critical phenomena, where σR/⟨R⟩ decreases as 1/√⟨R⟩ and becomes independent of the initial distribution. We argue that this scaling explains the narrow size distributions of colloidal dispersions that have been synthesized through aggregation processes.
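The hindered-coalescence picture can be illustrated with the discrete Smoluchowski equation; the barrier form below is purely hypothetical, chosen only so that "large + large" encounters are suppressed relative to "large + small" ones, as described above.

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """Explicit Euler step of the discrete Smoluchowski equation.
    n[k] is the number density of clusters of size k+1; K is the kernel."""
    kmax = len(n)
    dndt = np.zeros(kmax)
    for i in range(kmax):
        for j in range(kmax):
            rate = K[i, j] * n[i] * n[j]
            dndt[i] -= rate                 # size i+1 consumed
            if i + j + 1 < kmax:            # product has size (i+1)+(j+1)
                dndt[i + j + 1] += 0.5 * rate
    return n + dt * dndt

kmax = 120
s = np.arange(1, kmax + 1, dtype=float)
# hypothetical ionic barrier growing with both sizes: suppresses large+large
K = np.exp(-0.05 * np.outer(s ** (1 / 3), s ** (1 / 3)))
n = np.zeros(kmax); n[0] = 1.0              # start from monomers
for _ in range(300):
    n = smoluchowski_step(n, K, dt=0.01)
```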
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-08-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating the reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms state-of-the-art methods in terms of precision, recall, and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
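The elementary operation in such a scheme is a Log-Euclidean kernel between region covariance descriptors; a minimal sketch follows, where the Gaussian form, the bandwidth, and the small ridge that keeps the matrices positive definite are standard choices rather than the paper's exact settings.

```python
import numpy as np
from scipy.linalg import logm

def region_covariance(features, ridge=1e-6):
    """Covariance descriptor of a superpixel; features is (n_pixels, n_feats)."""
    c = np.cov(features, rowvar=False)
    return c + ridge * np.eye(c.shape[0])

def log_euclidean_kernel(X, Y, sigma=1.0):
    """k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2)) for SPD X, Y."""
    d = np.real(logm(X)) - np.real(logm(Y))
    return float(np.exp(-np.linalg.norm(d, 'fro') ** 2 / (2 * sigma ** 2)))

a = region_covariance(np.random.rand(200, 5))
b = region_covariance(np.random.rand(200, 5))
print(log_euclidean_kernel(a, b))
```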
Morphologically and size uniform monodisperse particles and their shape-directed self-assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Joshua E.; Bell, Howard Y.; Ye, Xingchen
2017-09-12
Monodisperse particles having: a single pure crystalline phase of a rare earth-containing lattice, a uniform three-dimensional size, and a uniform polyhedral morphology are disclosed. Due to their uniform size and shape, the monodisperse particles self assemble into superlattices. The particles may be luminescent particles such as down-converting phosphor particles and up-converting phosphors. The monodisperse particles of the invention have a rare earth-containing lattice which in one embodiment may be an yttrium-containing lattice or in another may be a lanthanide-containing lattice. The monodisperse particles may have different optical properties based on their composition, their size, and/or their morphology (or shape). Also disclosed is a combination of at least two types of monodisperse particles, where each type is a plurality of monodisperse particles having a single pure crystalline phase of a rare earth-containing lattice, a uniform three-dimensional size, and a uniform polyhedral morphology; and where the types of monodisperse particles differ from one another by composition, by size, or by morphology. In a preferred embodiment, the types of monodisperse particles have the same composition but different morphologies. Methods of making and methods of using the monodisperse particles are disclosed.
Morphologically and size uniform monodisperse particles and their shape-directed self-assembly
Collins, Joshua E.; Bell, Howard Y.; Ye, Xingchen; Murray, Christopher Bruce
2015-11-17
Monodisperse particles having: a single pure crystalline phase of a rare earth-containing lattice, a uniform three-dimensional size, and a uniform polyhedral morphology are disclosed. Due to their uniform size and shape, the monodisperse particles self assemble into superlattices. The particles may be luminescent particles such as down-converting phosphor particles and up-converting phosphors. The monodisperse particles of the invention have a rare earth-containing lattice which in one embodiment may be an yttrium-containing lattice or in another may be a lanthanide-containing lattice. The monodisperse particles may have different optical properties based on their composition, their size, and/or their morphology (or shape). Also disclosed is a combination of at least two types of monodisperse particles, where each type is a plurality of monodisperse particles having a single pure crystalline phase of a rare earth-containing lattice, a uniform three-dimensional size, and a uniform polyhedral morphology; and where the types of monodisperse particles differ from one another by composition, by size, or by morphology. In a preferred embodiment, the types of monodisperse particles have the same composition but different morphologies. Methods of making and methods of using the monodisperse particles are disclosed.
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to improve significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
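A minimal sketch of classification with a locally adaptive steering kernel conveys the idea; the 2x2 steering matrix C, which in the paper is self-adjusted to the direction of highest local spatial correlation, is here simply supplied by the caller, and the voting rule is a plain Nadaraya-Watson style choice.

```python
import numpy as np

def steering_kernel_weights(x0, X, C):
    """Anisotropic Gaussian weights at query point x0; C is the local 2x2
    steering (covariance) matrix aligning the kernel with the local anisotropy."""
    d = X - x0                                  # (n, 2) offsets to hard data
    m = np.einsum('ni,ij,nj->n', d, np.linalg.inv(C), d)
    return np.exp(-0.5 * m)

def classify_facies(x0, X, labels, C):
    """Pick the facies with the largest kernel-weighted vote."""
    w = steering_kernel_weights(x0, X, C)
    facies = np.unique(labels)
    return facies[np.argmax([w[labels == f].sum() for f in facies])]

X = np.random.rand(50, 2) * 100.0               # scattered hard data, m
labels = (X[:, 0] > 50).astype(int)             # toy two-facies field
C = np.array([[40.0, 15.0], [15.0, 10.0]])      # hypothetical steering matrix
print(classify_facies(np.array([60.0, 20.0]), X, labels, C))
```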
Visualization of Oil Body Distribution in Jatropha curcas L. by Four-Wave Mixing Microscopy
NASA Astrophysics Data System (ADS)
Ishii, Makiko; Uchiyama, Susumu; Ozeki, Yasuyuki; Kajiyama, Sin'ichiro; Itoh, Kazuyoshi; Fukui, Kiichi
2013-06-01
Jatropha curcas L. (jatropha) is a superior oil crop for biofuel production. To improve the oil yield of jatropha by breeding, the development of effective and reliable tools to evaluate oil production efficiency is essential. The characteristics of the jatropha kernel, which contains a large amount of oil, are not yet fully understood. Here, we demonstrate the application of four-wave mixing (FWM) microscopy to visualize the distribution of oil bodies in a jatropha kernel without staining. FWM microscopy enables us to visualize the size and morphology of oil bodies and to determine the oil content of the kernel, found to be 33.2%. The signal obtained from FWM microscopy comprises both stimulated parametric emission (SPE) and coherent anti-Stokes Raman scattering (CARS) signals. In the present situation, where a very short pump pulse is employed, the SPE signal is believed to dominate the FWM signal.
A linear-RBF multikernel SVM to classify big text corpora.
Romero, R; Iglesias, E L; Borrajo, L
2015-01-01
Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on SVMs and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels because they provide greater flexibility. This paper presents a multikernel SVM to manage high-dimensional data, providing automatic parameterization with low computational cost and improving on the results of SVMs parameterized by a brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while the training is significantly faster than several other SVM classifiers.
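One simple way to realize a linear+RBF multikernel over term slices is a precomputed kernel that sums a global linear kernel with one RBF kernel per slice; the slicing, weights, and gammas below are hypothetical, meant only to illustrate the construction, not the paper's exact parameterization.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

def multikernel(A, B, slices, gammas, beta=1.0):
    """Global linear kernel plus one RBF kernel per term slice (feature subset)."""
    K = beta * linear_kernel(A, B)
    for cols, g in zip(slices, gammas):
        K += rbf_kernel(A[:, cols], B[:, cols], gamma=g)
    return K

X = np.random.rand(100, 50)                     # stand-in for a TF-IDF matrix
y = np.random.randint(0, 2, 100)
slices = [np.arange(0, 25), np.arange(25, 50)]  # two cohesive term clusters
gammas = [0.1, 0.5]
clf = SVC(kernel='precomputed').fit(multikernel(X, X, slices, gammas), y)
pred = clf.predict(multikernel(X[:5], X, slices, gammas))
```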
Liu, Shixuan; Ginzberg, Miriam Bracha; Patel, Nish; Hild, Marc; Leung, Bosco; Li, Zhengda; Chen, Yen-Chi; Chang, Nancy; Wang, Yuan; Tan, Ceryl; Diena, Shulamit; Trimble, William; Wasserman, Larry; Jenkins, Jeremy L; Kirschner, Marc W; Kafri, Ran
2018-03-29
Animal cells within a tissue typically display a striking regularity in their size. To date, the molecular mechanisms that control this uniformity are still unknown. We have previously shown that size uniformity in animal cells is promoted, in part, by size-dependent regulation of G1 length. To identify the molecular mechanisms underlying this process, we performed a large-scale small molecule screen and found that the p38 MAPK pathway is involved in coordinating cell size and cell cycle progression. Small cells display higher p38 activity and spend more time in G1 than larger cells. Inhibition of p38 MAPK leads to loss of the compensatory G1 length extension in small cells, resulting in faster proliferation, smaller cell size and increased size heterogeneity. We propose a model wherein the p38 pathway responds to changes in cell size and regulates G1 exit accordingly, to increase cell size uniformity. © 2017, Liu et al.
Use of computer code for dose distribution studies in a ⁶⁰Co industrial irradiator
NASA Astrophysics Data System (ADS)
Piña-Villalpando, G.; Sloan, D. P.
1995-09-01
This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a ⁶⁰Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm³; that product was chosen because of its uniform size, large quantity, and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factors are fitted by geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code were related to the source simulation: point sources were used instead of pencils, and energy and anisotropic emission spectra were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was 3% lower than the experimental average value (14.3 kGy).
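The point-kernel technique itself is compact enough to sketch: the dose rate at a target point is a sum over point sources of an attenuated inverse-square term times a build-up factor. The constants and the Taylor-form build-up below are schematic stand-ins, not the QAD-CGGP fits.

```python
import numpy as np

def point_kernel_dose(sources, activities, target, mu, kappa, buildup):
    """Sum over point sources of B(mu*r) * A * exp(-mu*r) / (4 pi r^2),
    scaled by an energy-absorption factor kappa (all units schematic)."""
    total = 0.0
    for s, A in zip(sources, activities):
        r = np.linalg.norm(np.asarray(target) - np.asarray(s))
        mfp = mu * r                            # optical thickness
        total += buildup(mfp) * A * np.exp(-mfp) / (4.0 * np.pi * r ** 2)
    return kappa * total

# a Taylor-form build-up B(x) = 1 + a*x*exp(b*x) as a stand-in for the
# geometric-progression fit mentioned above
taylor_buildup = lambda x: 1.0 + 0.8 * x * np.exp(0.02 * x)
sources = [(0.0, 0.0, z) for z in np.linspace(-40, 40, 9)]  # discretized pencil
dose = point_kernel_dose(sources, [1.0] * 9, (30.0, 0.0, 0.0),
                         mu=0.006, kappa=1.0, buildup=taylor_buildup)
```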
Bias correction for estimated QTL effects using the penalized maximum likelihood method.
Zhang, J; Yue, C; Zhang, Y-M
2012-04-01
A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.
Active impulsive noise control using maximum correntropy with adaptive kernel size
NASA Astrophysics Data System (ADS)
Lu, Lu; Zhao, Haiquan
2017-03-01
Active noise control (ANC) based on the principle of superposition is an attractive method for attenuating noise signals. However, impulsive noise in ANC systems degrades the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account past estimates of the error signal over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
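The FxRMC recursion itself is not given in the abstract, but the core mechanism, a correntropy-weighted update whose Gaussian factor suppresses impulsive errors, with the kernel size re-estimated from a sliding window of errors, can be sketched as an LMS-style update (a simplification that omits the secondary-path filtered-x machinery):

```python
import numpy as np

def mcc_step(w, x, d, mu, err_hist, window=50):
    """One maximum-correntropy update: the factor exp(-e^2 / (2 sigma^2))
    shrinks the step for outlier errors; sigma^2 is re-estimated online
    from a sliding window of past errors (the adaptive kernel size)."""
    e = d - w @ x
    err_hist.append(e)
    sigma2 = max(np.mean(np.square(err_hist[-window:])), 1e-8)
    w = w + mu * np.exp(-e ** 2 / (2.0 * sigma2)) * e * x
    return w, e

w, hist = np.zeros(8), []
for _ in range(1000):                           # toy identification loop
    x = np.random.randn(8)
    d = np.sin(x).sum() + 10.0 * (np.random.rand() < 0.01)  # rare impulses
    w, e = mcc_step(w, x, d, mu=0.01, err_hist=hist)
```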
Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2017-10-01
FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an RF field, the hot plasma dielectric response is limited to a distance of a few particle Larmor radii near the magnetic field line passing through the test point. The localization of the hot plasma dielectric response results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present initial results of modeling RF waves using the FullWave code, including the calculation of the nonlocal conductivity kernel in 2D tokamak geometry, the interpolation of the conductivity kernel from test points to the adaptive cloud of computational points, and the results of self-consistent simulations of 2D RF fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.
Ruan, Peiying; Hayashida, Morihiro; Maruyama, Osamu; Akutsu, Tatsuya
2013-01-01
Since many proteins express their functional activity by interacting with other proteins and forming protein complexes, it is very useful to identify the sets of proteins that form complexes. For that purpose, many methods for predicting protein complexes from protein-protein interactions have been developed, such as MCL, MCODE, RNSC, PCP, RRW, and NWE. These methods have dealt only with complexes of size greater than three, because they are often based on some density measure of subgraphs. However, heterodimeric protein complexes, which consist of two distinct proteins, make up a large fraction of the complexes in several comprehensive databases of known complexes. In this paper, we propose several feature space mappings from protein-protein interaction data, in which each interaction is weighted based on reliability. Furthermore, we make use of prior knowledge of protein domains to develop feature space mappings: a domain composition kernel and its combination kernel with our proposed features. We perform ten-fold cross-validation computational experiments. The results suggest that our proposed kernel considerably outperforms the naive Bayes-based method, which is the best existing method for predicting heterodimeric protein complexes. PMID:23776458
A fast non-local means algorithm based on integral image and reconstructed similar kernel
NASA Astrophysics Data System (ADS)
Lin, Zheng; Song, Enmin
2018-03-01
Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) denoising approach is a remarkable denoising technique, but the time complexity of its computation is high. In this paper, we design a fast NLM algorithm based on the integral image and a reconstructed similar kernel. First, the integral image is introduced into the traditional NLM algorithm. In doing so, it removes a great deal of repetitive operations in the parallel processing, which greatly improves the running speed of the algorithm. Second, in order to amend the error of the integral image, we construct a similar window resembling the Gaussian kernel in a pyramidal stacking pattern. Finally, in order to eliminate the influence produced by replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we propose a scheme to construct a similar kernel with a size of 3 × 3 in a neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of Peak Signal-to-Noise Ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
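The integral-image trick at the heart of such algorithms is easy to state: the sum of squared differences between every patch and its shifted copy reduces to four lookups per pixel once a summed-area table is built. A minimal sketch, with our own function names:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def patch_sq_dists(img, dy, dx, half=3):
    """Squared patch distance between each pixel and its (dy, dx) neighbor,
    the core NLM quantity, via one integral image per shift."""
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    ii = integral_image((img - shifted) ** 2)
    h, w = img.shape
    out = np.full((h, w), np.inf)
    for r in range(half, h - half):
        for c in range(half, w - half):
            out[r, c] = box_sum(ii, r - half, c - half, r + half + 1, c + half + 1)
    return out
```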
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2016-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
Locally adaptive methods for KDE-based random walk models of reactive transport in porous media
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2017-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
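A minimal sketch of the two ingredients named above, per-particle kernels and a branching step, is given below for 1-D transport; the split rule and bandwidths are illustrative stand-ins for the paper's adaptive machinery.

```python
import numpy as np

def kde_concentration(xq, xp, mass, h):
    """Gaussian KDE of concentration at query points xq; each particle at
    xp[i] carries its own mass[i] and kernel bandwidth h[i]."""
    q = np.atleast_1d(xq)[:, None]
    K = np.exp(-0.5 * ((q - xp[None, :]) / h[None, :]) ** 2)
    return (K / (h[None, :] * np.sqrt(2.0 * np.pi))) @ mass

def split_particle(xp, mass, h, i, offset=0.5):
    """Replace particle i by two half-mass daughters one bandwidth apart;
    a crude stand-in for the branching step of the method."""
    xd = [xp[i] - offset * h[i], xp[i] + offset * h[i]]
    xp_new = np.append(np.delete(xp, i), xd)
    mass_new = np.append(np.delete(mass, i), [mass[i] / 2.0] * 2)
    h_new = np.append(np.delete(h, i), [h[i]] * 2)
    return xp_new, mass_new, h_new

xp = np.random.normal(0.0, 1.0, 100)        # particle positions
mass = np.full(100, 0.01)
h = np.full(100, 0.3)
c = kde_concentration(np.linspace(-4, 4, 200), xp, mass, h)
```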
Rapid simulation of spatial epidemics: a spectral method.
Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J
2015-04-07
Spatial structure, and hence the spatial position of host populations, plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding, especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such it provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane, and the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
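The FSR idea reduces to one line of FFT algebra: place infectious individuals on a grid, convolve with the kernel, and read off the force of infection at every susceptible location. A minimal sketch, with an assumed exponential-tailed kernel:

```python
import numpy as np

def force_of_infection(infectious, kernel):
    """Convolve the infection 'image' with an isotropic transmission kernel
    via FFT; the kernel must be centered at the origin (hence the ifftshift)."""
    F = np.fft.rfft2(infectious)
    K = np.fft.rfft2(np.fft.ifftshift(kernel))
    return np.fft.irfft2(F * K, s=infectious.shape)

n = 256
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = np.exp(-np.hypot(xx, yy) / 8.0)        # assumed kernel shape
kernel /= kernel.sum()
I = np.zeros((n, n)); I[100, 120] = 5.0         # five infectious units
foi = force_of_infection(I, kernel)             # rate field over all habitats
```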
Eelderink, Coby; Noort, Martijn W J; Sozer, Nesli; Koehorst, Martijn; Holst, Jens J; Deacon, Carolyn F; Rehfeld, Jens F; Poutanen, Kaisa; Vonk, Roel J; Oudhuis, Lizette; Priebe, Marion G
2017-04-01
The mechanisms underlying the beneficial health effects of low-glycemic-index starchy foods have not been fully elucidated. We varied the wheat particle size to obtain fiber-rich breads with a high and a low glycemic response and investigated the differences in postprandial glucose kinetics and metabolic response after their consumption. Ten healthy male volunteers participated in a randomized, crossover study, consuming ¹³C-enriched breads with different structures: a control bread (CB) made from wheat flour combined with wheat bran, and a kernel bread (KB) in which 85% of the flour was substituted with broken wheat kernels. The structure of the breads was characterized extensively. The use of stable isotopes enabled calculation of glucose kinetics: rate of appearance of exogenous glucose, endogenous glucose production, and glucose clearance rate. Additionally, postprandial plasma concentrations of glucose, insulin, glucagon, incretins, cholecystokinin, and bile acids were analyzed. Despite the attempt to obtain a bread with a low glycemic response by replacing flour with broken kernels, the glycemic response and glucose kinetics were quite similar after consumption of CB and KB. Interestingly, the glucagon-like peptide-1 (GLP-1) response was much lower after KB than after CB (iAUC, P < 0.005). A clear postprandial increase in plasma conjugated bile acids was observed after both meals. Substitution of 85% of the wheat flour by broken kernels in bread did not result in a difference in glucose response and kinetics, but did produce a pronounced difference in GLP-1 response. Thus, changing the processing conditions of wheat for baking bread can influence the metabolic response beyond glycemia and may therefore influence health.
The Latent Structure of Dictionaries.
Vincent-Lamarre, Philippe; Massé, Alexandre Blondin; Lopes, Marcos; Lord, Mélanie; Marcotte, Odile; Harnad, Stevan
2016-07-01
How many words, and which ones, are sufficient to define all other words? When dictionaries are analyzed as directed graphs with links from defining words to defined words, they reveal a latent structure. Recursively removing all words that are reachable by definition but that do not define any further words reduces the dictionary to a Kernel of about 10% of its size. This is still not the smallest number of words that can define all the rest. About 75% of the Kernel turns out to be its Core, a "Strongly Connected Subset" of words with a definitional path to and from any pair of its words and no word's definition depending on a word outside the set. But the Core cannot define all the rest of the dictionary. The 25% of the Kernel surrounding the Core consists of small strongly connected subsets of words: the Satellites. The size of the smallest set of words that can define all the rest (the graph's "minimum feedback vertex set," or MinSet) is about 1% of the dictionary, about 15% of the Kernel, and part-Core/part-Satellite. But every dictionary has a huge number of MinSets. The Core words are learned earlier, are more frequent, and are less concrete than the Satellites, which are in turn learned earlier and more frequent but more concrete than the rest of the Dictionary. In principle, only one MinSet's words would need to be grounded through the sensorimotor capacity to recognize and categorize their referents. In a dual-code sensorimotor/symbolic model of the mental lexicon, the symbolic code could do all the rest through recombinatory definition. Copyright © 2016 Cognitive Science Society, Inc.
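The Kernel extraction described above is a simple graph algorithm: repeatedly delete words that define nothing further. A sketch using networkx, with edges pointing from defining word to defined word as in the paper (the toy dictionary is ours):

```python
import networkx as nx

def dictionary_kernel(G):
    """Recursively strip words with no outgoing definition links; what
    remains approximates the paper's Kernel."""
    G = G.copy()
    while True:
        leaves = [w for w in G if G.out_degree(w) == 0]
        if not leaves:
            return G
        G.remove_nodes_from(leaves)

def core(G):
    """Largest strongly connected subset of the Kernel."""
    return max(nx.strongly_connected_components(dictionary_kernel(G)), key=len)

G = nx.DiGraph([("animal", "dog"), ("dog", "puppy"),
                ("animal", "cat"), ("cat", "animal")])
print(sorted(dictionary_kernel(G)))   # ['animal', 'cat']
```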
Cui, J; Lv, Y; Yang, X J; Fan, Y L; Zhong, Z; Jiang, Z M
2011-03-25
The size uniformity of self-assembled SiGe quantum rings, which are formed by capping SiGe quantum dots with a thin Si layer, is found to be greatly influenced by the growth temperature and the areal density of the SiGe quantum dots. A higher growth temperature benefits the size uniformity of the quantum dots, but results in a low Ge concentration as well as an asymmetric Ge distribution in the dots, which causes the subsequently formed quantum rings to be asymmetric in shape or even broken somewhere in the ridge of the rings. A low growth temperature degrades the size uniformity of the quantum dots, and thus that of the quantum rings. A high areal density results in the expansion and coalescence of neighboring quantum dots to form a chain, rather than quantum rings. Uniform quantum rings with a size dispersion of 4.6% and an areal density of 7.8×10⁸ cm⁻² are obtained at the optimized growth temperature of 640 °C.
Uniform deposition of size-selected clusters using Lissajous scanning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp; Hirata, Hirohito
2016-05-15
Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Ptₙ (n = 7, 15, 20) clusters uniformly deposited on the Al₂O₃/NiAl(110) surface and demonstrated the importance of uniform deposition.
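The scanning scheme is easy to emulate numerically: two triangular deflection waveforms with an irrational frequency ratio never close on themselves, so the beam spot eventually fills the area uniformly. A sketch (the amplitudes and frequencies are arbitrary assumptions):

```python
import numpy as np

def lissajous_xy(t, fx, fy, ax=1.0, ay=1.0):
    """Triangular-wave deflections on two orthogonal electrode pairs; an
    irrational fx/fy keeps the Lissajous trajectory from repeating."""
    tri = lambda f: 2.0 * np.abs(2.0 * ((f * t) % 1.0) - 1.0) - 1.0
    return ax * tri(fx), ay * tri(fy)

t = np.linspace(0.0, 200.0, 400000)
x, y = lissajous_xy(t, fx=1.0, fy=np.sqrt(2.0))  # irrational frequency ratio
coverage, _, _ = np.histogram2d(x, y, bins=32)
print(coverage.std() / coverage.mean())          # small value = uniform fill
```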
Maize early endosperm growth and development: from fertilization through cell type differentiation.
Leroux, Brian M; Goodyke, Austin J; Schumacher, Katelyn I; Abbott, Chelsi P; Clore, Amy M; Yadegari, Ramin; Larkins, Brian A; Dannenhoffer, Joanne M
2014-08-01
• Given the worldwide economic importance of maize endosperm, it is surprising that its development is not the most comprehensively studied of the cereals. We present detailed morphometric and cytological descriptions of endosperm development in the maize inbred line B73, for which the genome has been sequenced, and compare its growth with four diverse Nested Association Mapping (NAM) founder lines.• The first 12 d of B73 endosperm development were described using semithin sections of plastic-embedded kernels and confocal microscopy. Longitudinal sections were used to compare endosperm length, thickness, and area.• Morphometric comparison between Arizona- and Michigan-grown B73 showed a common pattern. Early endosperm development was divided into four stages: coenocytic, cellularization through alveolation, cellularization through partitioning, and differentiation. We observed tightly synchronous nuclear divisions in the coenocyte, elucidated that the onset of cellularization was coincident with endosperm size, and identified a previously undefined cell type (basal intermediate zone, BIZ). NAM founders with small mature kernels had larger endosperms (0-6 d after pollination) than lines with large mature kernels.• Our B73-specific model of early endosperm growth links developmental events to relative endosperm size, while accounting for diverse growing conditions. Maize endosperm cellularizes through alveolation, then random partitioning of the central vacuole. This unique cellularization feature of maize contrasts with the smaller endosperms of Arabidopsis, barley, and rice that strictly cellularize through repeated alveolation. NAM analysis revealed differences in endosperm size during early development, which potentially relates to differences in timing of cellularization across diverse lines of maize. © 2014 Botanical Society of America, Inc.
Kwon, Soon Gu; Hyeon, Taeghwan
2008-12-01
Nanocrystals exhibit interesting electrical, optical, magnetic, and chemical properties not achieved by their bulk counterparts. Consequently, to fully exploit the potential of nanocrystals, the synthesis of nanocrystals must focus on producing materials with uniform size and shape. Top-down physical processes can produce large quantities of nanocrystals, but controlling the size is difficult with these methods. On the other hand, colloidal chemical synthetic methods can produce uniform nanocrystals with a controlled particle size. In this Account, we present our synthesis of uniform nanocrystals of various shapes and materials, and we discuss the kinetics of nanocrystal formation. We employed four different synthetic approaches including thermal decomposition, nonhydrolytic sol-gel reactions, thermal reduction, and use of reactive chalcogen reagents. We synthesized uniform oxide nanocrystals via heat-up methods. This method involved slowly heating reaction mixtures composed of metal precursors, surfactants, and solvents from room temperature to high temperature. We then held the reaction mixtures at an aging temperature for a few minutes to a few hours. Kinetics studies revealed a three-step mechanism for the synthesis of nanocrystals through the heat-up method with size distribution control. First, as metal precursors thermally decompose, monomers accumulate. At the aging temperature, burst nucleation occurs rapidly; at the end of this second phase, nucleation stops, but continued diffusion-controlled growth leads to size focusing to produce uniform nanocrystals. We used nonhydrolytic sol-gel reactions to synthesize various transition metal oxide nanocrystals. We employed ester elimination reactions for the synthesis of ZnO and TiO₂ nanocrystals. Uniform Pd nanoparticles were synthesized via a thermal reduction reaction induced by heating a mixture of Pd(acac)₂, tri-n-octylphosphine, and oleylamine to the aging temperature. Similarly, we synthesized nanoparticles of copper and nickel using metal(II) acetylacetonates. Ni/Pd core/shell nanoparticles were synthesized by simply heating the reaction mixture composed of acetylacetonates of nickel and palladium. Using alternative chalcogen reagents, we synthesized uniform nanocrystals of various metal chalcogenides. Uniform nanocrystals of PbS, ZnS, CdS, and MnS were obtained by heating reaction mixtures composed of metal chlorides and sulfur dissolved in oleylamine. In the future, a detailed understanding of nanocrystal formation kinetics and synthetic chemistry will lead to the synthesis of uniform nanocrystals with controlled size, shape, and composition. In particular, the synthesis of uniform nanocrystals of doped materials, core/shell materials, and multicomponent materials is still a challenge. We expect that these uniformly sized nanocrystals will find important applications in areas including information technology, biomedicine, and energy/environmental technology.
Electroformed screens with uniform hole size
NASA Technical Reports Server (NTRS)
Schaer, G. R.
1968-01-01
Efficient method electroforms fine-mesh nickel screens, or plaques, with uniform hole size and accurate spacing between holes. An electroformed nickel mandrel has nonconducting silicone rubber projections that duplicate the desired hole size and shape in the finished nickel screen.
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform the query. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases, due to the high computational complexity and the difficulties in indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller index size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree, and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since we need to balance running-time efficiency and similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash on chemical databases.
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
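The low-rank step that makes such methods fast can be sketched with a Nystrom factorization, which replaces an n x n kernel matrix by an n x m factor built from m landmark samples (the RBF kernel and the landmark choice below are illustrative assumptions, not the fastKM algorithm itself):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_factor(X, landmarks, gamma=0.1, jitter=1e-8):
    """Return L with K ~ L @ L.T: L = C W^{-1/2}, where C = k(X, landmarks)
    and W = k(landmarks, landmarks)."""
    C = rbf_kernel(X, landmarks, gamma=gamma)          # (n, m)
    W = rbf_kernel(landmarks, landmarks, gamma=gamma)  # (m, m)
    evals, evecs = np.linalg.eigh(W + jitter * np.eye(W.shape[0]))
    return C @ evecs / np.sqrt(np.maximum(evals, jitter))

X = np.random.randn(5000, 30)
idx = np.random.choice(len(X), 100, replace=False)
L = nystrom_factor(X, X[idx])       # downstream solves use m x m systems
```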
SU-E-T-104: An Examination of Dose in the Buildup and Build-Down Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, W; Kuo, H; Phillips, J
2015-06-15
Purpose: To examine dose in the buildup and build-down regions and compare measurements made with various models and dosimeters. Methods: Dose was examined in a 30×30 cm² phantom of water-equivalent plastic with 10 cm of backscatter for various field sizes. Examination was performed with radiochromic film and optically stimulated luminescent dosimeter (OSLD) chips, and compared against a plane-parallel chamber with a correction factor applied to approximate the response of an extrapolation chamber. For the build-down region, a correction factor to account for table absorption and chamber orientation in the posterior-anterior direction was applied. The measurement depths used for the film were halfway through their sensitive volumes, and a polynomial best-fit curve was used to determine the dose to their surfaces. This chamber was also compared with the dose expected in a clinical kernel-based computer model and a clinical Boltzmann-transport-equation-based (BTE) computer model. The two models were also compared against each other for cases with air gaps in the buildup region. Results: Within 3 mm, all dosimeters and models agreed with the chamber within 10% for all field sizes. At the entrance surface, film differed in comparison with the chamber from +90% to +15%, the BTE model by +140% to +3%, and the kernel-based model by +20% to −25%, decreasing with increasing field size. At the exit surface, film differed in comparison with the chamber from −10% to −15%, the BTE model by −53% to −50%, and the kernel-based model by −55% to −57%, mostly independent of field size. Conclusion: The largest differences compared with the chamber were found at the surface for all field sizes. Differences decreased with increasing field size and increasing depth in phantom. Air gaps in the buildup region cause dose buildup to occur again post-gap, but the effect decreases with increasing phantom thickness prior to the gap.
NASA Astrophysics Data System (ADS)
Creusen, I. M.; Hazelhoff, L.; De With, P. H. N.
2013-10-01
In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated at the traffic sign detection stage. This paper focuses on reducing the computational load of the sliding-window object detection algorithm employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be efficiently implemented in the frequency domain for kernels larger than a certain size. We show that by careful reordering of sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we suggest using the overlap-add method to keep the memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from the literature (e.g., chi-squared kernel approximations, sub-class splitting algorithms, etc.) can be applied more easily, at a lower performance penalty, because of the improved scalability.
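The core observation, that a linear SVM applied densely over a feature pyramid is a per-channel correlation summed across channels, fits in a few lines; scipy's fftconvolve stands in for the paper's hand-tuned frequency-domain pipeline, and the HOG-like dimensions are assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def svm_score_map(feature_maps, weights, bias=0.0):
    """Dense sliding-window linear-SVM scores: correlate each feature channel
    with its weight plane (kernel flipped to turn convolution into
    correlation) and sum; 'valid' keeps fully covered windows only."""
    out_shape = (feature_maps.shape[1] - weights.shape[1] + 1,
                 feature_maps.shape[2] - weights.shape[2] + 1)
    score = np.full(out_shape, bias)
    for fmap, w in zip(feature_maps, weights):
        score += fftconvolve(fmap, w[::-1, ::-1], mode='valid')
    return score

fmaps = np.random.rand(31, 60, 80)      # e.g. 31-channel HOG pyramid level
weights = np.random.rand(31, 16, 8)     # 16x8-cell linear SVM template
scores = svm_score_map(fmaps, weights)  # (45, 73) detection score map
```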
Development of full wave code for modeling RF fields in hot non-uniform plasmas
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2016-10-01
FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two-, three-, six-, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
Front propagation and clustering in the stochastic nonlocal Fisher equation
NASA Astrophysics Data System (ADS)
Ganan, Yehuda A.; Kessler, David A.
2018-04-01
In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
Front propagation and clustering in the stochastic nonlocal Fisher equation.
Ganan, Yehuda A; Kessler, David A
2018-04-01
In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
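The deterministic cutoff model discussed above can be integrated directly; the sketch below steps u_t = D u_xx + u(1 - K*u) on a periodic 1-D grid, with the growth term switched off below a critical density u_c (the grid sizes, kernel width, and u_c are illustrative):

```python
import numpy as np

def nonlocal_fisher_step(u, kernel_hat, dt, dx, D=1.0, u_c=1e-3):
    """Explicit Euler step; K*u is a periodic convolution done by FFT, and
    the reaction is cut off where u < u_c (the deterministic cutoff)."""
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2
    comp = np.real(np.fft.ifft(np.fft.fft(u) * kernel_hat)) * dx
    growth = np.where(u >= u_c, u * (1.0 - comp), 0.0)
    return u + dt * (D * lap + growth)

n, L = 1024, 200.0
dx = L / n
x = (np.arange(n) - n // 2) * dx
kernel = np.exp(-x ** 2 / (2.0 * 4.0 ** 2))     # Gaussian interaction kernel
kernel /= kernel.sum() * dx                     # unit integral
kernel_hat = np.fft.fft(np.fft.ifftshift(kernel))
u = np.where(np.abs(x) < 5.0, 1.0, 0.0)         # initial droplet
for _ in range(2000):
    u = nonlocal_fisher_step(u, kernel_hat, dt=0.01, dx=dx)
```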
Asymptotics of nonparametric L-1 regression models with dependent data
Zhao, Zhibiao; Wei, Ying; Lin, Dennis K. J.
2013-01-01
We investigate asymptotic properties of least-absolute-deviation or median quantile estimates of the location and scale functions in nonparametric regression models with dependent data from multiple subjects. Under a general dependence structure that allows for longitudinal data and some spatially correlated data, we establish uniform Bahadur representations for the proposed median quantile estimates. The obtained Bahadur representations provide deep insights into the asymptotic behavior of the estimates. Our main theoretical development is based on studying the modulus of continuity of kernel weighted empirical process through a coupling argument. Progesterone data is used for an illustration. PMID:24955016
NASA Astrophysics Data System (ADS)
Gao, Ling; Ren, Shouxin
2005-10-01
Simultaneous determination of Ni(II), Cd(II), Cu(II) and Zn(II) was studied by two methods, kernel partial least squares (KPLS) and wavelet packet transform partial least squares (WPTPLS), with xylenol orange and cetyltrimethylammonium bromide as reagents in a pH 9.22 borax-hydrochloric acid buffer medium. Two programs, PKPLS and PWPTPLS, were designed to perform the calculations. Data reduction was performed using kernel matrices and the wavelet packet transform, respectively. In the KPLS method, the size of the kernel matrix depends only on the number of samples; the method is thus suitable for data matrices with many wavelengths and few samples. Wavelet packet representations of signals provide a local time-frequency description, so in the wavelet packet domain the quality of the noise removal can be improved. In the WPTPLS method, the wavelet function and decomposition level were selected by optimization as Daubechies 12 and 5, respectively. Experimental results showed both methods to be successful even where there was severe overlap of spectra.
NASA Astrophysics Data System (ADS)
Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio
2018-05-01
Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, as its computational cost is linear in the system size. However, the accuracy of OF-DFT strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of vanishing energy gap (i.e., in the case of metals), the KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), the KGAP performs much better than SM, and the results are close to those of state-of-the-art functionals with sophisticated density-dependent kernels.
Kornilov, Oleg; Toennies, J Peter
2015-02-21
The size distribution of para-H₂ (pH₂) clusters produced in free jet expansions at a source temperature of T₀ = 29.5 K and pressures of P₀ = 0.9-1.96 bars is reported and analyzed according to a cluster growth model based on the Smoluchowski theory with kernel scaling. Good overall agreement is found between the measured and predicted shape of the distribution, N_k = A k^a e^(-bk). The fit yields values for A and b for values of a derived from simple collision models. The small remaining deviations between measured abundances and theory imply a (pH₂)ₖ magic number cluster of k = 13, as has been observed previously by Raman spectroscopy. The predicted linear dependence of b^(-(a+1)) on source gas pressure was verified and used to determine the value of the basic effective agglomeration reaction rate constant. A comparison of the corresponding effective growth cross sections σ₁₁ with results from a similar analysis of He cluster size distributions indicates that the latter are much larger, by a factor of 6-10. An analysis of the three-body recombination rates, the geometric sizes, and the fact that the He clusters are liquid independent of their size can explain the larger cross sections found for He.
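Fitting the scaling form to a measured distribution is a one-liner with scipy; the toy data below merely illustrate the shape of the model, N_k = A k^a e^(-bk):

```python
import numpy as np
from scipy.optimize import curve_fit

def nk_model(k, A, a, b):
    """Smoluchowski kernel-scaling form for cluster abundances."""
    return A * k ** a * np.exp(-b * k)

k = np.arange(1.0, 40.0)
Nk = nk_model(k, 100.0, 1.0, 0.25) * np.random.lognormal(0.0, 0.05, k.size)
(A, a, b), _ = curve_fit(nk_model, k, Nk, p0=(50.0, 0.5, 0.1))
```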
NASA Astrophysics Data System (ADS)
Lee, Minho; Kim, Namkug; Lee, Sang Min; Seo, Joon Beom; Oh, Sang Young
2015-03-01
To quantify the low attenuation area (LAA) of emphysematous regions according to cluster size in 3D volumetric CT data of chronic obstructive pulmonary disease (COPD) patients, and to compare these indices with pulmonary function tests (PFT). Sixty patients with COPD were scanned on multi-detector-row CT scanners with 16 or more rows (Siemens Sensation 16 and 64) at 0.75 mm collimation. Based on the LAA masks, a length-scale analysis estimating the size of each emphysematous LAA was performed as follows. Gaussian low-pass filters with kernel sizes from 30 mm down to 1 mm in 1 mm steps were applied to the mask iteratively, from large to small. Centroid voxels resistant to each filter were selected and dilated by the size of the kernel, and the result was regarded as the emphysema mask for that specific size. The slopes of the area and number of size-based LAA (slopes of semi-log plots) were analyzed and compared with PFT. PFT parameters including DLco, FEV1, and FEV1/FVC were significantly (all p-values < 0.002) correlated with the slopes (r-values: -0.73, 0.54, 0.69, respectively) and EI (r-values: -0.84, -0.60, -0.68, respectively). In addition, the slope D contributed independently to the regression for FEV1 and FEV1/FVC (adjusted R² of the regression: EI only, 0.70 and 0.45; EI and D, 0.71 and 0.51, respectively). Through this size-based LAA segmentation and analysis, we evaluated the slopes D of the area, number, and distribution of size-based LAA, which may serve as independent predictors of PFT parameters.
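A schematic reconstruction of the length-scale analysis in Python (the 0.5 survival threshold, the sigma-to-kernel-size mapping, and the dilation radius are assumptions; the abstract does not specify them):

```python
import numpy as np
from scipy import ndimage

def size_based_laa_masks(laa_mask, sizes_mm=range(30, 0, -1), voxel_mm=1.0):
    """Assign each low-attenuation voxel a length scale: voxels whose
    neighborhood survives a Gaussian low-pass of a given width are seeded
    at that scale, then grown back by the same radius."""
    remaining = laa_mask.astype(float)
    size_map = np.zeros(laa_mask.shape, dtype=np.int16)
    for s in sizes_mm:                         # large to small, as in the abstract
        sigma = s / voxel_mm                   # assumed mapping from kernel size
        smoothed = ndimage.gaussian_filter(remaining, sigma)
        seeds = smoothed > 0.5                 # voxels resistant to this filter
        if seeds.any():
            grown = ndimage.binary_dilation(seeds, iterations=int(sigma))
            grown &= laa_mask.astype(bool)
            size_map[grown & (size_map == 0)] = s
            remaining[grown] = 0.0             # exclude from smaller scales
    return size_map
```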
Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki
2016-03-01
The main purpose of this study was to present the results of beam modeling and to show how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of the calculations is important for treatment planning software (TPS) because the energy, spot position, and absolute dose have to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving the in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region is important for the spot scanning technique because the dose distribution is formed by accumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were prepared for comparison. The parameters of the lateral kernel model were derived by fitting a simulated in-water lateral dose profile, induced by an infinitesimal proton beam of zero emittance, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table, from which the parameters for each energy and depth in water were acquired when incorporated into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature; these were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The difference between the two studied kernel models appeared at mid-depths, where the predictive accuracy of the double Gaussian model deteriorated at the low-dose bump. When the double Gaussian kernel model was employed, the accuracy of the calculated absolute dose at the center of the SOBP varied with irradiation conditions, with a maximum difference of 3.4%. In contrast, calculations with the triple Gaussian kernel model agreed with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the two kernel models was most distinct in the high-energy region. The accuracy of calculations with the double Gaussian kernel model varied with field size and SOBP width because the double Gaussian model could not adequately predict the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions; further accumulation of measured data would be needed to quantitatively comprehend the influence of the double and triple Gaussian kernel models on the accuracy of dose calculations.
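A common way to write the lateral kernel models being compared (our notation; the weights and widths are depth-dependent fit parameters, not values from the paper):

```latex
K_n(r,z) \;=\; \sum_{i=1}^{n} \frac{w_i(z)}{2\pi\,\sigma_i^{2}(z)}
\exp\!\left(-\frac{r^{2}}{2\,\sigma_i^{2}(z)}\right),
\qquad \sum_{i=1}^{n} w_i(z) = 1,
```

with n = 2 for the double and n = 3 for the triple Gaussian model; the extra component gives the triple model room to fit the low-dose bump at mid-depths.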
Transient and asymptotic behaviour of the binary breakage problem
NASA Astrophysics Data System (ADS)
Mantzaris, Nikos V.
2005-06-01
The general binary breakage problem with power-law breakage functions and two families of symmetric and asymmetric breakage kernels is studied in this work. A useful transformation leads to an equation that predicts self-similar solutions in its asymptotic limit and offers explicit knowledge of the mean size and particle density at each point in dimensionless time. A novel moving-boundary algorithm in the transformed coordinate system is developed, allowing accurate prediction of the full transient behaviour of the system from the initial condition up to the point where self-similarity is achieved, and beyond if necessary. The numerical algorithm is very rapid and its results are in excellent agreement with known analytical solutions. In the case of symmetric breakage kernels, only unimodal, self-similar number density functions are obtained asymptotically, for all parameter values and independent of the initial conditions, while in the case of asymmetric breakage kernels, bimodality appears for high degrees of asymmetry and sharp breakage functions. For symmetric and discrete breakage kernels, self-similarity is not achieved: the solution exhibits sustained oscillations whose amplitude depends on the initial condition and the sharpness of the breakage mechanism, while the period is always fixed and equal to ln 2 in dimensionless time.
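In standard form, the binary breakage population balance being solved can be sketched as follows (notation assumed, not taken from the paper):

```latex
\frac{\partial n(v,t)}{\partial t}
\;=\; -\,\Gamma(v)\,n(v,t)
\;+\; 2\int_{v}^{\infty} \Gamma(v')\,\beta(v \mid v')\,n(v',t)\,\mathrm{d}v',
\qquad \Gamma(v) \propto v^{\gamma},
```

where Γ is the power-law breakage rate, β(v|v′) is the (symmetric or asymmetric) breakage kernel giving the fragment size distribution, and the factor 2 reflects the two daughters produced per binary breakage event.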
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
NASA Astrophysics Data System (ADS)
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density achieves substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors owing to the accuracy with which computational intensity is quantified. Our approach is portable to other space-time analytical methods.
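The per-point computation being parallelized has the following shape (a minimal sketch; the Epanechnikov kernels and the normalization are common choices for space-time kernel density, not necessarily the study's exact ones):

```python
import numpy as np

def epanechnikov(u):
    """1-D Epanechnikov kernel, zero outside |u| > 1."""
    return np.where(np.abs(u) < 1, 0.75 * (1 - u**2), 0.0)

def stkde(x, y, t, xi, yi, ti, hs, ht):
    """Space-time kernel density at one evaluation point (x, y, t) from
    events (xi, yi, ti); hs and ht are the spatial and temporal bandwidths
    that also set the subdomain buffer widths."""
    ds = np.sqrt((xi - x)**2 + (yi - y)**2) / hs
    ks = np.where(ds < 1, (2 / np.pi) * (1 - ds**2), 0.0)  # 2-D Epanechnikov
    kt = epanechnikov((ti - t) / ht)
    return (ks * kt).sum() / (len(xi) * hs**2 * ht)
```

Because only events within hs and ht of a subdomain can contribute to its grid points, the buffered octree decomposition preserves exactness while the per-subdomain event counts drive the workload estimate.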
Kernel machine methods for integrative analysis of genome-wide methylation and genotyping studies.
Zhao, Ni; Zhan, Xiang; Huang, Yen-Tsung; Almli, Lynn M; Smith, Alicia; Epstein, Michael P; Conneely, Karen; Wu, Michael C
2018-03-01
Many large GWAS consortia are expanding to simultaneously examine the joint role of DNA methylation in addition to genotype in the same subjects. However, integrating information from both data types is challenging. In this paper, we propose a composite kernel machine regression model to test the joint epigenetic and genetic effect. Our approach works at the gene level, which allows for a common unit of analysis across different data types. The model compares the pairwise similarities in the phenotype to the pairwise similarities in the genotype and methylation values; high correspondence is suggestive of association. A composite kernel is constructed to measure the similarities in the genotype and methylation values between pairs of samples. We demonstrate through simulations and real data applications that the proposed approach correctly controls the type I error, and is more robust and powerful than using only the genotype or methylation data in detecting trait-associated genes. We applied our method to investigate the genetic and epigenetic regulation of gene expression in response to stressful life events using data collected from the Grady Trauma Project. Within the kernel machine testing framework, our methods allow for heterogeneity in effect sizes, nonlinear and interactive effects, as well as rapid P-value computation. © 2017 WILEY PERIODICALS, INC.
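A minimal sketch of a composite kernel and the score-type statistic it typically feeds (linear kernels, a fixed weight rho, and simple centering are illustrative assumptions; in practice the weight is tuned or profiled over):

```python
import numpy as np

def linear_kernel(Z):
    """Pairwise similarity between samples from one data type."""
    return Z @ Z.T

def composite_kernel(G, M, rho=0.5):
    """Weighted sum of genotype (G) and methylation (M) kernels for one gene."""
    return rho * linear_kernel(G) + (1 - rho) * linear_kernel(M)

def score_statistic(y, K):
    """Variance-component score-type statistic Q = (y - ybar)' K (y - ybar),
    the usual form in kernel machine association tests."""
    r = y - y.mean()
    return float(r @ K @ r)
```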
NASA Astrophysics Data System (ADS)
Talbot, C.; McClure, J. E.; Armstrong, R. T.; Mostaghimi, P.; Hu, Y.; Miller, C. T.
2017-12-01
Microscale experimental and computational methods can be used to evaluate fundamental microscale mechanisms and deduce macroscale constitutive relationships and parameter values. The link between the microscale and the macroscale is especially demanding because technical issues arise regarding the scale of system needed for a meaningful set of macroscale measures to be insensitive to the size of the system, known as a representative elementary volume (REV). While the REV scale is routinely determined for single-phase flow in porous media, no systematic study of the REV scale for the comprehensive set of macroscale measures considered here has been reported in the literature. A comprehensive set of measures of the macroscale state is developed. We further develop and apply methods to predict the REV scale and quantify the uncertainty of the estimate for this set of macroscale quantities. We model the system state, in terms of the standard errors of macroscale quantities, as a multivariate Gaussian process dependent on the size of the domain simulated. We determine predictive distributions of function values and posterior distributions of weights using standard kernels, as well as a kernel constructed using relationships between physical quantities. For each kernel, we discuss the decay of the mean and covariance with increasing domain size, and use cross-validation to facilitate model selection. The procedure yields a model of the domain size needed to achieve a REV, with quantifiable uncertainty. We present results in the context of multiphase fluid flow through a highly resolved realization of sandstone imaged using micro-CT. A 1440×1440×4320 section of the full 2520×2520×5280 imaged medium is simulated using the lattice-Boltzmann method. We compare the fidelity of the predictive model with results obtained by an analogous approach using polynomial regression.
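A toy version of the core idea, regressing the standard error of one macroscale measure on domain size with a Gaussian process and extrapolating, might look like this (the zero-mean GP, the made-up data, and the squared-exponential kernel are all simplifications of the paper's multivariate model):

```python
import numpy as np

def rbf(a, b, ell=500.0, amp=0.05):
    """Squared-exponential kernel over domain size (ell, amp are assumptions)."""
    return amp * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

# Toy data: standard error of one macroscale measure vs. simulated domain size.
L = np.array([120.0, 240.0, 480.0, 960.0])
se = np.array([0.20, 0.11, 0.06, 0.035])

K = rbf(L, L) + 1e-6 * np.eye(len(L))          # jitter for numerical stability
Lq = np.linspace(120.0, 4000.0, 40)            # domain sizes to extrapolate to
se_pred = rbf(Lq, L) @ np.linalg.solve(K, se)  # GP posterior mean
```

The predicted decay of the standard error with domain size is then read off to find the smallest domain whose uncertainty meets an REV tolerance.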
Kinetic behaviours of aggregate growth driven by time-dependent migration, birth and death
NASA Astrophysics Data System (ADS)
Zhu, Sheng-Qing; Yang, Shun-You; Ke, Jianhong; Lin, Zhenquan
2008-12-01
We propose a dynamic growth model to mimic some social phenomena, such as the evolution of cities' population, in which monomer migrations occur between any two aggregates and monomer birth/death can simultaneously occur in each aggregate. Considering the fact that the rate kernels of migration, birth and death processes may change with time, we assume that the migration rate kernel is ijf(t), and the self-birth and death rate kernels are ig_1(t) and ig_2(t), respectively. Based on the mean-field rate equation, we obtain the exact solution of this model and then discuss semi-quantitatively the scaling behaviour of the aggregate size distribution at large times. The results show that in the long-time limit, (i) if ∫_0^t g_1(t') dt' / ∫_0^t g_2(t') dt' ≥ 1 or exp{∫_0^t [g_2(t') − g_1(t')] dt'} / ∫_0^t f(t') dt' → 0, the aggregate size distribution a_k(t) can obey a generalized scaling form; (ii) if ∫_0^t g_1(t') dt' / ∫_0^t g_2(t') dt' → 0 and exp{∫_0^t [g_2(t') − g_1(t')] dt'} / ∫_0^t f(t') dt' → ∞, a_k(t) can take a scale-free form and decay exponentially in size k; (iii) a_k(t) satisfies a modified scaling law in the remaining cases. Moreover, the total mass of aggregates depends strongly on the net birth rate g_1(t) − g_2(t) and evolves exponentially as exp{∫_0^t [g_1(t') − g_2(t')] dt'}, which is in qualitative agreement with the evolution of the total population of a country in the real world.
Chemical Interruption of Flowering to Improve Harvested Peanut Maturity
USDA-ARS?s Scientific Manuscript database
Peanut (Arachis hypogaea) is a botanically indeterminate plant where flowering, fruit initiation, and pod maturity occurs over an extended time period during the growing season. As a result, the maturity and size of individual peanut pods varies considerably at harvest. Immature kernels that meet...
Exact combinatorial approach to finite coagulating systems
NASA Astrophysics Data System (ADS)
Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr
2018-02-01
This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete, and binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. This probability is then used to calculate the time-dependent distribution of the number of clusters of a given size, the average number of such clusters, and the standard deviation of that average. The correctness of our general expressions is verified against the analytical and numerical results obtained for systems with the constant kernel. In addition, the results are compared with the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion of extending the approach presented herein to other systems, emphasizing the issue of arbitrary initial conditions.
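For concreteness, the mean-field benchmark against which such combinatorial results are compared can be integrated numerically and checked against the known constant-kernel solution c_k(t) = t^(k-1)/(1+t)^(k+1) (the K = 2 normalization and the size cutoff are our assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

KMAX = 60  # truncate cluster sizes; adequate for the short times used here

def smoluchowski_rhs(t, c, K=2.0):
    """Mean-field Smoluchowski equation with constant kernel K(i, j) = K."""
    dc = np.zeros_like(c)
    total = c.sum()
    for k in range(1, KMAX + 1):
        gain = 0.5 * K * sum(c[i - 1] * c[k - i - 1] for i in range(1, k))
        loss = K * c[k - 1] * total
        dc[k - 1] = gain - loss
    return dc

c0 = np.zeros(KMAX); c0[0] = 1.0          # monodisperse initial condition
sol = solve_ivp(smoluchowski_rhs, (0, 2), c0, rtol=1e-8, atol=1e-10)

t = sol.t[-1]
ks = np.arange(1, 6)
analytic = t**(ks - 1) / (1 + t)**(ks + 1)  # exact constant-kernel solution
print(sol.y[:5, -1], analytic)              # the two should agree closely
```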
[Adaptability of sweet corn ears to a frozen process].
Ramírez Matheus, Alejandra O; Martínez, Norelkys Maribel; de Bertorelli, Ligia O; De Venanzi, Frank
2004-12-01
The effects of freezing on the quality of three sweet corn hybrids (2038, 2010, 2004) and the control hybrid (Bonanza) were evaluated. Biometric characteristics such as ear size, ear diameter, row number, and kernel depth were measured, along with chemical and physical measurements in the fresh and frozen states. The corn ears were frozen at -95 degrees C for 7 minutes. The yield and stability of the frozen ears were evaluated at 45 and 90 days of frozen storage (-18 degrees C). The average commercial yield as frozen corn ears across all hybrids was 54.2%, similar to the industry range of 48% to 54%. The average ear size was 21.57 cm, the row number 15, the ear diameter 45.54 mm, and the kernel depth 8.57 mm; none of these measurements differed from commercial values found in the industry. All corn samples evaluated showed good stability despite the freezing process and storage. Hybrid 2038 ranked highest in quality.
NASA Astrophysics Data System (ADS)
Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long
2014-11-01
The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for the surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of the wall distance, the element size on the wall, and the element size at the center of the airway lumen. The element sizes on the wall are computed based on the local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and a smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
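A background-mesh size field of the kind described can be illustrated as follows (the linear blend between wall and centerline sizes is an assumption; the paper's exact sizing law is not given here):

```python
def element_size(d_wall, h_wall, h_center, r_lumen):
    """Illustrative size field: fine near the wall to resolve the boundary
    layer, growing smoothly toward the airway centerline."""
    f = min(max(d_wall / r_lumen, 0.0), 1.0)   # normalized wall distance in [0, 1]
    return h_wall + (h_center - h_wall) * f
```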
Retrieval of the aerosol size distribution in the complex anomalous diffraction approximation
NASA Astrophysics Data System (ADS)
Franssens, Ghislain R.
This contribution reports some recently achieved results in aerosol size distribution retrieval in the complex anomalous diffraction approximation (ADA) to Mie scattering theory. This approximation is valid for spherical particles that are large compared to the wavelength and have a refractive index close to 1. The ADA kernel is compared with the exact Mie kernel. Despite being a simple approximation, the ADA seems to have some practical value for the retrieval of the larger modes of tropospheric and lower stratospheric aerosols. The ADA has the advantage over Mie theory that an analytic inversion of the associated Fredholm integral equation becomes possible. In addition, spectral inversion in the ADA can be formulated as a well-posed problem. In this way, a new inverse formula was obtained, which allows the direct computation of the size distribution as an integral over the spectral extinction function. This formula is valid for particles that both scatter and absorb light and it also takes the spectral dispersion of the refractive index into account. Some details of the numerical implementation of the inverse formula are illustrated using a modified gamma test distribution. Special attention is given to the integration of spectrally truncated discrete extinction data with errors.
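For reference, the real-index form of the anomalous diffraction extinction kernel (van de Hulst; the complex generalization used here replaces m − 1 by its complex value) is:

```latex
Q_{\mathrm{ext}}(\rho) \;=\; 2 \;-\; \frac{4}{\rho}\sin\rho
\;+\; \frac{4}{\rho^{2}}\left(1 - \cos\rho\right),
\qquad \rho = 2x\,(m - 1),
```

where x is the size parameter and m the refractive index.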
Masoumi, Hamid Reza Fard; Basri, Mahiran; Samiun, Wan Sarah; Izadiyan, Zahra; Lim, Chaw Jiang
2015-01-01
Aripiprazole is considered a third-generation antipsychotic drug with excellent therapeutic efficacy in controlling schizophrenia symptoms and was the first atypical antipsychotic agent to be approved by the US Food and Drug Administration. Formulation of a nanoemulsion containing aripiprazole was carried out using high-shear and high-pressure homogenizers. A mixture experimental design was selected to optimize the composition of the nanoemulsion. A very small emulsion droplet size can provide effective encapsulation for a delivery system in the body. The effects of palm kernel oil ester (3-6 wt%), lecithin (2-3 wt%), Tween 80 (0.5-1 wt%), glycerol (1.5-3 wt%), and water (87-93 wt%) on the droplet size of aripiprazole nanoemulsions were investigated. The mathematical model showed that the optimum formulation meeting the desired criteria was 3.00% palm kernel oil ester, 2.00% lecithin, 1.00% Tween 80, 2.25% glycerol, and 91.75% water. Under the optimum formulation, the predicted droplet size was 64.24 nm, in excellent agreement with the actual value (62.23 nm), with a residual standard error <3.2%.
Benchmarking NWP Kernels on Multi- and Many-core Processors
NASA Astrophysics Data System (ADS)
Michalakes, J.; Vachharajani, M.
2008-12-01
Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare the effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
Short-Term File Reference Patterns in a UNIX Environment,
1986-03-01
accounts mentioned above. This includes major administrative and status files (for example, /etc/passwd), system libraries, system include files and so on... "System" files are those appearing in / and /etc. Examples are /vmunix (the bootable kernel image) and /etc/passwd (passwords and other information on accounts...as /etc/passwd). The small size of opened files (55% are under 1024 bytes, a common block transfer size, and 75% are under 4096 bytes) suggests that
Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan
2012-05-15
Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited-state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to a factor of 2 over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.
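The shape of such a harness, sweeping a small parameter space and timing a representative kernel at each point, can be sketched in a few lines (the parameter grid and the numpy stand-in kernel are purely illustrative; the paper targets CUDA tensor contractions):

```python
import itertools
import time
import numpy as np

def microbenchmark(fn, grid):
    """Run fn over a small Cartesian parameter grid and record wall times;
    a stand-in for the paper's parameterized micro-benchmarks."""
    results = {}
    for params in itertools.product(*grid.values()):
        kwargs = dict(zip(grid.keys(), params))
        start = time.perf_counter()
        fn(**kwargs)
        results[params] = time.perf_counter() - start
    return results

def tensor_contraction(n, order):
    """Toy contraction whose implementation variant is a tunable parameter."""
    a, b = np.random.rand(n, n), np.random.rand(n, n)
    _ = a @ b if order == "ab" else b @ a

print(microbenchmark(tensor_contraction, {"n": [64, 128], "order": ["ab", "ba"]}))
```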
An alternative covariance estimator to investigate genetic heterogeneity in populations.
Heslot, Nicolas; Jannink, Jean-Luc
2015-11-26
For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model; parameters are estimated by REML and, in extreme cases, the model can allow for an independent genetic architecture between populations. As such, K-kernel is useful for studying the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases, performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. This alternative covariance estimator can be used to gain insight into trait-specific genetic heterogeneity by identifying relevant sub-populations that lack genetic correlation between them. Genetic correlation between identified sub-populations can be set to zero by automatically selecting relevant sets of individuals to include in the training population. It may also increase statistical power in GWAS.
Popcorn: An Explosive Mixture of General Mathematics.
ERIC Educational Resources Information Center
Westerberg, Judy; Whiting, Jack
1992-01-01
Presents an activity developed for back-to-back general science and mathematics classes involving measurement, data analysis, and consumer mathematics. Students compare brands of popcorn for number of popped and unpopped kernels, volume, size, color, texture, and flavor, and develop advertisements for the best brands. Suggests possible extension…
An improved Monte-Carlo model of the Varian EPID separating support arm and rear-housing backscatter
NASA Astrophysics Data System (ADS)
Monville, M. E.; Kuncic, Z.; Greer, P. B.
2014-03-01
Previous investigators of EPID dosimetric properties have ascribed the backscatter that contaminates dosimetric EPID images to its support arm. Accordingly, Monte-Carlo (MC) EPID models have approximated the backscatter signal from the layers under the detector and the robotic support arm using either uniform or non-uniform solid water slabs, or through convolutions with backscatter kernels. The aim of this work is to improve existing MC models by measuring and modelling the separate backscatter contributions of the robotic arm and the rear plastic housing of the EPID. The EPID plastic housing is non-uniform, with an 11.9 cm wide indented section running across the cross-plane direction in the superior half of the EPID that is 1.75 cm closer to the EPID sensitive layer than the rest of the housing. The thickness of the plastic housing is 0.5 cm. Experiments were performed with and without the housing present by removing all components of the EPID from the housing; the robotic support arm was not present for these measurements. A MC model of the linear accelerator and the EPID was modified to include the rear-housing indentation and the results were compared to the measurements. The rear housing was found to contribute a maximum of 3% additional signal. The rear-housing contribution to the image is non-uniform in the in-plane direction, with 2% asymmetry across the central 20 cm of an image irradiating the entire detector. The MC model was able to reproduce this non-uniform contribution. The EPID rear housing contributes a non-uniform backscatter component to the EPID image, which had not been previously characterized. This has been incorporated into an improved MC model of the EPID.
Calculation of flexoelectric deformations of finite-size bodies
NASA Astrophysics Data System (ADS)
Yurkov, A. S.
2015-03-01
The previously developed approximate theory of flexoelectric deformations of finite-size bodies has been considered as applied to three special cases: a uniformly polarized ball, a uniformly polarized circular rod, and a uniformly polarized thin circular plate of an isotropic material. For these cases simple algebraic formulas have been derived. In the case of the ball, the solution is compared with the previously obtained exact solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, TK
Purpose: In proton beam configuration for spot scanning proton therapy (SSPT), one can define the spacing between spots and lines of scanning as a ratio of a given spot size. If the spacing increases, the number of spots decreases, which can potentially decrease the scan time, and so the whole treatment time, and vice versa. However, if the spacing is too large, the uniformity of the scanned field decreases. The field uniformity can also be affected by motion during SSPT beam delivery. In the present study, the interplay between spot/line spacing and motion is investigated. Methods: We used four Gaussian-shaped spot sizes with 0.5 cm, 1.0 cm, 1.5 cm, and 2.0 cm FWHM; three spot/line spacings that create a uniform field profile, namely 1/3*FWHM, σ/3*FWHM and 2/3*FWHM; and three random motion amplitudes, within +/-0.3 mm, +/-0.5 mm, and +/-1.0 mm. We planned a uniform 2 Gy single layer for 10×10 cm2 and 20×20 cm2 fields. Then the mean dose within the central 80% of the given field size, the contributing MU per spot (assuming a 1 cGy/MU calibration for all spot sizes), the number of spots, and the uniformity were calculated. Results: The plans with spot/line spacing equal to or smaller than 2/3*FWHM without motion create ~100% uniformity. However, the uniformity was found to decrease with increased spacing; this effect is more pronounced with smaller spot sizes but is not affected by the scanned field size. Conclusion: Motion during proton beam delivery can alter the dose uniformity, and the amount of alteration changes with spot size, which changes with energy, and with spot/line spacing. Currently, robust evaluation in a TPS (e.g., the Eclipse system) performs range uncertainty evaluation using isocenter shifts and CT calibration errors. Based on the presented study, it is recommended to add interplay-effect evaluation to the robust evaluation process. A future study of the additional interplay between energy layers and motion is expected to reveal a volumetric effect.
NASA Astrophysics Data System (ADS)
Yoon, Jinsik; Kim, Kibeom; Park, Wook
2017-07-01
We present an essential method for generating microparticles uniformly in a single ultraviolet (UV) light exposure area for optofluidic maskless lithography. In the optofluidic maskless lithography process, the productivity of monodisperse microparticles depends on the size of the UV exposure area. An effective fabrication area is determined by the size of the UV intensity profile map, satisfying the required uniformity of UV intensity. To increase the productivity of monodisperse microparticles in optofluidic maskless lithography, we expanded the effective UV exposure area by modulating the intensity of the desired UV light pattern based on the premeasured UV intensity profile map. We verified the improvement of the uniformity of the microparticles generated by the proposed modulation technique, providing histogram analyses of the conjugated fluorescent intensities and the sizes of the microparticles. Additionally, we demonstrated the generation of DNA uniformly encapsulated in microparticles.
Assessment of the influence of field size on maize gene flow using SSR analysis.
Palaudelmàs, M; Melé, E; Monfort, A; Serra, J; Salvia, J; Messeguer, J
2012-06-01
One of the factors that may influence the rate of cross-fertilization is the relative size of the pollen donor and receptor fields. We designed a spatial distribution with four varieties of genetically modified (GM) yellow maize to generate fields of different sizes while maintaining a constant distance to neighbouring fields of conventional white-kernel maize. Samples of cross-fertilized, yellow kernels in white cobs were collected from all of the adjacent fields at different distances. A special series of samples was collected at distances of 0, 2, 5, 10, 20, 40, 80 and 120 m along a transect traced in the dominant down-wind direction in order to identify the origin of the pollen through SSR analysis. The size of the receptor fields should be taken into account, especially when they extend in the direction from which the GM pollen flow comes. From the collected data, we then validated a function that takes into account the gene flow found at the field border and that is very useful for estimating the percentage of GM kernels at any point in the field; it also serves to predict the total GM content of the field due to cross-fertilization. Using SSR analysis to identify the origin of pollen showed that while changes in the size of the donor field clearly influence the percentage of GMO detected, this effect is moderate. This study demonstrates that doubling the donor field size resulted in an approximate increase of GM content in the receptor field of 7%. This indicates that variations in the size of the donor field have a smaller influence on GM content than variations in the size of the receptor field.
Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin
2016-12-01
To find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software Eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated based on the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desired proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the Eclipse TPS. The validation included 420 comparisons of the FSFs from 14 proton energies, field sizes from 2 to 20 cm, and depths from 20% to 80% of the proton range. The relative in-water lateral profiles from the in-house calculation and the Eclipse TPS agree very well, even at the level of 10^-4. The FSFs also agree well: the maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time needed to find the desired proton fluences for the clinical energies. The method is extensively validated and can be applied at any proton center using PBS and the Eclipse TPS.
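The FSF calculation amounts to summing spot contributions at the field center for each field size and normalizing to a reference field. A minimal sketch (the two-Gaussian spot model, the weights and sigmas, and the spot spacing are illustrative assumptions, not commissioned values):

```python
import numpy as np

def spot_profile(r2, sigma1=0.4, sigma2=2.0, w2=0.05):
    """Single-spot lateral dose: narrow primary Gaussian plus a broad halo."""
    g1 = np.exp(-r2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = np.exp(-r2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    return (1 - w2) * g1 + w2 * g2

def central_dose(field_cm, spacing=0.25):
    """Dose at the field center from a uniform grid of equally weighted spots."""
    half = field_cm / 2
    xs = np.arange(-half, half + 1e-9, spacing)
    gx, gy = np.meshgrid(xs, xs)
    return spot_profile(gx**2 + gy**2).sum()

fsf = {f: central_dose(f) / central_dose(10.0) for f in (2, 4, 10, 20)}
print(fsf)  # field size factors relative to a 10 x 10 cm reference field
```

The broad halo component is what makes the FSF grow with field size, which is why the FSF is sensitive to the reconstructed spot profile.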
Gajera, H P; Gevariya, Shila N; Hirpara, Darshna G; Patel, S V; Golakiya, B A
2017-09-01
Fruit phenolics are important dietary antioxidant and antidiabetic constituents. The fruit parts (pulp, seed, seed coat, kernel) of six underutilized indigenous black jamun landraces (Syzygium cumini L.), found in the Gir forest region of India and differing in fruit size, shape and weight, were evaluated and correlated with antidiabetic activity, DPPH radical scavenging and phenolic constituents. α-Amylase inhibitors offer an efficient antidiabetic strategy, lowering postprandial hyperglycemia by restraining starch breakdown. Sequential Soxhlet extraction by the hot percolation method was performed with solvents of ascending polarity (petroleum ether, ethyl acetate, methanol and water), and the extractive yield was highest for the methanolic extracts of the fruit parts of the six landraces. The methanolic extracts of the fruit parts also showed higher antidiabetic activity and were hence used for further characterization. Among the six landraces, the pulp and kernel of BJLR-6 (very small, oblong fruits) showed maximum inhibition of α-amylase activity, 53.8 and 98.2%, respectively. The inhibitory activity of the seed was mostly contributed by the kernel fraction. The inhibition of DPPH radical scavenging activity was positively correlated with the phenolic constituents. An HPLC-PDA technique was used to quantify seven individual phenolics. The seed and kernel of BJLR-6 were higher in the individual phenolics gallic acid, catechin, ellagic acid, ferulic acid and quercetin, whereas the pulp was higher in gallic acid and catechin as α-amylase inhibitors. The IC50 value indicates the concentration of fruit extract exhibiting ≥50% inhibition of porcine pancreatic α-amylase (PPA) activity. The kernel fraction of BJLR-6 showed the lowest IC50 value (8.3 µg ml-1), followed by the seed (12.9 µg ml-1), seed coat (50.8 µg ml-1) and pulp (270 µg ml-1). The seed and kernel of BJLR-6 inhibited PPA at much lower concentrations than standard acarbose (24.7 µg ml-1), making them good candidates for antidiabetic herbal formulations.
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
NASA Technical Reports Server (NTRS)
Haviland, J. K.
1974-01-01
The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases the use of doublet-type solutions of the wave equation would prove erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.
Comparative habitat use of sympatric Mexican spotted and great horned owls
Joseph L. Ganey; William M. Block; Jeffrey S. Jenness; Randolph A. Wilson
1997-01-01
To provide information on comparative habitat use, we studied radiotagged Mexican spotted owls (Strix occidentalis lucida: n = 13) and great horned owls (Bubo virginianus: n = 4) in northern Arizona. Home-range size (95% adaptive kernel estimate) did not differ significantly between species during either the breeding or nonbreeding...
Chemical interruption of late season flowering to improve harvested peanut maturity
USDA-ARS?s Scientific Manuscript database
Peanut (Arachis hypogaea) is a botanically indeterminate plant where flowering, fruit initiation, and pod maturity occurs over an extended time period during the growing season. As a result, the maturity and size of individual peanut pods varies considerably at harvest. Immature kernels that meet co...
NASA Astrophysics Data System (ADS)
Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir
2018-03-01
Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of the wall angle, feed rate, and step size. The effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of aluminium sheet were also investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produced lower surface roughness, while a more uniform thickness reduction was obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.
Multi-board kernel communication using socket programming for embedded applications
NASA Astrophysics Data System (ADS)
Mishra, Ashish; Girdhar, Neha; Krishnia, Nikita
2016-03-01
In large application projects, there is often a need to communicate between two different processors or two different kernels. The aim of this paper is to communicate between two different kernels using an efficient method. The TCP/IP protocol is implemented to communicate between two boards via the Ethernet port, using the lwIP (lightweight IP) stack, a smaller independent implementation of the TCP/IP stack suitable for use in embedded systems. While retaining TCP/IP functionality, the lwIP stack reduces memory usage and code size. In this communication scheme, a Raspberry Pi acts as the active client and a field-programmable gate array (FPGA) board as the passive server, and they communicate via Ethernet. Three applications based on TCP/IP client-server network communication have been implemented. The echo server application is used to communicate between the two different kernels on the two boards. Socket programming is used as it is independent of platform and programming language. TCP transmit and receive throughput test applications are used to measure the maximum throughput of data transmission. These applications are modeled on the open-source tool iperf, which measures throughput by sending or receiving a constant stream of data to the client or server according to the test application.
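The echo pattern described above looks like the following in ordinary BSD-style sockets (shown in Python for brevity; the paper's implementation uses the lwIP C API, and the host address and port here are made up):

```python
import socket

HOST, PORT = "192.168.1.10", 5001   # assumed address of the FPGA echo server

def echo_client(message: bytes) -> bytes:
    """Active client: connect, send, and read back the echoed payload,
    mirroring the Raspberry Pi side of the echo test."""
    with socket.create_connection((HOST, PORT), timeout=5.0) as sock:
        sock.sendall(message)
        return sock.recv(len(message))

def echo_server(port: int = PORT) -> None:
    """Passive server loop, standing in for the lwIP echo task on the FPGA."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(1024):
                conn.sendall(data)
```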
Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V
2011-02-07
The GATE Monte Carlo simulation platform based on the GEANT4 toolkit has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.
Detection of Splice Sites Using Support Vector Machine
NASA Astrophysics Data System (ADS)
Varadwaj, Pritish; Purohit, Neetesh; Arora, Bhumika
Automatic identification and annotation of the exon and intron regions of genes from DNA sequences has been an important research area in the field of computational biology. Several approaches, viz. Hidden Markov Models (HMM), Artificial Intelligence (AI) based machine learning, and Digital Signal Processing (DSP) techniques, have been used extensively and independently by various researchers for this challenging task. In this work, we propose a Support Vector Machine based kernel learning approach for the detection of splice sites (the exon-intron boundaries) in a gene. Electron-Ion Interaction Potential (EIIP) values of nucleotides have been used for mapping character sequences to corresponding numeric sequences. A Radial Basis Function (RBF) SVM kernel is trained using the EIIP numeric sequences. This was then tested on a gene dataset for splice site detection by shifting a window of 12 residues. The window size and the key SVM kernel parameters were optimized for better accuracy. Receiver Operating Characteristic (ROC) curves were used to display the sensitivity of the classifier, and the results showed 94.82% accuracy for splice site detection on the test dataset.
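The pipeline is straightforward to sketch with scikit-learn (the EIIP values below are as commonly tabulated but should be treated as assumptions here, and the four training windows are toy data):

```python
import numpy as np
from sklearn.svm import SVC

# EIIP values for nucleotides (commonly tabulated; assumed, not from the paper)
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def encode(seq: str) -> np.ndarray:
    """Map a 12-residue window around a candidate splice site to numbers."""
    return np.array([EIIP[base] for base in seq])

# Toy training data: windows labeled 1 (true splice site) or 0 (decoy).
windows = ["AAGGTAAGTCCA", "CTGGTGAGTACT", "ACGTACGTACGT", "TTTTCCCCAAAA"]
labels = [1, 1, 0, 0]
X = np.vstack([encode(w) for w in windows])

clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, labels)
print(clf.predict(encode("AAGGTAAGTCCA").reshape(1, -1)))
```

In use, the trained classifier is slid across a gene one residue at a time, scoring each 12-residue window as splice site or not.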
Tarjan, Lily M; Tinker, M. Tim
2016-01-01
Parametric and nonparametric kernel methods dominate studies of animal home ranges and space use. Most existing methods are unable to incorporate information about the underlying physical environment, leading to poor performance in excluding areas that are not used. Using radio-telemetry data from sea otters, we developed and evaluated a new algorithm for estimating home ranges (hereafter Permissible Home Range Estimation, or “PHRE”) that reflects habitat suitability. We began by transforming sighting locations into relevant landscape features (for sea otters, coastal position and distance from shore). Then, we generated a bivariate kernel probability density function in landscape space and back-transformed this to geographic space in order to define a permissible home range. Compared to two commonly used home range estimation methods, kernel densities and local convex hulls, PHRE better excluded unused areas and required a smaller sample size. Our PHRE method is applicable to species whose ranges are restricted by complex physical boundaries or environmental gradients and will improve understanding of habitat-use requirements and, ultimately, aid in conservation efforts.
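The landscape-space density step at the heart of this idea can be sketched as follows (toy sea-otter-like data; the coordinates, the default bandwidth, and the 5% density cutoff are assumptions, and the back-transformation to geographic space is omitted):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical sightings already transformed to landscape coordinates:
# position along the coast (km) and distance from shore (km).
rng = np.random.default_rng(1)
coast_pos = rng.normal(12.0, 3.0, 200)
offshore = np.abs(rng.normal(0.4, 0.2, 200))

kde = gaussian_kde(np.vstack([coast_pos, offshore]))  # bivariate landscape KDE

def in_permissible_range(p, d, density_cutoff=0.05):
    """A point belongs to the permissible home range if its landscape-space
    density exceeds a fraction of the peak density (the 5% level is assumed)."""
    return kde([p, d])[0] > density_cutoff * kde(kde.dataset).max()
```

Because the density is estimated in landscape space, geographic areas that map to implausible landscape coordinates (e.g., far inland) receive essentially zero density and are excluded from the range.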
Optical design for uniform scanning in MEMS-based 3D imaging lidar.
Lee, Xiaobao; Wang, Chunhui
2015-03-20
This paper proposes a method for designing an optical system for uniform scanning over a large scan field of view (FOV) in 3D imaging lidar. The theoretical formulas for the design scheme are derived. Using the optical design software ZEMAX, a foldaway uniform-scanning optical system based on MEMS was designed, and the scanning uniformity and spot size of the system on the target plane, perpendicular to the optical axis, are analyzed and discussed. Results show that the designed system can scan uniformly within a FOV of 40°×40° with small spot size for a target at a distance of about 100 m.
Lei, Lei; Chen, Daqin; Huang, Ping; Xu, Ju; Zhang, Rui; Wang, Yuansheng
2013-11-21
NaGdF4 is regarded as an ideal upconversion (UC) host material for lanthanide (Ln(3+)) activators because of its unique crystal structure, high Ln(3+) solubility, low phonon energy and high photochemical stability, and Ln(3+)-doped NaGdF4 UC nanocrystals (NCs) have been widely investigated as bio-imaging and magnetic resonance imaging agents recently. To realize their practical applications, controlling the size and uniformity of the monodisperse Ln(3+)-doped NaGdF4 UC NCs is highly desired. Unlike the routine routes by finely adjusting the multiple experimental parameters, herein we provide a facile and straightforward strategy to modify the size and uniformity of NaGdF4 NCs via alkaline-earth doping for the first time. With the increase of alkaline-earth doping content, the size of NaGdF4 NCs increases gradually, while the size-uniformity is still retained. We attribute this "focusing" of size distribution to the diffusion controlled growth of NaGdF4 NCs induced by alkaline-earth doping. Importantly, adopting the Ca(2+)-doped Yb/Er:NaGdF4 NCs as cores, the complete Ca/Yb/Er:NaGdF4@NaYF4 core-shell particles with excellent size-uniformity can be easily achieved. However, when taking the Yb/Er:NaGdF4 NCs without Ca(2+) doping as cores, they could not be perfectly covered by NaYF4 shells, and the obtained products are non-uniform in size. As a result, the UC emission intensity of the complete core-shell NCs increases by about 30 times in comparison with that of the cores, owing to the effective surface passivation of the Ca(2+)-doped cores and therefore protection of Er(3+) in the cores from the non-radiative decay caused by surface defects, whereas the UC intensity of the incomplete core-shell NCs is enhanced by only 3 times.
A facile method for the preparation of monodisperse beads with uniform pore sizes for cell culture.
Moon, Seung-Kwan; Oh, Myeong-Jin; Paik, Dong-Hyun; Ryu, Tae-Kyung; Park, Kyeongsoon; Kim, Sung-Eun; Park, Jong-Hoon; Kim, Jung-Hyun; Choi, Sung-Wook
2013-03-12
This paper describes a facile method for the preparation of porous gelatin beads with uniform pore sizes using a simple fluidic device and their application as supporting materials for cell culture. An aqueous gelatin droplet containing many uniform toluene droplets, produced in the fluidic device, is dropped into liquid nitrogen for instant freezing and the small toluene droplets evolve into pores in the gelatin beads after removal of toluene and then freeze-drying. The porous gelatin beads exhibit a uniform pore size and monodisperse diameter as well as large open pores at the surface. Fluorescence microscopy images of fibroblast-loaded gelatin beads confirm the attachment and proliferation of the cells throughout the porous gelatin beads. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Multilevel image recognition using discriminative patches and kernel covariance descriptor
NASA Astrophysics Data System (ADS)
Lu, Le; Yao, Jianhua; Turkbey, Evrim; Summers, Ronald M.
2014-03-01
Computer-aided diagnosis of medical images has emerged as an important tool to objectively improve the performance, accuracy and consistency of the clinical workflow. Computerizing medical image diagnostic recognition involves three fundamental problems: where to look (i.e., where the region of interest is within the whole image/volume), image feature description/encoding, and similarity metrics for classification or matching. In this paper, we exploit the motivation, implementation and performance evaluation of task-driven iterative, discriminative image patch mining; a covariance matrix based descriptor built from intensity, gradient and spatial layout; and a log-Euclidean distance kernel for support vector machines, to address these three aspects respectively. To cope with the often visually ambiguous image patterns of the region of interest in medical diagnosis, discovery of multilabel selective discriminative patches is desired. The covariance of several image statistics summarizes their second-order interactions within an image patch and has proved to be an effective image descriptor, with low dimensionality compared to joint statistics and fast computation regardless of the patch size. We extensively evaluate two extended Gaussian kernels, using the affine-invariant Riemannian metric or the log-Euclidean metric, with support vector machines (SVM) on two medical image classification problems: degenerative disc disease (DDD) detection on cortical shell unwrapped CT maps and colitis detection on CT key images. The proposed approach is validated with promising quantitative results on these challenging tasks. Our experimental findings and discussion also unveil some interesting insights on covariance feature composition with or without spatial layout for classification and retrieval, and on different kernel constructions for SVM. This will also shed some light on future work using covariance features and kernel classification for medical image analysis.
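A minimal sketch of the two building blocks, a region covariance descriptor and the log-Euclidean distance between two such descriptors (the feature vector of intensity, gradients, and pixel coordinates follows the common region-covariance recipe and is an assumption here):

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(patch):
    """Region covariance of per-pixel features; its size depends on the
    number of features (5 here), not on the patch size."""
    h, w = patch.shape
    gy, gx = np.gradient(patch.astype(float))
    ys, xs = np.mgrid[0:h, 0:w]
    F = np.stack([patch.ravel(), gx.ravel(), gy.ravel(),
                  xs.ravel(), ys.ravel()])
    return np.cov(F) + 1e-6 * np.eye(F.shape[0])  # small ridge keeps it SPD

def log_euclidean_distance(C1, C2):
    """Distance between SPD matrices under the log-Euclidean metric."""
    diff = logm(C1) - logm(C2)
    return np.linalg.norm(diff.real, ord="fro")
```

The log-Euclidean distance can be plugged into a Gaussian kernel, exp(-d²/2s²), to give the SVM kernel over covariance descriptors.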
Verma, Prashant; Doyley, Marvin M
2017-09-01
We derived the Cramér-Rao lower bound for 2-D estimators employed in quasi-static elastography. To illustrate the theory, we modeled the 2-D point spread function as a sinc-modulated sine pulse in the axial direction and as a sinc function in the lateral direction. We compared theoretical predictions of the variance incurred in displacements and strains when quasi-static elastography was performed under varying conditions (different scanning methods, different configurations of conventional linear array imaging, and different-size kernels) with those measured from simulated or experimentally acquired data. We performed studies to illustrate the application of the derived expressions when performing vascular elastography with plane wave and compounded plane wave imaging. Standard deviations in lateral displacements were an order of magnitude higher than those in axial displacements. Additionally, the derived expressions predicted that peak performance should occur when 2% strain is applied, the same order of magnitude as observed in simulations (1%) and experiments (1%-2%). We assessed how different configurations of conventional linear array imaging (the number of active reception and transmission elements) influenced the quality of axial and lateral strain elastograms. The theoretical expressions predicted that 2-D echo tracking should be performed with wide kernels, but that the kernel length should be selected using knowledge of the magnitude of the applied strain: specifically, longer kernels for small strains (<5%) and shorter kernels for larger strains. Although the general trends of the theoretical predictions and experimental observations were similar, biases incurred during beamforming and subsample displacement estimation produced noticeable differences. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Hanft, J M; Jones, R J
1986-06-01
Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of that of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six per cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.
The Effects of Popping Popcorn Under Reduced Pressure
NASA Astrophysics Data System (ADS)
Quinn, Paul; Cooper, Amanda
2008-03-01
In our experiments, we model the popping of popcorn as an adiabatic process and develop a procedure for improving the efficiency of popcorn production. By lowering the ambient pressure during the popping process, we induce an increase in popcorn size while decreasing the number of remaining unpopped kernels. In this project we run numerous experiments using three of the most common popping devices: a movie popcorn maker, a stove pot, and a microwave. We specifically examine the effects of varying the pressure on total sample size, flake size, and waste. An empirical relationship is found between these variables and the pressure.
Hecksel, D; Anferov, V; Fitzek, M; Shahnazi, K
2010-06-01
Conventional proton therapy facilities use double scattering nozzles, which are optimized for delivery of a few fixed field sizes. Similarly, uniform scanning nozzles are commissioned for a limited number of field sizes. However, cases invariably occur where the treatment field is significantly different from these fixed field sizes. The purpose of this work was to determine the impact of the radiation field conformity to the patient-specific collimator on the secondary neutron dose equivalent. Using a WENDI-II neutron detector, the authors experimentally investigated how the neutron dose equivalent at a particular point of interest varied with different collimator sizes, while the beam spreading was kept constant. The measurements were performed for different modes of dose delivery in proton therapy, all of which are available at the Midwest Proton Radiotherapy Institute (MPRI): double scattering, uniform scanning delivering rectangular fields, and uniform scanning delivering circular fields. The authors also studied how the neutron dose equivalent changes when one changes the amplitudes of the scanned field for a fixed collimator size. The secondary neutron dose equivalent was found to decrease linearly with the collimator area for all methods of dose delivery. The relative values of the neutron dose equivalent for a collimator with a 5 cm diameter opening using 88 MeV protons were 1.0 for the double scattering field, 0.76 for the rectangular uniform field, and 0.6 for the circular uniform field. Furthermore, when single-circle wobbling was optimized for delivery of a uniform field 5 cm in diameter, the secondary neutron dose equivalent was reduced by a factor of 6 compared to the double scattering nozzle. Additionally, when the collimator size was kept constant, the neutron dose equivalent at the given point of interest increased linearly with the area of the scanned proton beam. The results of these experiments suggest that the patient-specific collimator is a significant contributor to the secondary neutron dose equivalent to a distant organ at risk. Improving conformity of the radiation field to the patient-specific collimator can significantly reduce the secondary neutron dose equivalent to the patient. Therefore, it is important to increase the number of available generic field sizes in double scattering systems as well as in uniform scanning nozzles.
Mean-Field Description of Ionic Size Effects with Non-Uniform Ionic Sizes: A Numerical Approach
Zhou, Shenggao; Wang, Zhongming; Li, Bo
2013-01-01
Ionic size effects are significant in many biological systems. Mean-field descriptions of such effects can be efficient but also challenging. When ionic sizes are different, explicit formulas in such descriptions are not available for the dependence of the ionic concentrations on the electrostatic potential, i.e., there are no explicit, Boltzmann-type distributions. This work begins with a variational formulation of the continuum electrostatics of an ionic solution with such non-uniform ionic sizes as well as multiple ionic valences. An augmented Lagrange multiplier method is then developed and implemented to numerically solve the underlying constrained optimization problem. The method is shown to be accurate and efficient, and is applied to ionic systems with non-uniform ionic sizes such as the sodium chloride solution. Extensive numerical tests demonstrate that the mean-field model and numerical method capture qualitatively some significant ionic size effects, particularly those for multivalent ionic solutions, such as the stratification of multivalent counterions near a charged surface. The ionic valence-to-volume ratio is found to be the key physical parameter in the stratification of concentrations. None of these effects are well described by the classical Poisson–Boltzmann theory, or by the generalized Poisson–Boltzmann theory that treats uniform ionic sizes. Finally, various issues such as close packing, limitations of the continuum model, and generalization of this work to molecular solvation are discussed. PMID:21929014
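The abstract names an augmented Lagrange multiplier method but gives no algorithmic detail; the sketch below shows the generic structure of that family of methods on a toy equality-constrained problem, not the paper's discretized free-energy functional. The penalty parameter mu and the BFGS inner solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, mu=10.0, iters=20, tol=1e-8):
    """Minimize f(x) subject to c(x) = 0 via augmented Lagrangian updates:
    inner unconstrained solves alternate with first-order multiplier updates."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(c(x)))
    for _ in range(iters):
        def L(z):                       # augmented Lagrangian at current multipliers
            cz = c(z)
            return f(z) - lam @ cz + 0.5 * mu * cz @ cz
        x = minimize(L, x, method="BFGS").x
        cx = c(x)
        if np.linalg.norm(cx) < tol:
            break
        lam -= mu * cx                  # multiplier update
    return x, lam

# Toy usage: minimize x^2 + y^2 subject to x + y = 1 (solution x = y = 0.5)
x_opt, _ = augmented_lagrangian(lambda v: v @ v,
                                lambda v: np.array([v[0] + v[1] - 1.0]),
                                [0.0, 0.0])
```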
Pappas, E; Maris, T G; Papadakis, A; Zacharopoulou, F; Damilakis, J; Papanikolaou, N; Gourtsoyiannis, N
2006-10-01
The aim of this work is to investigate experimentally the detector size effect on narrow beam profile measurements. Polymer gel and magnetic resonance imaging dosimetry was used for this purpose. Profile measurements (Pm(s)) of a 5 mm diameter 6 MV stereotactic beam were performed using polymer gels. Eight measurements of the profile of this narrow beam were performed using correspondingly eight different detector sizes. This was achieved using high spatial resolution (0.25 mm) two-dimensional measurements and eight different signal integration volumes A × A × slice thickness, simulating detectors of different size. "A" ranged from 0.25 to 7.5 mm, representing the detector size. The gel-derived profiles exhibited increased penumbra width with increasing detector size, for sizes >0.5 mm. By extrapolating the gel-derived profiles to zero detector size, the true profile (Pt) of the studied beam was derived. The same polymer gel data were also used to simulate a small-volume ion chamber profile measurement of the same beam, in terms of volume averaging. The comparison between these results and the corresponding small-volume chamber profile measurements performed in this study reveals that the penumbra broadening caused by both volume averaging and electron transport alterations (present in actual ion chamber profile measurements) is much more pronounced than that resulting from volume averaging alone (present in gel-derived profiles simulating ion chamber profile measurements). Therefore, not only the detector size but also its composition and tissue equivalency prove to be important factors for correct narrow beam profile measurements. Additionally, the convolution kernels related to each detector size and to the air ion chamber were calculated using the corresponding profile measurements (Pm(s)), the gel-derived true profile (Pt), and convolution theory. The response kernels of any desired detector can be derived, allowing the elimination of the errors associated with narrow beam profile measurements.
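The kernel-extraction step via convolution theory can be illustrated in one dimension: if the measured profile is the true profile convolved with the detector kernel, Pm = Pt (*) k, then k follows from regularized Fourier division. Everything below (the grid, the Gaussian true profile, the boxcar detector, and eps) is an illustrative stand-in for the paper's gel-derived data, not its actual profiles.

```python
import numpy as np

def estimate_detector_kernel(pm, pt, eps=1e-3):
    """Recover the detector kernel k from pm = pt (*) k by Wiener-style
    Fourier division; eps regularizes against noise amplification."""
    Pm, Pt = np.fft.rfft(pm), np.fft.rfft(pt)
    K = Pm * np.conj(Pt) / (np.abs(Pt) ** 2 + eps)
    return np.fft.fftshift(np.fft.irfft(K, n=len(pm)))   # centered on the grid

# Toy check: a Gaussian 'true' profile blurred by a 1 mm boxcar detector
x = np.linspace(-10.0, 10.0, 401)                        # mm, 0.05 mm grid
pt = np.exp(-x ** 2 / (2.0 * 1.5 ** 2))
box = (np.abs(x) <= 0.5).astype(float); box /= box.sum()
pm = np.convolve(pt, box, mode="same")
k_est = estimate_detector_kernel(pm, pt)                 # peaks near x = 0
```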
Preliminary CFD study of Pebble Size and its Effect on Heat Transfer in a Pebble Bed Reactor
NASA Astrophysics Data System (ADS)
Jones, Andrew; Enriquez, Christian; Spangler, Julian; Yee, Tein; Park, Jungkyu; Farfan, Eduardo
2017-11-01
In pebble bed reactors, the typical pebble diameter is 6 cm, and within each pebble are thousands of nuclear fuel kernels. However, the efficiency of the reactor does not depend solely on the number of fuel kernels within each graphite sphere, but also on the type and motion of the coolant within the voids between the spheres and the reactor itself. In this work a physical analysis of the pebble bed nuclear reactor's fluid dynamics is undertaken using computational fluid dynamics software. The primary goal of this work is to observe the relationship between different pebble diameters in an idealized alignment and the thermal transport efficiency of the reactor. The model of our idealized arrangement consists of eight stacked pebble columns fixed at the inlet of the reactor. Two pebble sizes, 4 cm and 6 cm, will be studied; helium will be supplied as coolant at a fixed flow rate of 96 kg/s, and fixed pebble surface temperatures will be used. Comparisons will then be made to evaluate the efficiency of the coolant in transporting heat for the varying pebble sizes.
Newlander, Shawn M; Chu, Alan; Sinha, Usha S; Lu, Po H; Bartzokis, George
2014-02-01
To identify regional differences in apparent diffusion coefficient (ADC) and fractional anisotropy (FA) using customized preprocessing before voxel-based analysis (VBA) in 14 normal subjects with the specific genes that decrease (apolipoprotein [APO] E ε2) and that increase (APOE ε4) the risk of Alzheimer's disease. Diffusion tensor images (DTI) acquired at 1.5 Tesla were denoised with a total variation tensor regularization algorithm before affine and nonlinear registration to generate a common reference frame for the image volumes of all subjects. Anisotropic and isotropic smoothing with varying kernel sizes was applied to the aligned data before VBA to determine regional differences between cohorts segregated by allele status. VBA on the denoised tensor data identified regions of reduced FA in APOE ε4 compared with the APOE ε2 healthy older carriers. The most consistent results were obtained using the denoised tensor and anisotropic smoothing before statistical testing. In contrast, isotropic smoothing identified regional differences for small filter sizes alone, emphasizing that this method introduces bias in FA values for higher kernel sizes. Voxel-based DTI analysis can be performed on low signal to noise ratio images to detect subtle regional differences in cohorts using the proposed preprocessing techniques. Copyright © 2013 Wiley Periodicals, Inc.
Fard Masoumi, Hamid Reza; Basri, Mahiran; Sarah Samiun, Wan; Izadiyan, Zahra; Lim, Chaw Jiang
2015-01-01
Aripiprazole is considered a third-generation antipsychotic drug with excellent therapeutic efficacy in controlling schizophrenia symptoms and was the first atypical anti-psychotic agent to be approved by the US Food and Drug Administration. Formulation of nanoemulsion-containing aripiprazole was carried out using high shear and high pressure homogenizers. A mixture experimental design was selected to optimize the composition of the nanoemulsion. A very small emulsion droplet size can provide effective encapsulation for drug delivery systems in the body. The effects of palm kernel oil ester (3–6 wt%), lecithin (2–3 wt%), Tween 80 (0.5–1 wt%), glycerol (1.5–3 wt%), and water (87–93 wt%) on the droplet size of aripiprazole nanoemulsions were investigated. The mathematical model showed that the optimum formulation for preparation of aripiprazole nanoemulsion having the desirable criteria was 3.00% of palm kernel oil ester, 2.00% of lecithin, 1.00% of Tween 80, 2.25% of glycerol, and 91.75% of water. Under the optimum formulation, the corresponding predicted response value for droplet size was 64.24 nm, which showed excellent agreement with the actual value (62.23 nm) with residual standard error <3.2%. PMID:26508853
7 CFR 810.602 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...
Hi-Res scan mode in clinical MDCT systems: Experimental assessment of spatial resolution performance
Cruz-Bastida, Juan P.; Gomez-Cardona, Daniel; Li, Ke; Sun, Heyi; Hsieh, Jiang; Szczykutowicz, Timothy P.; Chen, Guang-Hong
2016-01-01
Purpose: The introduction of a High-Resolution (Hi-Res) scan mode and another associated option that combines Hi-Res mode with the so-called High Definition (HD) reconstruction kernels (referred to as a Hi-Res/HD mode in this paper) in some multi-detector CT (MDCT) systems offers new opportunities to increase spatial resolution for some clinical applications that demand high spatial resolution. The purpose of this work was to quantify the in-plane spatial resolution along both the radial direction and tangential direction for the Hi-Res and Hi-Res/HD scan modes at different off-center positions. Methods: A technique was introduced and validated to address the signal saturation problem encountered in the attempt to quantify spatial resolution for the Hi-Res and Hi-Res/HD scan modes. Using the proposed method, the modulation transfer functions (MTFs) of a 64-slice MDCT system (Discovery CT750 HD, GE Healthcare) equipped with both Hi-Res and Hi-Res/HD modes were measured using a metal bead at nine different off-centered positions (0–16 cm with a step size of 2 cm); at each position, both conventional scans and Hi-Res scans were performed. For each type of scan and position, 80 repeated acquisitions were performed to reduce noise-induced uncertainties in the MTF measurements. A total of 15 reconstruction kernels, including eight conventional kernels and seven HD kernels, were used to reconstruct CT images of the bead. An ex vivo animal study consisting of a bone fracture model was performed to corroborate the MTF results, as the detection of this high-contrast and high frequency task is predominantly determined by spatial resolution. Images of this animal model generated by different scan modes and reconstruction kernels were qualitatively compared with the MTF results. Results: At the centered position, the use of Hi-Res mode resulted in a slight improvement in the MTF; each HD kernel generated higher spatial resolution than its counterpart conventional kernel. However, the MTF along the tangential direction of the scan field of view (SFOV) was significantly degraded at off-centered positions, yet the combined Hi-Res/HD mode reduced this azimuthal MTF degradation. Images of the animal bone fracture model confirmed the improved spatial resolution at the off-centered positions through the use of the Hi-Res mode and HD kernels. Conclusions: The Hi-Res and Hi-Res/HD scan modes improve the spatial resolution of MDCT systems at both centered and off-centered positions. PMID:27147351
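A minimal sketch of the bead-based MTF idea, assuming a background-subtracted point-source image: the MTF is the normalized magnitude of the Fourier transform of the point spread function. This omits the paper's saturation-handling technique, the 80-scan averaging, and corrections for finite bead size; radial and tangential MTFs would correspond to cuts along different axes through an off-centered bead.

```python
import numpy as np

def mtf_from_bead(psf_image, pixel_mm):
    """MTF as the normalized magnitude of the 2-D FFT of a bead image;
    the median is a crude background estimate removed before transforming."""
    psf = np.clip(psf_image - np.median(psf_image), 0, None)
    otf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    mtf = otf / otf.max()
    freqs = np.fft.fftshift(np.fft.fftfreq(psf.shape[0], d=pixel_mm))  # cycles/mm
    c = psf.shape[0] // 2
    return freqs[c:], mtf[c, c:]     # 1-D cut along one axis from DC outward
```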
USDA-ARS's Scientific Manuscript database
Durum wheat (Triticum turgidum ssp. durum) is considered unsuitable for the majority of commercial bread production because its weak gluten strength combined with flour particle size and flour starch damage after milling are not commensurate with hexaploid wheat flours. Recently a new durum cultivar...
7 CFR 51.1437 - Size classifications for halves.
Code of Federal Regulations, 2014 CFR
2014-01-01
... halves per pound shall be based upon the weight of half-kernels after all pieces, particles and dust... specified range. (d) Tolerances for pieces, particles, and dust. In order to allow for variations incident..., particles, and dust: Provided, That not more than one-third of this amount, or 5 percent, shall be allowed...
7 CFR 51.1437 - Size classifications for halves.
Code of Federal Regulations, 2013 CFR
2013-01-01
... halves per pound shall be based upon the weight of half-kernels after all pieces, particles and dust... specified range. (d) Tolerances for pieces, particles, and dust. In order to allow for variations incident..., particles, and dust: Provided, That not more than one-third of this amount, or 5 percent, shall be allowed...
Genetic analysis of kernel traits in maize-teosinte introgression populations
USDA-ARS's Scientific Manuscript database
Seed traits have been targeted by human selection during the domestication of crop species as a way to increase caloric and nutritional content of food during the transition from hunter-gather to early farming societies. The primary seed trait under selection was likely seed size/weight as it is mos...
Voronoi Cell Patterns: theoretical model and application to submonolayer growth
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2012-02-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.
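A toy 1-D version of such a fragmentation process can be simulated directly, under assumed forms for both distributions: the probability of inserting into a cell is taken proportional to cell size raised to a control parameter alpha, and a Beta-distributed split fraction stands in for the fragmentation kernel. Both choices are illustrative, not the paper's calibrated forms.

```python
import numpy as np
rng = np.random.default_rng(0)

def fragment_1d(n_points, alpha=1.0, a=2.0, b=2.0):
    """Insert points one at a time: pick a cell with probability ~ size**alpha,
    split it at a Beta(a, b)-distributed fraction (the 'fragmentation kernel').
    Returns cell sizes rescaled so the mean size is about one."""
    sizes = [1.0]
    for _ in range(n_points):
        s = np.array(sizes)
        p = s ** alpha
        i = rng.choice(len(s), p=p / p.sum())
        f = rng.beta(a, b)
        sizes[i:i + 1] = [s[i] * f, s[i] * (1.0 - f)]
    return np.array(sizes) * (n_points + 1)

cell_sizes = fragment_1d(20000)
pdf, edges = np.histogram(cell_sizes, bins=80, range=(0.0, 5.0), density=True)
```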
Zhang, Guangya; Ge, Huihua
2013-10-01
Understanding proteins adaptive to hypersaline environments and identifying them is a challenging task and would help in designing stable proteins. Here, we have systematically analyzed the normalized amino acid compositions of 2121 halophilic and 2400 non-halophilic proteins. The results showed that halophilic proteins contained more Asp at the expense of Lys, Ile, Cys and Met, contained fewer small and hydrophobic residues, and showed a large excess of acidic over basic amino acids. We then introduce a support vector machine method to discriminate halophilic from non-halophilic proteins, using a novel Pearson VII universal function based kernel. In the three validation check methods, it achieved overall accuracies of 97.7%, 91.7% and 86.9% and outperformed other machine learning algorithms. We also address the influence of protein size on prediction accuracy and found that the worse performance for small proteins might be because some significant residues (Cys and Lys) were missing from those proteins. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
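The Pearson VII universal function based kernel (PUK) has a commonly quoted closed form; the sketch below implements that form and plugs it into a precomputed-kernel SVM on toy data. The sigma and omega values, and the random stand-in for composition features, are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def puk_kernel(X, Y, sigma=1.0, omega=1.0):
    """Pearson VII universal kernel in its commonly quoted form:
    K = 1 / (1 + (2*||x - y|| * sqrt(2**(1/omega) - 1) / sigma)**2)**omega."""
    d = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    return 1.0 / (1.0 + (2.0 * d * np.sqrt(2.0 ** (1.0 / omega) - 1.0) / sigma) ** 2) ** omega

# Toy usage (amino acid composition vectors would replace X in practice)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = (X[:, 0] > 0).astype(int)
clf = SVC(kernel="precomputed").fit(puk_kernel(X, X), y)
```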
Three-phase boundary length in solid-oxide fuel cells: A mathematical model
NASA Astrophysics Data System (ADS)
Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf
A mathematical model to calculate the volume-specific three-phase boundary length in the porous composite electrodes of a solid-oxide fuel cell is presented. The model is exclusively based on geometrical considerations accounting for porosity, particle diameter, particle size distribution, and solid phase distribution. Results are presented for uniform particle size distribution as well as for non-uniform particle size distribution.
Out-of-Sample Extensions for Non-Parametric Kernel Methods.
Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang
2017-02-01
Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
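One simple flavor of such an out-of-sample extension, shown below as a hedged sketch rather than the authors' hyper-RKHS construction, regresses the learned kernel rows onto a base RBF kernel with ridge regularization, so that new points obtain predicted kernel values against the training set:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    return np.exp(-gamma * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

def extend_kernel(K_learned, X_train, lam=1e-3, gamma=0.5):
    """Regress each row of the learned (data-dependent) kernel matrix onto a
    base RBF kernel with ridge regularization; returns a function that
    predicts learned-kernel values between new points and the training set."""
    Kb = rbf(X_train, X_train, gamma)
    A = np.linalg.solve(Kb + lam * np.eye(len(X_train)), K_learned)
    def k_new(X_new):
        return rbf(X_new, X_train, gamma) @ A
    return k_new
```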
7 CFR 810.1202 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...
Self-assembly of self-limiting monodisperse supraparticles from polydisperse nanoparticles
NASA Astrophysics Data System (ADS)
Xia, Yunsheng; Nguyen, Trung Dac; Yang, Ming; Lee, Byeongdu; Santos, Aaron; Podsiadlo, Paul; Tang, Zhiyong; Glotzer, Sharon C.; Kotov, Nicholas A.
2011-09-01
Nanoparticles are known to self-assemble into larger structures through growth processes that typically occur continuously and depend on the uniformity of the individual nanoparticles. Here, we show that inorganic nanoparticles with non-uniform size distributions can spontaneously assemble into uniformly sized supraparticles with core-shell morphologies. This self-limiting growth process is governed by a balance between electrostatic repulsion and van der Waals attraction, which is aided by the broad polydispersity of the nanoparticles. The generic nature of the interactions creates flexibility in the composition, size and shape of the constituent nanoparticles, and leads to a large family of self-assembled structures, including hierarchically organized colloidal crystals.
7 CFR 810.802 - Definition of other terms.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm, respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
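A sketch of such an optimization loop using scikit-optimize's Gaussian-process minimizer; the objective here is a synthetic stand-in (a smooth surface with a minimum near 4 mm / 4 mm) for the real resample-smooth-train-evaluate pipeline, and the search ranges are illustrative.

```python
from skopt import gp_minimize
from skopt.space import Real

def cross_validated_mae(voxel_size, fwhm):
    """Hypothetical stand-in for the real pipeline: resample and smooth the
    scans, train the age-regression SVM, return cross-validated MAE. A
    synthetic response surface is used here so the example runs standalone."""
    return 5.0 + 0.05 * (voxel_size - 4.0) ** 2 + 0.04 * (fwhm - 4.0) ** 2

space = [Real(1.0, 12.0, name="voxel_size"), Real(0.0, 12.0, name="fwhm")]
result = gp_minimize(lambda p: cross_validated_mae(*p), space,
                     n_calls=30, random_state=0)
best_voxel, best_fwhm = result.x     # parameters minimizing the surrogate MAE
```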
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
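A stripped-down kernel ELM with a fixed two-kernel composite, to show where the QPSO-tuned quantities enter; in the paper the kernel weights, the kernel parameters, and the regularization constant C are all optimized by QPSO, whereas here they are plain function arguments.

```python
import numpy as np

def rbf(X, Y, g=0.5):
    return np.exp(-g * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

def poly(X, Y, d=3):
    return (X @ Y.T + 1.0) ** d

def wmk_elm_fit(X, T, weights=(0.6, 0.4), C=10.0):
    """Kernel ELM with a composite kernel K = w1*RBF + w2*poly.
    T holds one-hot targets; output weights solve (I/C + K) alpha = T."""
    K = weights[0] * rbf(X, X) + weights[1] * poly(X, X)
    alpha = np.linalg.solve(np.eye(len(X)) / C + K, T)
    def predict(Xq):
        Kq = weights[0] * rbf(Xq, X) + weights[1] * poly(Xq, X)
        return (Kq @ alpha).argmax(axis=1)
    return predict

# Toy usage with synthetic data standing in for e-nose features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
labels = (X[:, :2].sum(axis=1) > 0).astype(int)
predict = wmk_elm_fit(X, np.eye(2)[labels])
train_accuracy = (predict(X) == labels).mean()
```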
Mutually catalyzed birth of population and assets in exchange-driven growth
NASA Astrophysics Data System (ADS)
Lin, Zhenquan; Ke, Jianhong; Ye, Gaoxiang
2006-10-01
We propose an exchange-driven aggregation growth model of population and assets with mutually catalyzed birth to study the interaction between the population and assets in their exchange-driven processes. In this model, monomer (or equivalently, individual) exchange occurs between any pair of aggregates of the same species (population or assets). The rate kernels of the exchanges of population and assets are K(k,l)=Kkl and L(k,l)=Lkl, respectively, at which one monomer migrates from an aggregate of size k to another of size l. Meanwhile, an aggregate of one species can yield a new monomer by the catalysis of an arbitrary aggregate of the other species. The rate kernel of asset-catalyzed population birth is I(k,l)=Ikl^μ [and that of population-catalyzed asset birth is J(k,l)=Jkl^ν], at which an aggregate of size k gains a monomer birth when it meets a catalyst aggregate of size l. The kinetic behaviors of the population and asset aggregates are solved based on the rate equations. The evolution of the aggregate size distributions of population and assets is found to fall into one of three categories for different parameters μ and ν: (i) population (asset) aggregates evolve according to the conventional scaling form in the case of μ⩽0 (ν⩽0), (ii) population (asset) aggregates evolve according to a modified scaling form in the case of ν=0 and μ>0 (μ=0 and ν>0), and (iii) both population and asset aggregates undergo gelation transitions at a finite time in the case of μ=ν>0.
The mixing of rain with near-surface water
Dennis F. Houk
1976-01-01
Rain experiments were run with various temperature differences between the warm rain and the cool receiving water. The rain intensities were uniform and the raindrop sizes were usually uniform (2.2 mm, 3.6 mm, and 5.5 mm diameter drops). Two drop size distributions were also used.
Ni, Xinzhi; Wilson, Jeffrey P; Toews, Michael D; Buntin, G David; Lee, R Dewey; Li, Xin; Lei, Zhongren; He, Kanglai; Xu, Wenwei; Li, Xianchun; Huffaker, Alisa; Schmelz, Eric A
2014-10-01
Spatial and temporal patterns of insect damage in relation to aflatoxin contamination in a corn field with plants of uniform genetic background are not well understood. After previous examination of spatial patterns of insect damage and aflatoxin in pre-harvest corn fields, we further examined both spatial and temporal patterns of cob- and kernel-feeding insect damage, and aflatoxin level with two samplings at pre-harvest in 2008 and 2009. The feeding damage by each of the ear/kernel-feeding insects (i.e., corn earworm/fall armyworm damage on the silk/cob, and discoloration of corn kernels by stink bugs) and maize weevil population were assessed at each grid point with five ears. Sampling data showed a field edge effect in both insect damage and aflatoxin contamination in both years. Maize weevils tended toward an aggregated distribution more frequently than either corn earworm or stink bug damage in both years. The frequency of detecting aggregated distribution for aflatoxin level was less than any of the insect damage assessments. Stink bug damage and maize weevil number were more closely associated with aflatoxin level than was corn earworm damage. In addition, the indices of spatial-temporal association (χ) demonstrated that the number of maize weevils was associated between the first (4 weeks pre-harvest) and second (1 week pre-harvest) samplings in both years on all fields. In contrast, corn earworm damage between the first and second samplings from the field on the Belflower Farm, and aflatoxin level and corn earworm damage from the field on the Lang Farm were dissociated in 2009. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E
2015-01-07
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions only a few millimeters in diameter and the background was observed in LOR-PDF reconstruction than in other methods.
Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Woo, Gordon
2017-04-01
For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process, and that the spatial distribution of epicentres can be represented by a set of polygonal source zones, within which seismicity is uniform. Based on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Following the Amatrice earthquake of 24 August 2016, the subsequent damaging earthquakes in Central Italy over the following months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task; it has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic-trees for future probabilistic seismic hazard analyses at critical installations within Europe. In this paper, various salient European applications are given.
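A minimal sketch of kernel-smoothed seismicity, assuming a Gaussian kernel and a magnitude-dependent bandwidth h(m) = H * exp(c * m); the functional form and constants are illustrative assumptions, not the author's published kernel.

```python
import numpy as np

def activity_rate_density(grid_xy, quake_xy, mags, t_years, H=0.5, c=0.3):
    """Smoothed seismicity rate (events per unit area per year) at grid points.
    Each epicentre contributes a 2-D Gaussian whose width grows with magnitude,
    so large events spread their influence over a wider area."""
    lam = np.zeros(len(grid_xy))
    for (x0, y0), m in zip(quake_xy, mags):
        h = H * np.exp(c * m)
        d2 = ((grid_xy[:, 0] - x0) ** 2 + (grid_xy[:, 1] - y0) ** 2) / h ** 2
        lam += np.exp(-0.5 * d2) / (2.0 * np.pi * h ** 2)
    return lam / t_years
```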
NASA Astrophysics Data System (ADS)
Bott, Andreas; Kerkweg, Astrid; Wurzler, Sabine
A study has been made of the modification of aerosol spectra due to cloud processes and the impact of the modified aerosols on the microphysical structure of future clouds. For this purpose an entraining air parcel model with two-dimensional spectral cloud microphysics has been used. In order to treat collision/coalescence processes in the two-dimensional microphysical module, a new realistic and continuous formulation of the collection kernel has been developed. Based on experimental data, the kernel covers the entire investigated size range of aerosols, cloud drops and raindrops; that is, the kernel combines all important coalescence processes, such as the collision of cloud drops as well as the impaction scavenging of small aerosols by big raindrops. Since chemical reactions in the gas phase and in cloud drops have an important impact on the physico-chemical properties of aerosol particles, the parcel model has been extended by a chemical module describing gas phase and aqueous phase chemical reactions. However, it will be shown that in the numerical case studies presented in this paper the modification of aerosols by chemical reactions has a minor influence on the microphysical structure of future clouds. The major process yielding enhanced rain formation in a second cloud event is the production of large aerosol particles by collision/coalescence processes in the first cloud.
NASA Astrophysics Data System (ADS)
Noor, Nurazuwa Md; Xiang-ONG, Jun; Noh, Hamidun Mohd; Hamid, Noor Azlina Abdul; Kuzaiman, Salsabila; Ali, Adiwijaya
2017-11-01
The effect of including palm oil kernel shell (PKS) and palm oil fibre (POF) in concrete was investigated with respect to compressive strength and flexural strength. In addition, the effect of palm oil kernel shell on concrete water absorption was investigated. A total of 48 concrete cubes and 24 concrete prisms, with sizes of 100 mm × 100 mm × 100 mm and 100 mm × 100 mm × 500 mm respectively, were prepared. In four series of concrete mixes, coarse aggregate was replaced by 0%, 25%, 50% and 75% palm kernel shell, and each series was divided into two main groups. The first group was without POF, while the second group was mixed with 5 cm long fibres at a 0.25% POF volume fraction. All specimens were tested after 7 and 28 days of water curing for compressive strength, and after 28 days for flexural strength. The water absorption test was conducted on concrete cubes at 28 days of age. The results showed that replacement with PKS achieves lower compressive and flexural strength in comparison with conventional concrete. However, the 25% PKS replacement showed acceptable compressive strength, within the range required for structural concrete. Meanwhile, the POF, which should act as matrix reinforcement, showed no enhancement of flexural strength due to a balling effect in the concrete. As expected, water absorption increased with increasing PKS content, caused by the porous characteristics of PKS.
Optimisation of shape kernel and threshold in image-processing motion analysers.
Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G
2001-09-01
The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.
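The core cross-correlation idea can be sketched as zero-mean template matching of a circular kernel against the image; the ELITE-specific kernel parameterization and the genetic-algorithm tuning are not reproduced here, and the detection threshold is an illustrative choice.

```python
import numpy as np
from scipy.signal import correlate2d

def detect_markers(image, radius, threshold=0.7):
    """Zero-mean template matching: correlate a circular 'kernel' (the expected
    marker shape) with the image and return candidate (row, col) positions.
    A simplified stand-in for ELITE's cross-correlation stage."""
    r = int(np.ceil(radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    kernel -= kernel.mean()                 # zero-mean to reject flat background
    score = correlate2d(image - image.mean(), kernel, mode="same")
    score /= np.abs(score).max()
    return np.argwhere(score > threshold)
```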
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, suggesting that the TL1 kernel is a promising nonlinear kernel for classification tasks.
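The TL1 kernel itself is simple to state and to use with a precomputed-kernel SVM; the rho value below follows the brief's idea of a pregiven parameter, with 0.7 times the input dimension used as an illustrative setting.

```python
import numpy as np
from sklearn.svm import SVC

def tl1_kernel(X, Y, rho):
    """Truncated L1 distance kernel: K(u, v) = max(rho - ||u - v||_1, 0)."""
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)
    return np.maximum(rho - d1, 0.0)

# Toy usage; the Gram matrix is indefinite but libsvm still trains in practice.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(80, 5))
y = (np.abs(X).sum(axis=1) > 2.5).astype(int)
rho = 0.7 * X.shape[1]
clf = SVC(kernel="precomputed").fit(tl1_kernel(X, X, rho), y)
preds = clf.predict(tl1_kernel(X, X, rho))
```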
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Han, B; Xing, L
2016-06-15
Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and off-axis kernels representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response reproduced the entrance fluence with >99.5% fidelity while reducing computational overhead by >60%. 3D dose was reconstructed for 10^6 voxels covering the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory based deliveries. This research was partially supported by Varian.
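For orientation, the classic Richardson-Lucy update that the abstract's method modifies looks as follows in its plain, spatially-invariant form; handling the spatially-variant basis PSFs described above is the paper's contribution and is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(measured, psf, iters=50):
    """Plain Richardson-Lucy deconvolution with one spatially-invariant PSF:
    iteratively rescale the estimate by the back-projected ratio of the
    measurement to the current forward projection."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    est = np.full(measured.shape, float(measured.mean()))
    for _ in range(iters):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        est = est * fftconvolve(ratio, psf_mirror, mode="same")
    return est
```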
An oscillatory kernel function method for lifting surfaces in mixed transonic flow
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1974-01-01
A study was conducted on the use of combined subsonic and supersonic linear theory to obtain economical and yet realistic solutions to unsteady transonic flow problems. With some modification, existing linear theory methods were combined into a single computer program. The method was applied to problems for which measured steady Mach number distributions and unsteady pressure distributions were available. By comparing theory and experiment, the transonic method showed a significant improvement over uniform flow methods. The results also indicated that more exact local Mach number effects and normal shock boundary conditions on the perturbation potential were needed. The validity of these improvements was demonstrated by application to steady flow.
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
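A compact sketch of kernel PCA with a fractional power polynomial model, keeping only positive-eigenvalue components as the paper prescribes; applying the fractional power to the magnitude and restoring the sign is a real-valuedness workaround assumed here, not necessarily the paper's exact convention.

```python
import numpy as np

def fpp_kernel(X, Y, d=0.8):
    """Fractional power polynomial model: sign(x.y) * |x.y|**d with 0 < d < 1.
    The resulting Gram matrix is not necessarily positive semidefinite."""
    S = X @ Y.T
    return np.sign(S) * np.abs(S) ** d

def kernel_pca_positive(K, n_components=10):
    """Kernel PCA that keeps only eigenvectors with positive eigenvalues."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # double-center the Gram matrix
    w, V = np.linalg.eigh(Kc)
    keep = w > 1e-10
    w, V = w[keep][::-1], V[:, keep][:, ::-1]    # positive eigenvalues, descending
    m = min(n_components, len(w))
    return V[:, :m] * np.sqrt(w[:m])             # projections of training points

# Toy usage (Gabor feature vectors would replace X in practice)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
features = kernel_pca_positive(fpp_kernel(X, X), n_components=5)
```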
G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.
Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H
2009-01-01
Structured data, including sets, sequences, trees and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contains the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty being indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and its neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and to enable fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and index structure are scalable to large databases, with smaller indexing size, faster index construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
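A toy version of the hashing idea: each node contributes a hashable local feature (its label plus sorted neighbor labels), counts of features live in a hash table, and the kernel is the inner product of the count tables. The graph encoding and feature definition are simplified stand-ins for G-hash's actual node feature vectors.

```python
from collections import Counter

def local_features(graph):
    """One hashable local feature per node: (node label, sorted neighbor labels).
    graph = {node: (label, [neighbor nodes])}, a simplified encoding."""
    feats = Counter()
    for node, (label, nbrs) in graph.items():
        feats[(label, tuple(sorted(graph[n][0] for n in nbrs)))] += 1
    return feats

def graph_kernel(g1, g2):
    """Kernel value = inner product of hashed feature counts; the same hash
    table supports fast lookup during similarity queries."""
    c1, c2 = local_features(g1), local_features(g2)
    return sum(c1[f] * c2[f] for f in c1.keys() & c2.keys())

# Toy usage on two tiny labeled graphs
g1 = {0: ("C", [1]), 1: ("O", [0, 2]), 2: ("C", [1])}
g2 = {0: ("C", [1]), 1: ("O", [0])}
similarity = graph_kernel(g1, g2)
```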
NASA Astrophysics Data System (ADS)
Bauer, Rita A.; Kelemen, Lóránd; Nakano, Masami; Totsuka, Atsushi; Zrínyi, Miklós
2015-10-01
We present the first direct observation of electric-field-induced rotation of epoxy-based polymer rotors. Polymer disks, hollow cylinders and gears with dimensions of a few micrometers were prepared as rotors. Electrorotation of these sub-millimeter-sized tools was studied under a uniform dc electric field. The effects of shape, size and thickness were investigated. The novel epoxy-based micro devices show intense spinning in a uniform dc electric field. The rotational speed of the micron-sized polymer rotors can be conveniently tuned over a wide range (between 300 and 3000 rpm) by the electric field intensity, opening new perspectives for their use in several MEMS applications.
NASA Astrophysics Data System (ADS)
Amirnasr, Elham
It is widely recognized that nonwoven basis weight non-uniformity affects various properties of nonwovens; however, few studies can be found on this topic. The development of uniformity definitions and measurement methods, and the study of their impact on web properties such as filtration performance and air permeability, would be beneficial both in industrial applications and in academia: they can serve as quality control tools and provide insights into nonwoven behaviors that cannot be explained by average values alone. We therefore pursued the development of an optical analytical tool for quantifying nonwoven web basis weight uniformity. The quadrant method and clustering analysis were utilized in an image analysis scheme to help define "uniformity" and its spatial variation. Implementing the quadrant method in an image analysis system allows the establishment of a uniformity index that quantifies the degree of uniformity. Clustering analysis was also modified and verified using uniform and random simulated images with known parameters, and the number of clusters and cluster properties such as size, membership, and density were determined. We then used these measurement methods to evaluate the uniformity of nonwovens produced with different processes and investigated the impact of uniformity on filtration and permeability. The results show that the uniformity index computed from the quadrant method spans a useful range for the non-uniformity of nonwoven webs. Clustering analysis was also applied to reference nonwovens with known visual uniformity; the results suggest that cluster size is a promising uniformity parameter, with non-uniform nonwovens showing larger cluster sizes than uniform ones. To relate web properties to the uniformity index, the filtration properties, air permeability, solidity and uniformity index of meltblown and spunbond samples were measured. The filtration tests show some deviation between theoretical and experimental filtration efficiency when different measures of fiber diameter are considered; this deviation can arise from variation in basis weight non-uniformity, so an appropriate theory is required to predict how filtration efficiency varies with the non-uniformity of nonwoven filter media. The air permeability tests showed a relationship between the quadrant-method uniformity index and the measured properties: air permeability decreases as the uniformity index of the nonwoven web increases.
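A minimal sketch of a quadrat-style uniformity index of the kind described above, assuming a grayscale basis-weight image; the block size and the use of the coefficient of variation are our choices, and the thesis' exact index definition may differ:

```python
import numpy as np

def uniformity_index(img, block=32):
    # Tile the basis-weight image into block x block quadrats and report the
    # coefficient of variation of the quadrat means (0 = perfectly uniform).
    h, w = (d // block * block for d in img.shape)
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    means = tiles.mean(axis=(1, 3))
    return means.std() / means.mean()
```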
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katalenich, Jeffrey A.; Kitchen, Brian B.; Pierson, Bruce D.
Cerium dioxide microspheres with uniform diameters between 65 – 211 µm were fabricated using internal gelation sol-gel methods. Although uniform microspheres are produced for nuclear fuel applications with diameters above 300 µm, sol-gel microspheres with diameters of 50 – 200 µm have historically been made by emulsion techniques and had poor size uniformity [1, 2]. An internal gelation, sol-gel apparatus was designed and constructed to accommodate the production of small, uniform microspheres whereby cerium-containing solutions were dispersed into flowing silicone oil and heated in a gelation column to initiate solidification [3, 4]. Problems with premature feed gelation and microsphere coalescence were overcome by equipment modifications unique among known internal gelation setups. Microspheres were fabricated and sized in batches as a function of dispersing needle diameter and silicone oil flow rate in the two-fluid nozzle in order to determine the range of sizes possible and the corresponding degree of monodispersity. Initial experiments with poor size uniformity were linked to microsphere coalescence in the gelation column prior to solidification as well as excessive flow rates for the cerium feed solution. Average diameter standard deviations as low as 2.23% were observed after optimization of flow rates and minimization of coalescence reactions.
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods seek the single kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and in the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then a kernel recommendation model is constructed on the generated meta-knowledge data base with a multi-label classification method. Finally, appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
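The recommendation step reduces to ordinary multi-label classification over meta-features. A minimal sketch, with entirely hypothetical meta-features, kernel columns, and values (scikit-learn used for brevity):

```python
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier

# Meta-knowledge base: each row describes one data set by meta-features
# (here: n_samples, n_features, class balance); each column of Y flags one
# kernel as "applicable" within the accuracy/CPU-time tolerance.
X_meta = np.array([[1000.0, 20.0, 0.50],
                   [200.0, 5.0, 0.90],
                   [50000.0, 100.0, 0.10]])
Y_applicable = np.array([[1, 0, 1],   # columns: e.g. RBF, polynomial, linear
                         [1, 1, 0],
                         [0, 1, 1]])
rec = MultiOutputClassifier(KNeighborsClassifier(n_neighbors=1))
rec.fit(X_meta, Y_applicable)
print(rec.predict([[800.0, 15.0, 0.45]]))  # kernel flags for a new data set
```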
USDA-ARS?s Scientific Manuscript database
Effective mass selection tools are needed to enrich hard winter wheat breeding populations from red wheat × white wheat crosses while maintaining large population sizes in early breeding generations. Tools also are needed to select for white-seeded genotypes or to eliminate white-seeded genotypes wh...
Weighted graph cuts without eigenvectors: a multilevel approach.
Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian
2007-11-01
A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
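A compact sketch of weighted kernel k-means on a precomputed Gram matrix, the building block the multilevel algorithm optimizes in place of eigenvectors; the distance expansion uses kernel entries only (initialization and names are ours):

```python
import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iter=100, seed=0):
    # With a suitable kernel K and weights w, the objective below matches
    # graph-cut criteria such as normalized cut (per the paper's equivalence).
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, k, size=n)
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            m = labels == c
            sw = w[m].sum()
            if sw == 0:
                continue  # empty cluster: leave its distances at infinity
            # ||phi(x_i) - mean_c||^2 expanded in kernel entries only
            cross = K[:, m] @ w[m] / sw
            inner = w[m] @ K[np.ix_(m, m)] @ w[m] / sw ** 2
            dist[:, c] = diag - 2.0 * cross + inner
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```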
NASA Technical Reports Server (NTRS)
Nonhebel, H. M.; Bandurski, R. S. (Principal Investigator)
1986-01-01
Oxindole-3-acetic acid is the principal catabolite of indole-3-acetic acid in Zea mays seedlings. In this paper, measurements of the turnover of oxindole-3-acetic acid are presented and used to calculate the rate of indole-3-acetic acid oxidation. [3H]Oxindole-3-acetic acid was applied to the endosperm of Zea mays seedlings and allowed to equilibrate for 24 h before the start of the experiment. The subsequent decrease in its specific activity was used to calculate the turnover rate. The average half-life of oxindole-3-acetic acid was found to be 30 h in the shoots and 35 h in the kernels. Using previously published values of the pool sizes of oxindole-3-acetic acid in shoots and kernels from seedlings of the same age and variety, grown under the same conditions, the rate of indole-3-acetic acid oxidation was calculated to be 1.1 pmol plant-1 h-1 in the shoots and 7.1 pmol plant-1 h-1 in the kernels.
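The half-life-to-rate arithmetic assumes first-order turnover, rate = pool size × ln 2 / t½. The pool sizes below are back-calculated from the reported rates and half-lives purely to illustrate the relationship; they are not the published pool-size data:

```python
import math

def oxidation_rate(pool_pmol_per_plant, t_half_h):
    # First-order turnover: rate = pool * ln(2) / half-life
    return pool_pmol_per_plant * math.log(2) / t_half_h

print(round(oxidation_rate(47.6, 30), 1))   # ~1.1 pmol plant^-1 h^-1 (shoots)
print(round(oxidation_rate(358.5, 35), 1))  # ~7.1 pmol plant^-1 h^-1 (kernels)
```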
Image re-sampling detection through a novel interpolation kernel.
Hilal, Alaa
2018-06-01
Image re-sampling, involved in re-size and rotation transformations, is an essential building block in typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present two original contributions in this paper. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes a minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
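The paper's exact kernel expression is not reproduced in the abstract; the stand-in below is only a plausible windowed, Gaussian-damped sinusoid exposing the same knobs (amplitude, angular frequency, phase, standard deviation, duration), to show how such a five-parameter family can be fitted to imitate common interpolators:

```python
import numpy as np

def interp_kernel(t, A, omega, phi, sigma, T):
    # Hypothetical parametric form, not the paper's: a Gaussian-damped
    # sinusoid truncated to |t| <= T.
    w = A * np.cos(omega * t + phi) * np.exp(-(t ** 2) / (2.0 * sigma ** 2))
    return np.where(np.abs(t) <= T, w, 0.0)
```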
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
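For reference, the Nyström step itself is short. A sketch with a Gaussian kernel, where the landmark points would ideally be the (kernel) k-means centers argued for above; helper names are ours:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Gaussian kernel matrix between row-vector sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, landmarks):
    # Rank-m approximation K_hat = C W^+ C^T from an n x m cross-kernel
    # block C and the m x m landmark block W.
    C = rbf(X, landmarks)
    W = rbf(landmarks, landmarks)
    return C @ np.linalg.pinv(W) @ C.T
```

Any k-means routine run on X is a common way to obtain the landmarks; for the Gaussian kernel this is a frequently used surrogate for clustering in kernel space.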
High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors
NASA Technical Reports Server (NTRS)
NguyenKobayashi, Kayla; Zheng, Jason X.; He, Yutao; Shah, Biren N.
2011-01-01
Exponential growth in microelectronics technology such as field-programmable gate arrays (FPGAs) has enabled high-performance spaceborne instruments with increasing onboard data processing capabilities. As a commonly used digital signal processing (DSP) building block, the fast Fourier transform (FFT) has been of great interest in onboard data processing applications, which need to strike a reasonable balance between high performance (throughput, block size, etc.) and low resource usage (power, silicon footprint, etc.). It is also desirable for a single design to be reusable and adaptable to instruments with different requirements. The Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture was developed, in which the high-throughput benefits of the parallel FFT structure and the low resource usage of Singleton's single-butterfly method are exploited. The result is a wide-kernel, multipass, adaptive FFT architecture. The 32K-point MPWK-FFT architecture includes 32 radix-2 butterflies, 64 FIFOs to store the real inputs, 64 FIFOs to store the imaginary inputs, complex twiddle-factor storage, and FIFO logic to route the outputs to the correct FIFO. The inputs are stored sequentially in the FIFOs, and the outputs of each butterfly are written sequentially first into the even FIFO, then the odd FIFO. Because of the order in which the outputs are written into the FIFOs, the depth of the even FIFOs, 768 each, is 1.5 times that of the odd FIFOs, 512 each. The total memory needed for data storage, assuming that each sample is 36 bits, is 2.95 Mbits. The twiddle factors are stored in internal ROM inside the FPGA for fast access time; the total memory size to store them is 589.9 Kbits. This FFT structure combines the high throughput of parallel FFT kernels and the low resource usage of multi-pass FFT kernels with the desired adaptability. Space instrument missions that need onboard FFT capabilities, such as the proposed DESDynI, SWOT (Surface Water Ocean Topography), and Europa sounding radar missions, would greatly benefit from this technology with significant reductions in non-recurring cost and risk.
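The quoted 2.95 Mbits for the data FIFOs can be cross-checked from the stated figures; the even/odd accounting across the real and imaginary parts below is our reading of the abstract:

```python
# 32 even FIFOs (depth 768) and 32 odd FIFOs (depth 512) for each of the
# real and imaginary parts, at 36 bits per sample.
even_fifos, even_depth = 64, 768
odd_fifos, odd_depth = 64, 512
bits_per_sample = 36
total = (even_fifos * even_depth + odd_fifos * odd_depth) * bits_per_sample
print(total / 1e6)  # 2.94912 -> the quoted ~2.95 Mbits
```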
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blue, C.A.; Sikka, V.K.; Chun, Jung-Hoon
1997-04-01
The uniform-droplet process is a new method of liquid-metal atomization that results in single droplets that can be used to produce mono-size powders or sprayed onto substrates to produce near-net shapes with tailored microstructure. The mono-sized powder-production capability of the uniform-droplet process also has the potential of permitting engineered powder blends to produce components of controlled porosity. Metal and alloy powders are commercially produced by at least three different methods: gas atomization, water atomization, and rotating disk. All three methods produce powders with a broad range of sizes and only a very small yield of fine powders. An economic analysis has shown the process to have the potential of reducing capital cost by 50% and operating cost by 37.5% when applied to powder making. For the spray-forming process, a 25% savings is expected in both capital and operating costs. The project is jointly carried out at the Massachusetts Institute of Technology (MIT), Tufts University, and Oak Ridge National Laboratory (ORNL). Preliminary interactions with both finished-part and powder producers have shown a strong interest in the uniform-droplet process. Systematic studies are being conducted to optimize the process parameters, understand the solidification of droplets and spray deposits, and develop a uniform-droplet-system (UDS) apparatus appropriate for processing engineering alloys.
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and the Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel on the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.
7 CFR 810.2202 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...
7 CFR 981.8 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
7 CFR 51.1415 - Inedible kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
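The computational virtue being exploited is that circulant matrices are diagonalized by the FFT. A one-level sketch (the paper uses multilevel circulants; names and the regularized system are illustrative):

```python
import numpy as np

def circulant_solve(c, y, lam=1e-3):
    # C is the circulant matrix whose first column is c. Since the DFT
    # diagonalizes C, (C + lam*I) a = y is solved in O(n log n) rather than
    # the O(n^3) a dense kernel matrix would require.
    eig = np.fft.fft(c)                      # eigenvalues of C
    return np.real(np.fft.ifft(np.fft.fft(y) / (eig + lam)))
```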
Song, Minhang; Zeng, Lingyan; Chen, Zhichao; Li, Zhengqi; Zhu, Qunyi; Kuang, Min
2016-02-02
To solve water wall overheating in the lower furnace and further reduce NOx emissions and carbon in fly ash, the previously proposed multiple injection and multiple staging combustion (MIMSC) technology was improved in three aspects: (1) along the furnace arch breadth, changing the previously centralized 12 burner groups into a more uniform pattern with 24 burners; (2) increasing the mass ratio of pulverized coal in the fuel-rich flow to that in the fuel-lean flow from 6:4 to 9:1; (3) reducing the arch-air momentum by 23% and increasing the tertiary-air momentum by 24%. Industrial-size measurements (i.e., adjusting the overfire air (OFA) damper opening over 20-70%) showed that, compared with the prior MIMSC technology, the ignition distance of the fuel-rich coal/air flow shortened by around 1 m. The gas temperature in the lower furnace was symmetric and higher, the flame kernel moved upward and therefore lowered the temperature in the near-wall region of the furnace hopper by about 400 °C, and the water wall overheating disappeared completely. Under the optimal OFA damper opening (i.e., 55%), NOx emissions and carbon in fly ash attained levels of 589 mg/m(3) at 6% O2 and 6.18%, respectively, significant reductions of 33% and 37%.
Unconventional protein sources: apricot seed kernels.
Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M
1981-09-01
Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to assess them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels due to low food consumption caused by their bitterness, although no loss in weight occurred. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels; the Net Protein Ratio values for the last two kernels were nearly equal.
Du, Yiping P; Jin, Zhaoyang
2009-10-01
To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
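A minimal sketch of the local-statistics ingredients, assuming a complex 3-D volume; the threshold is an illustrative placeholder, and the paper's full multivariate measure also folds in magnitude statistics, phase-distribution uniformity, and background phase correction:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(x, size=3):
    # Standard deviation over a size^3 neighborhood via var = E[x^2] - E[x]^2,
    # computed with box filters.
    m = uniform_filter(x, size)
    m2 = uniform_filter(x * x, size)
    return np.sqrt(np.maximum(m2 - m * m, 0.0))

def tissue_mask(img, phase_sigma_max=0.5):
    # Air voxels carry nearly random phase, so the local std of the
    # first-order phase difference is high there and low in tissue.
    dphi = np.angle(img[1:] * np.conj(img[:-1]))       # first-order phase diff
    dphi = np.pad(dphi, ((0, 1), (0, 0), (0, 0)), mode="edge")
    return local_std(dphi, 3) < phase_sigma_max
```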
Effect of Aspergillus niger xylanase on dough characteristics and bread quality attributes.
Ahmad, Zulfiqar; Butt, Masood Sadiq; Ahmed, Anwaar; Riaz, Muhammad; Sabir, Syed Mubashar; Farooq, Umar; Rehman, Fazal Ur
2014-10-01
The present study was conducted to investigate the impact of xylanase produced by Aspergillus niger, applied at different stages of the bread-making process (during tempering of wheat kernels and during dough mixing), on dough quality characteristics (dryness, stiffness, elasticity, extensibility, and coherency) and bread quality parameters (volume, specific volume, density, moisture retention, and sensory attributes). Different doses (200, 400, 600, 800 and 1,000 IU) of purified enzyme were applied in parallel to 1 kg of wheat grains during tempering and to 1 kg of straight-grade flour during dough mixing. The samples of wheat kernels were agitated at intervals for uniformity of tempering. After milling and dough making, both types of flour (enzyme-treated during tempering or during mixing) showed improved dough characteristics, but the improvement was more prominent in the samples treated during tempering. Moreover, xylanase decreased the dryness and stiffness of the dough, increased its elasticity, extensibility and coherency, increased loaf volume and decreased bread density. Xylanase treatments also resulted in higher moisture retention and improved sensory attributes of the bread. From the results, it is concluded that dough characteristics and bread quality improved significantly more in response to enzyme treatment during tempering than to application during mixing.
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Samimi, M.; Azami, A. R.
2007-02-01
In this paper, an application of the reproducing kernel particle method (RKPM) to the plasticity behavior of pressure-sensitive materials is presented. The RKPM technique is implemented in a large deformation analysis of the powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions, and the essential boundary conditions are enforced by the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed for the numerical simulation of pressure-sensitive material. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and the hydrostatic axis. The moving cap expands in stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations demonstrate the applicability of the algorithm to modeling powder forming processes, and the results are compared with those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
An introduction to kernel-based learning algorithms.
Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B
2001-01-01
This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
7 CFR 981.408 - Inedible kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
Design of CT reconstruction kernel specifically for clinical lung imaging
NASA Astrophysics Data System (ADS)
Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.
2005-04-01
In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new recon kernel. This new kernel consisted of the combination of a low-pass and a high-pass kernel to produce a new reconstruction kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
Quality changes in macadamia kernel between harvest and farm-gate.
Walton, David A; Wallace, Helen M
2011-02-01
Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.
A new discriminative kernel from probabilistic models.
Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert
2002-10-01
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
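For concreteness, the Fisher kernel baseline can be written down for a toy Gaussian model; the TOP kernel replaces the marginal log-likelihood score with tangent vectors of posterior log-odds (the model and values below are illustrative):

```python
import numpy as np

mu, sigma = 0.0, 1.0   # a toy fitted generative model N(mu, sigma^2)

def score(x):
    # Fisher score: gradient of log N(x | mu, sigma^2) w.r.t. (mu, sigma)
    return np.array([(x - mu) / sigma ** 2,
                     ((x - mu) ** 2 - sigma ** 2) / sigma ** 3])

inv_I = np.diag([sigma ** 2, sigma ** 2 / 2.0])  # inverse Fisher information

def fisher_kernel(x1, x2):
    # k(x, x') = U_x^T I^{-1} U_x'
    return score(x1) @ inv_I @ score(x2)

print(fisher_kernel(0.5, -1.2))
```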
Automated thinning increases uniformity of in-row spacing and plant size in romaine lettuce
USDA-ARS?s Scientific Manuscript database
Low availability and high cost of farm hand labor make automated thinners a faster and cheaper alternative to hand thinning in lettuce (Lactuca sativa L.). However, the effects of this new technology on uniformity of plant spacing and size as well as crop yield are not proven. Three experiments wer...
A Comparison of Uniform DIF Effect Size Estimators under the MIMIC and Rasch Models
ERIC Educational Resources Information Center
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D.
2013-01-01
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
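The batch NPT step that INPT builds on can be sketched in a few lines: explicit coordinates are read off the eigendecomposition of the kernel matrix (uncentered here, consistent with the observation that centerization can be dropped); the tolerance is ours:

```python
import numpy as np

def npt_coordinates(K, tol=1e-10):
    # Nonlinear projection trick: coordinates X with X @ X.T ~= K, from the
    # positive-eigenvalue part of the kernel matrix.
    w, V = np.linalg.eigh(K)
    keep = w > tol
    return V[:, keep] * np.sqrt(w[keep])
```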
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Jie; Wang Yuming; Liu Yang, E-mail: jzhang7@gmu.ed
We have developed a computational software system to automate the process of identifying solar active regions (ARs) and quantifying their physical properties based on high-resolution synoptic magnetograms constructed from Michelson Doppler Imager (MDI; on board the SOHO spacecraft) images from 1996 to 2008. The system, based on morphological analysis and intensity thresholding, has four functional modules: (1) intensity segmentation to obtain kernel pixels, (2) a morphological opening operation to erase small kernels, which effectively removes ephemeral regions and magnetic fragments in decayed ARs, (3) region growing to extend kernels to full AR size, and (4) a morphological closing operation to merge/group regions with a small spatial gap. We calculate the basic physical parameters of the 1730 ARs identified by the automated system. The mean and maximum magnetic flux of individual ARs are 1.67 x 10^22 Mx and 1.97 x 10^23 Mx, while those per Carrington rotation are 1.83 x 10^23 Mx and 6.96 x 10^23 Mx, respectively. The frequency distributions of ARs with respect to both area size and magnetic flux follow a log-normal function. However, when we decrease the detection thresholds and thus increase the number of detected ARs, the frequency distribution largely follows a power-law function. We also find that the equatorward drifting motion of the AR bands with solar cycle can be described by a linear function superposed with intermittent reverse driftings. The average drifting speed over one solar cycle is 1.83° ± 0.04° yr^-1, or 0.708 ± 0.015 m s^-1.
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
Bueso, Francisco; Sosa, Italo; Chun, Roldan; Pineda, Renan
2016-01-01
Jatropha curcas L. (Jatropha) is believed to have originated from Mexico and Central America. So far, characterization efforts have focused on Asia, Africa and Mexico, and non-toxic, low phorbol ester (PE) varieties have been found only in Mexico. Differences in PE content in seeds and their structural components, crude oil and cake from Jatropha provenances cultivated in Central and South America were evaluated. Seeds were dehulled, and kernels were separated into tegmen, cotyledons and embryo for PE quantitation by RP-HPLC. Crude oil and cake PE contents were also measured. No phenotypic departures in seed size and structure were observed among Jatropha cultivated in Central and South America compared to provenances from Mexico, Asia and Africa. Cotyledons comprised 96.2-97.5 %, tegmen 1.6-2.4 % and embryo 0.9-1.4 % of the dehulled kernel. The total PE content of all nine provenances categorized them as toxic. Significant differences in kernel PE content were observed among provenances from Mexico, Central and South America (P < 0.01), with the Mexican provenance the highest (7.6 mg/g) and Cabo Verde the lowest (2.57 mg/g). All accessions had >95 % of PEs concentrated in the cotyledons, 0.5-3 % in the tegmen and 0.5-1 % in the embryo. Over 60 % of the total PE in dehulled kernels accumulated in the crude oil, while 35-40 % remained in the cake after extraction. Low phenotypic variability in seed physical and structural traits and PE content was observed among provenances from Latin America. Very high-PE provenances with potential as biopesticides were found in Central America. No PE-free, edible Jatropha that could be used for human consumption and feedstock was found among provenances currently cultivated in Central America and Brazil. Furthermore, the dehulled kernel structural parts as well as the crude oil and cake contained toxic PE levels.
Generation of a novel phase-space-based cylindrical dose kernel for IMRT optimization.
Zhong, Hualiang; Chetty, Indrin J
2012-05-01
Improving dose calculation accuracy is crucial in intensity-modulated radiation therapy (IMRT). We have developed a method for generating a phase-space-based dose kernel for IMRT planning of lung cancer patients. Particle transport in the linear accelerator treatment head of a 21EX, 6 MV photon beam (Varian Medical Systems, Palo Alto, CA) was simulated using the EGSnrc/BEAMnrc code system. The phase space information was recorded under the secondary jaws. Each particle in the phase space file was associated with a beamlet whose index was calculated and saved in the particle's LATCH variable. The DOSXYZnrc code was modified to accumulate the energy deposited by each particle based on its beamlet index. Furthermore, the central axis of each beamlet was calculated from the orientation of all the particles in this beamlet. A cylinder was then defined around the central axis so that only the energy deposited within the cylinder was counted. A look-up table was established for each cylinder during the tallying process. The efficiency and accuracy of the cylindrical beamlet energy deposition approach was evaluated using a treatment plan developed on a simulated lung phantom. Profile and percentage depth doses computed in a water phantom for an open, square field size were within 1.5% of measurements. Dose optimized with the cylindrical dose kernel was found to be within 0.6% of that computed with the nontruncated 3D kernel. The cylindrical truncation reduced optimization time by approximately 80%. A method for generating a phase-space-based dose kernel, using a truncated cylinder for scoring dose, in beamlet-based optimization of lung treatment planning was developed and found to be in good agreement with the standard, nontruncated scoring approach. Compared to previous techniques, our method significantly reduces computational time and memory requirements, which may be useful for Monte-Carlo-based 4D IMRT or IMAT treatment planning.
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-12-01
Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but also may lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows getting the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a limited number of particles.
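A 1-D sketch of the two KDE ingredients described above, with Gaussian kernels and hypothetical bandwidths: reconstructing concentrations from particle positions, and the co-location density obtained by combining the kernels of two potentially reactive particles (a Gaussian in their separation with variance ha² + hb²):

```python
import numpy as np

def kde_concentration(positions, grid, h, mol_per_particle=1.0):
    # Spread each particle's molecules over the grid with a Gaussian kernel
    # of bandwidth h instead of binning them at a point.
    d = (grid[:, None] - positions[None, :]) / h
    dens = np.exp(-0.5 * d ** 2).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))
    return mol_per_particle * dens

def colocation_density(xa, xb, ha, hb):
    # Density that molecules of particles A and B co-locate; convolving the
    # two Gaussian kernels gives variance ha^2 + hb^2.
    s2 = ha ** 2 + hb ** 2
    return np.exp(-0.5 * (xa - xb) ** 2 / s2) / np.sqrt(2.0 * np.pi * s2)
```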
Conducting Meta-Analyses Based on p Values
van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.
2016-01-01
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
The effect of precursor types on the magnetic properties of Y-type hexa-ferrite composite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Chin Mo; Na, Eunhye; Kim, Ingyu
2015-05-07
With a magnetic composite including uniform magnetic particles, we expect to realize good high-frequency soft magnetic properties. We produced needle-like goethite (α-FeOOH) nanoparticles with nearly uniform diameter and length of 20 and 500 nm. Zn-doped Y-type hexa-ferrite samples were prepared by the solid state reaction method using the uniform goethite and non-uniform hematite (Fe2O3) with size of <1 μm, respectively. The micrographs observed by scanning electron microscopy show that more uniform hexagonal plates are observed in the ZYG-sample (Zn-doped Y-type hexa-ferrite prepared with uniform goethite) than in the ZYH-sample (Zn-doped Y-type hexa-ferrite prepared with non-uniform hematite). The permeability (μ′) and loss tangent (δ) at 2 GHz are 2.31 and 0.07 in the ZYG-sample and 2.0 and 0.07 in the ZYH-sample, respectively. We observe that the permeability and loss tangent are strongly related to particle size and uniformity, based on nucleation, growth, and two magnetizing mechanisms: spin rotation and domain wall motion.
Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)
NASA Astrophysics Data System (ADS)
Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.
2016-08-01
Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS can be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.
NASA Astrophysics Data System (ADS)
Zhao, Huaqing
There are two major objectives of this thesis work. One is to study theoretically the fracture and fatigue behavior of both homogeneous and functionally graded materials, with or without crack bridging. The other is to further develop the singular integral equation approach for solving mixed boundary value problems. The newly developed functionally graded materials (FGMs) have attracted considerable research interest as candidate materials for structural applications ranging from aerospace to automobiles to manufacturing. From the mechanics viewpoint, the unique feature of FGMs is that their resistance to deformation, fracture and damage varies spatially. In order to guide the microstructure selection and the design and performance assessment of components made of functionally graded materials, in this thesis work a series of theoretical studies has been carried out on the mode I stress intensity factors and crack opening displacements for FGMs with different combinations of geometry and material under various loading conditions, including: (1) a functionally graded layer under uniform strain, far-field pure bending and far-field axial loading, (2) a functionally graded coating on an infinite substrate under uniform strain, and (3) a functionally graded coating on a finite substrate under uniform strain, far-field pure bending and far-field axial loading. For solving crack problems in homogeneous and non-homogeneous materials, a very powerful singular integral equation (SIE) method has been developed since the 1960s by Erdogan and associates to solve mixed boundary value problems. However, some of the kernel functions developed earlier are incomplete and possibly erroneous. In this thesis work, mode I fracture problems in a homogeneous strip are reformulated and accurate singular Cauchy-type kernels are derived. Very good convergence rates and consistency with standard data are achieved. Other kernel functions are subsequently developed for mode I fracture in functionally graded materials. This work provides a solid foundation for further applications of the singular integral equation approach to fracture and fatigue problems in advanced composites. The concept of crack bridging is a unifying theory for fracture at various length scales, from atomic cleavage to the rupture of concrete structures. However, most previous studies are limited to small-scale bridging analyses, although large-scale bridging conditions prevail in engineering materials. In this work, a large-scale bridging analysis is included within the framework of the singular integral equation approach. This allows us to study fracture, fatigue and toughening mechanisms in advanced materials with crack bridging; as an example, the fatigue crack growth of grain-bridging ceramics is studied. With the advent of composite materials technology, more complex material microstructures are being introduced, and more mechanics issues such as inhomogeneity and nonlinearity come into play. Improved mathematical and numerical tools need to be developed to allow theoretical modeling of these materials. This thesis work is an attempt to meet these challenges by contributing to both micromechanics modeling and applied mathematics, and it sets the stage for further investigations of a wide range of problems in the deformation and fracture of advanced engineering materials.
Predicting the spread of all invasive forest pests in the United States
Emma J. Hudgins; Andrew M. Liebhold; Brian Leung; Regan Early
2017-01-01
We tested whether a general spread model could capture macroecological patterns across all damaging invasive forest pests in the United States. We showed that a common constant dispersal kernel model, simulated from the discovery date, explained 67.94% of the variation in range size across all pests, and had 68.00% locational accuracy between predicted and observed...
Joseph L. Ganey; William M. Block; Jeffrey S. Jenness; Randolph A. Wilson
1998-01-01
To better understand the habitat relationships of the Mexican spotted owl (Strix occidentalis lucida), and how such relationships might influence forest management, we studied home-range and habitat use of radio-marked owls in ponderosa pine (Pinus ponderosa) Gambel oak (Quercus gambelii) forest. Annual home-range size (95% adaptive-kernel estimate) averaged 895 ha...
Lozenge Tilings of Hexagons with Cuts and Asymptotic Fluctuations: a New Universality Class
NASA Astrophysics Data System (ADS)
Adler, Mark; Johansson, Kurt; van Moerbeke, Pierre
2018-03-01
This paper investigates lozenge tilings of non-convex hexagonal regions and more specifically the asymptotic fluctuations of the tilings within and near the strip formed by opposite cuts in the regions, when the size of the regions tend to infinity, together with the cuts. It leads to a new kernel, which is expected to have universality properties.
Broken rice kernels and the kinetics of rice hydration and texture during cooking.
Saleh, Mohammed; Meullenet, Jean-Francois
2013-05-01
During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments of 0, 40, 150, 350 or 1000 g kg⁻¹ broken kernel ratios. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, the rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated, with the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.
Snake River Plain Geothermal Play Fairway Analysis - Phase 1 Raster Files
John Shervais
2015-10-09
Snake River Plain Play Fairway Analysis - Phase 1 CRS Raster Files. This dataset contains raster files created in ArcGIS. These raster images depict Common Risk Segment (CRS) maps for HEAT, PERMEABILITY, and SEAL, as well as selected maps of evidence layers. These evidence layers consist of either Bayesian krige functions or kernel density functions, and include: (1) HEAT: heat flow (Bayesian krige map), heat flow standard error on the krige function (data confidence), volcanic vent distribution as a function of age and size, groundwater temperature (equal-interval and natural-breaks bins), and groundwater temperature standard error. (2) PERMEABILITY: fault and lineament maps, both as mapped and as kernel density functions, processed for both dilational tendency (TD) and slip tendency (ST), along with data confidence maps for each data type. Data types include mapped surface faults from USGS and Idaho Geological Survey databases, as well as unpublished mapping; lineations derived from maximum gradients in magnetic, deep gravity, and intermediate-depth gravity anomalies. (3) SEAL: seal maps based on the presence and thickness of lacustrine sediments and the base of the SRP aquifer. Raster cell size is 2 km. All files were generated in ArcGIS.
Sparsity-based image monitoring of crystal size distribution during crystallization
NASA Astrophysics Data System (ADS)
Liu, Tao; Huo, Yan; Ma, Cai Y.; Wang, Xue Z.
2017-07-01
To facilitate the monitoring of crystal size distribution (CSD) during a crystallization process using an in-situ imaging system, a sparsity-based image analysis method is proposed for real-time implementation. To cope with image degradation arising from in-situ measurement subject to particle motion, solution turbulence, and an uneven illumination background in the crystallizer, a sparse representation of a real-time captured crystal image is developed using an in-situ image dictionary established in advance, such that the noise components in the captured image can be efficiently removed. Subsequently, the edges of a crystal shape in a captured image are determined from the salience information defined on the denoised crystal images. These edges are used to derive a blur kernel for reconstruction of a denoised image, and a non-blind deconvolution algorithm is given for the real-time reconstruction. Consequently, image segmentation can be easily performed for evaluation of the CSD. The crystal image dictionary and blur kernels are updated in a timely manner according to the imaging conditions to improve the restoration efficiency. An experimental study on the cooling crystallization of α-type L-glutamic acid (LGA) demonstrates the effectiveness and merit of the proposed method.
Discrete bivariate population balance modelling of heteroaggregation processes.
Rollié, Sascha; Briesen, Heiko; Sundmacher, Kai
2009-08-15
Heteroaggregation in binary particle mixtures was simulated with a discrete population balance model in terms of two internal coordinates describing the particle properties. The considered particle species are of different size and zeta-potential. Property space is reduced with a semi-heuristic approach to enable an efficient solution. Aggregation rates are based on deterministic models for Brownian motion and stability, under consideration of DLVO interaction potentials. A charge-balance kernel is presented, relating the electrostatic surface potential to the property space by a simple charge balance. Parameter sensitivity with respect to the fractal dimension, aggregate size, hydrodynamic correction, ionic strength and absolute particle concentration was assessed. Results were compared to simulations with the literature kernel based on geometric coverage effects for clusters with heterogeneous surface properties. In both cases electrostatic phenomena, which dominate the aggregation process, show identical trends: impeded cluster-cluster aggregation at low particle mixing ratio (1:1), restabilisation at high mixing ratios (100:1) and formation of complex clusters for intermediate ratios (10:1). The particle mixing ratio controls the surface coverage extent of the larger particle species. Simulation results are compared to experimental flow cytometric data and show very satisfactory agreement.
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiner, S.; Paschal, C.B.; Galloway, R.L.
Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage to the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
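The order of the interpolation kernel maps directly onto the `order` argument of `scipy.ndimage.map_coordinates` (0 = nearest neighbor, 1 = linear, 3 = cubic), so the comparison can be sketched on a synthetic vessel as follows; the geometry and the crude width metric are illustrative, not the paper's phantom.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Synthetic 2-D slice containing a bright "vessel" (Gaussian cross-section).
y, x = np.mgrid[0:128, 0:128]
slice_ = np.exp(-((x - 64.0) ** 2) / (2 * 2.0 ** 2))   # vessel, sigma ~2 px

def mip_oblique(img, angle_deg, order):
    """Max-intensity projection along parallel oblique rays, sampling the
    image with the given interpolation order (0=NN, 1=linear, 3=cubic)."""
    a = np.deg2rad(angle_deg)
    t = np.linspace(-90, 90, 256)                      # positions along a ray
    s = np.linspace(-90, 90, 256)                      # ray offsets
    rows = 64 + t[None, :] * np.sin(a) + s[:, None] * np.cos(a)
    cols = 64 + t[None, :] * np.cos(a) - s[:, None] * np.sin(a)
    samples = map_coordinates(img, [rows, cols], order=order, cval=0.0)
    return samples.max(axis=1)                         # one maximum per ray

for order, name in [(0, "nearest"), (1, "linear"), (3, "cubic")]:
    profile = mip_oblique(slice_, 30.0, order)
    width = np.sum(profile > 0.5 * profile.max())      # crude FWHM in samples
    print(name, width)
```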
Laser induced spark ignition of methane-oxygen mixtures
NASA Technical Reports Server (NTRS)
Santavicca, D. A.; Ho, C.; Reilly, B. J.; Lee, T.-W.
1991-01-01
Results from an experimental study of laser induced spark ignition of methane-oxygen mixtures are presented. The experiments were conducted at atmospheric pressure and 296 K under laminar pre-mixed and turbulent, incompletely mixed conditions. A pulsed, frequency doubled Nd:YAG laser was used as the ignition source. Laser sparks with energies of 10 mJ and 40 mJ were used, as well as a conventional electrode spark with an effective energy of 6 mJ. Measurements were made of the flame kernel radius as a function of time using pulsed laser shadowgraphy. The initial size of the spark ignited flame kernel was found to correlate reasonably well with breakdown energy as predicted by the Taylor spherical blast wave model. The subsequent growth rate of the flame kernel was found to increase with time from a value less than to a value greater than the adiabatic, unstretched laminar growth rate. This behavior was attributed to the combined effects of flame stretch and an apparent wrinkling of the flame surface due to the extremely rapid acceleration of the flame. The very large laminar flame speed of methane-oxygen mixtures appears to be the dominant factor affecting the growth rate of spark ignited flame kernels, with the mode of ignition having a small effect. Incomplete fuel-oxidizer mixing had a significant effect on the growth rate, one greater than could be accounted for simply by the effect of local variations in the equivalence ratio on the local flame speed.
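The Taylor spherical blast wave scaling mentioned above, R(t) proportional to (E t² / ρ)^(1/5), is easy to evaluate; in the sketch below the proportionality constant, gas density, and delay time are assumed values for illustration only.

```python
import numpy as np

def taylor_radius(E, t, rho, C=1.0):
    """Taylor spherical blast wave: R(t) = C * (E * t**2 / rho)**(1/5).
    C is an order-unity constant that depends on the gas; assumed 1 here."""
    return C * (E * t ** 2 / rho) ** 0.2

rho = 1.2                          # kg/m^3, roughly air at 1 atm (assumed)
t = 10e-6                          # 10 microseconds after breakdown (assumed)
for E_mJ in (10.0, 40.0):          # the two laser spark energies in the abstract
    R = taylor_radius(E_mJ * 1e-3, t, rho)
    print(f"{E_mJ:.0f} mJ -> kernel radius ~ {1e3 * R:.1f} mm")
```

Note the weak E^(1/5) dependence: quadrupling the spark energy grows the early kernel radius by only about 30%, consistent with the abstract's finding that the mode of ignition has a small effect on subsequent growth.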
Hu, Wenjun; Chung, Fu-Lai; Wang, Shitong
2012-03-01
Although pattern classification has been extensively studied in the past decades, how to effectively perform the corresponding training on large datasets is a problem that still requires particular attention. Many kernelized classification methods, such as SVM and SVDD, can be formulated as quadratic programming (QP) problems, but computing the associated kernel matrices requires O(n²) (or even up to O(n³)) computational complexity, where n is the size of the training patterns, which heavily limits the applicability of these methods to large datasets. In this paper, a new classification method called the maximum vector-angular margin classifier (MAMC) is first proposed based on the vector-angular margin to find an optimal vector c in the pattern feature space, and all the testing patterns can be classified in terms of the maximum vector-angular margin ρ between the vector c and all the training data points. Accordingly, it is proved that the kernelized MAMC can be equivalently formulated as the kernelized Minimum Enclosing Ball (MEB), which leads to a distinctive merit of MAMC, i.e., it has the flexibility of controlling the sum of support vectors like ν-SVC and may be extended to a maximum vector-angular margin core vector machine (MAMCVM) by connecting the core vector machine (CVM) method with MAMC such that fast training on large datasets can be effectively achieved. Experimental results on artificial and real datasets are provided to validate the power of the proposed methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
Observation of a 3D Magnetic Null Point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, P.; Falco, M.; Guglielmino, S. L.
2017-03-10
We describe high-resolution observations of a GOES B-class flare characterized by a circular ribbon at the chromospheric level, corresponding to the network at the photospheric level. We interpret the flare as a consequence of a magnetic reconnection event that occurred at a three-dimensional (3D) coronal null point located above the supergranular cell. The potential field extrapolation of the photospheric magnetic field indicates that the circular chromospheric ribbon is cospatial with the fan footpoints, while the ribbons of the inner and outer spines look like compact kernels. We found new interesting observational aspects that need to be explained by models: (1) a loop corresponding to the outer spine became brighter a few minutes before the onset of the flare; (2) the circular ribbon was formed by several adjacent compact kernels characterized by a size of 1″–2″; (3) the kernels with a stronger intensity emission were located at the outer footpoint of the darker filaments, departing radially from the center of the supergranular cell; (4) these kernels started to brighten sequentially in clockwise direction; and (5) the site of the 3D null point and the shape of the outer spine were detected by RHESSI in the low-energy channel between 6.0 and 12.0 keV. Taking into account all these features and the length scales of the magnetic systems involved in the event, we argue that the low intensity of the flare may be ascribed to the low amount of magnetic flux and to its symmetric configuration.
SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.
Cao, Yuan; He, Haibo; Man, Hong
2012-08-01
In this paper, we propose SOMKE, a novel method for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, traditional KDE methods are infeasible because of the high computational cost, processing time, and memory requirements. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters that estimate the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations: creating and merging the SOM sequences. The creation phase produces SOM sequence entries for windows of the data, which capture clustering information about the incoming data streams. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
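A minimal sketch of the idea, assuming a toy 1-D stream, a hand-rolled SOM update, and SciPy's weighted `gaussian_kde` standing in for the paper's estimator over SOM sequence entries:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(-2, 1.0, 5000), rng.normal(3, 0.5, 5000)])
rng.shuffle(stream)

# Tiny 1-D SOM: a line of prototypes updated online from the stream.
m = 20
w = np.linspace(stream.min(), stream.max(), m)      # prototype values
hits = np.zeros(m)                                  # samples mapped to each unit
for i, xi in enumerate(stream):
    lr = 0.5 * (1.0 - i / len(stream))              # decaying learning rate
    b = np.argmin(np.abs(w - xi))                   # best-matching unit
    for j in (b - 1, b, b + 1):                     # neighborhood of width 1
        if 0 <= j < m:
            w[j] += lr * (xi - w[j])
    hits[b] += 1

# KDE over the m prototypes, weighted by hit counts, instead of raw data:
# this is the memory saving, m << stream length.
kde = gaussian_kde(w, weights=hits / hits.sum())
print(kde.evaluate([-2.0, 0.5, 3.0]))               # high, low, high density
```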
Method for preparing spherical thermoplastic particles of uniform size
Day, J.R.
1975-11-17
Spherical particles of thermoplastic material of virtually uniform roundness and diameter are prepared by cutting monofilaments of a selected diameter into rod-like segments of a selected uniform length which are then heated in a viscous liquid to effect the formation of the spherical particles.
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
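A two-layer sketch of the idea is below; the elementary kernels, weights, and the choice of a pointwise exponential activation (which keeps the combination positive semi-definite) are our assumptions, not the paper's learned architecture.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def deep_kernel(X, Y, w1=(0.6, 0.4), w2=1.0):
    """Two-layer deep multiple kernel (a sketch, not the paper's exact form):
    layer 1 mixes elementary kernels, a nonlinear activation is applied,
    and layer 2 rescales the result. A pointwise exponential activation
    preserves positive semi-definiteness (its power series has nonnegative
    coefficients); tanh, for example, would not guarantee this."""
    K1 = w1[0] * rbf_kernel(X, Y, gamma=0.5) \
       + w1[1] * polynomial_kernel(X, Y, degree=2)
    return w2 * np.exp(K1 - 1.0)                     # exp activation

X = np.random.default_rng(2).normal(size=(50, 5))
K = deep_kernel(X, X)
print(np.min(np.linalg.eigvalsh(0.5 * (K + K.T))))  # >= 0 up to round-off: PSD
```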
NASA Astrophysics Data System (ADS)
Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.
2018-02-01
The purpose of this study was to determine the performance, productivity, and feasibility of operating a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy plus by-product energy to input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started with collecting the data needed as energy input, such as palm kernel prices, energy demand, and depreciation of the factory. The energy output and by-products comprise the whole production value, such as the palm kernel oil price and the prices of remaining products such as shells and pulp. The energy equivalence of palm kernel oil was calculated to analyze the Energy Productivity Ratio (EPR) based on processing capacity per year. The investigation was carried out at Kernel Oil Processing Plant PT-X at a Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicated that the processing of palm kernels into palm kernel oil is feasible to operate in terms of energy productivity.
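The EPR arithmetic itself is a one-line ratio; the figures below are hypothetical placeholders chosen only so the ratio lands near the reported 1.54, since the plant's actual energy data are not in the abstract.

```python
# Hypothetical yearly figures (arbitrary energy-equivalent units) for
# illustration; the PT-X plant's real data are not reproduced here.
output_energy = {"palm_kernel_oil": 9.0e6, "shells_and_pulp": 2.5e6}
input_energy = {"kernels": 6.2e6, "plant_operation": 1.1e6, "depreciation": 0.17e6}

epr = sum(output_energy.values()) / sum(input_energy.values())
print(f"EPR = {epr:.2f}, feasible: {epr > 1}")   # the study reports EPR = 1.54
```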
2013-01-01
Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
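A minimal sketch of the Gaussian-kernel baseline on synthetic 0/1/2 genotype codes (the diffusion kernel itself is not reproduced here); the data, gamma, and alpha are illustrative assumptions, not the Holstein or wheat datasets.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
# Hypothetical SNP genotypes coded 0/1/2 and a phenotype with additive
# plus one pairwise-epistatic term, to give the kernel something nonlinear.
X = rng.integers(0, 3, size=(300, 100)).astype(float)
y = X[:, :10].sum(axis=1) + X[:, 0] * X[:, 1] + rng.normal(scale=1.0, size=300)

# Gaussian (RBF) kernel ridge regression, the baseline the diffusion
# kernel was compared against in the study.
model = KernelRidge(kernel="rbf", gamma=1.0 / X.shape[1], alpha=1.0)
model.fit(X[:200], y[:200])
r = np.corrcoef(model.predict(X[200:]), y[200:])[0, 1]
print(f"predictive correlation on held-out genotypes: {r:.2f}")
```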
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L¹ error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
An SVM model with hybrid kernels for hydrological time series
NASA Astrophysics Data System (ADS)
Wang, C.; Wang, H.; Zhao, X.; Xie, Q.
2017-12-01
Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
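A hybrid kernel of this sort can be passed directly to scikit-learn's SVR as a callable; the mixing weight, gamma, degree, and the synthetic monthly-flow series below are all illustrative assumptions, not the paper's data or tuned values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def hybrid_kernel(X, Y, w=0.7, gamma=0.1, degree=2):
    """Convex combination of RBF and polynomial kernels; a sum of valid
    kernels with nonnegative weights is itself a valid kernel."""
    return (w * rbf_kernel(X, Y, gamma=gamma)
            + (1 - w) * polynomial_kernel(X, Y, degree=degree))

rng = np.random.default_rng(4)
t = np.arange(240, dtype=float)                   # 20 years of monthly data
flow = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=5, size=t.size)
X = np.column_stack([flow[i:i - 12] for i in range(12)])   # 12 lagged months
y = flow[12:]                                     # next-month flowrate

svr = SVR(kernel=hybrid_kernel, C=10.0)
svr.fit(X[:-24], y[:-24])                         # hold out the last 2 years
rmse = np.sqrt(np.mean((svr.predict(X[-24:]) - y[-24:]) ** 2))
print(f"test RMSE: {rmse:.2f}")
```

In practice the mixing weight w would be tuned alongside gamma and degree, which is where the claimed flexibility of the hybrid kernel comes from.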
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multiple kernels learning-based biological entity relationship extraction method.
Dongliang, Xu; Jingchang, Pan; Bailing, Wang
2017-09-20
Automatically extracting protein entity interaction information from biomedical literature can help to build protein relation networks and design new drugs. More than 20 million literature abstracts are included in MEDLINE, the most authoritative textual database in the field of biomedicine, and their number grows exponentially over time. This frantic expansion of the biomedical literature can often be difficult to absorb or manually analyze, so efficient and automated search engines are necessary to explore the biomedical literature using text mining techniques. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82%, 69.76%, and 58.61%, respectively, and on the other four evaluation corpora the tag graph kernel method scores 2-5% higher than the all-paths graph kernel. The two fusion methods of the feature kernel and tag graph kernel achieve P, R, and F values of 53.43%, 71.62%, and 61.30%, and of 55.47%, 70.29%, and 60.37%, respectively, indicating that the performance of the two kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernels method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel method used herein on the five corpus sets.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...
7 CFR 810.206 - Grades and grade requirements for barley.
Code of Federal Regulations, 2010 CFR
2010-01-01
... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...
7 CFR 51.2125 - Split or broken kernels.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...
7 CFR 51.2296 - Three-fourths half kernel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...
The Classification of Diabetes Mellitus Using Kernel k-means
NASA Astrophysics Data System (ADS)
Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.
2018-01-01
Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm, which was developed from the k-means algorithm. Kernel k-means uses kernel learning, which can handle data that are not linearly separable; this is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with that of SOM algorithms. The experimental results show that kernel k-means performs well, considerably better than SOM.
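A minimal kernel k-means sketch, using the standard kernel-trick distance to cluster a two-ring dataset that plain k-means cannot separate; the RBF width and synthetic data are illustrative, not the study's clinical dataset.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(X, k, gamma=0.5, n_iter=50, seed=0):
    """Plain kernel k-means: assign each point to the cluster whose mean
    in feature space is nearest, using only the kernel matrix:
    ||phi(x) - mu_c||^2 = K_xx - 2*mean(K_x,c) + mean(K_c,c)."""
    K = rbf_kernel(X, gamma=gamma)
    n = len(X)
    labels = np.random.default_rng(seed).integers(k, size=n)
    for _ in range(n_iter):
        D = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            nc = mask.sum()
            if nc == 0:
                continue                     # empty cluster stays unreachable
            D[:, c] = (np.diag(K) - 2 * K[:, mask].sum(axis=1) / nc
                       + K[np.ix_(mask, mask)].sum() / nc ** 2)
        new = D.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Two concentric rings: not linearly separable, so plain k-means fails here.
rng = np.random.default_rng(5)
a = rng.uniform(0, 2 * np.pi, 400)
r = np.r_[np.full(200, 1.0), np.full(200, 3.0)] + rng.normal(0, 0.1, 400)
X = np.column_stack([r * np.cos(a), r * np.sin(a)])
print(np.bincount(kernel_kmeans(X, 2)))      # ideally ~[200, 200]
```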
UNICOS Kernel Internals Application Development
NASA Technical Reports Server (NTRS)
Caredo, Nicholas; Craw, James M. (Technical Monitor)
1995-01-01
Having an understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code will be discussed for accessing kernel information. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.
Detection of maize kernels breakage rate based on K-means clustering
NASA Astrophysics Data System (ADS)
Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping
2017-04-01
In order to optimize the recognition accuracy and efficiency of maize kernel breakage detection, this paper uses computer vision techniques and detects maize kernel breakage with a K-means clustering algorithm. First, the collected RGB images are converted into Lab images, and the clarity of the original images is evaluated by the energy function of the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using different pixel acquisition equipment and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The image clarity evaluation and the different shooting angles are used to verify that the clarity and shooting angles of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
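A compact sketch of the two ingredients the abstract names, Lab conversion plus K-means on pixel colors and a Sobel energy clarity score, on a synthetic stand-in image; the colors, sizes, and cluster count are assumptions.

```python
import numpy as np
from skimage import color
from sklearn.cluster import KMeans
from scipy.ndimage import sobel

rng = np.random.default_rng(6)
# Stand-in RGB image: a yellow "intact kernel" patch and a pale "broken
# endosperm" patch on a dark background (the real data are photographs).
img = np.zeros((64, 64, 3))
img[8:28, 8:28] = [0.85, 0.70, 0.15]     # intact kernel (yellow)
img[36:56, 36:56] = [0.95, 0.93, 0.85]   # broken kernel (pale endosperm)
img += rng.normal(0, 0.02, img.shape).clip(-0.1, 0.1)

# Lab separates color from lightness, which helps the color-based split.
lab = color.rgb2lab(img.clip(0, 1))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    lab.reshape(-1, 3))
print(np.bincount(labels))               # background / intact / broken counts

# Sobel-based clarity energy, usable to reject blurred frames beforehand.
gray = img.mean(axis=2)
clarity = np.mean(sobel(gray, 0) ** 2 + sobel(gray, 1) ** 2)
print(f"clarity energy: {clarity:.4f}")
```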
Modeling adaptive kernels from probabilistic phylogenetic trees.
Nicotra, Luca; Micheli, Alessio
2009-01-01
Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.
Aflatoxin and nutrient contents of peanut collected from local market and their processed foods
NASA Astrophysics Data System (ADS)
Ginting, E.; Rahmianna, A. A.; Yusnawan, E.
2018-01-01
Peanut is susceptible to aflatoxin contamination, and the source of the peanuts as well as the processing method considerably affect the aflatoxin content of the products. Therefore, a study on the aflatoxin and nutrient contents of peanuts collected from a local market and their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were processed only into tempe. The results showed that good and blended kernels, which had high proportions of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. Aflatoxin B1 increased most in tempe prepared from poor kernels, followed by blended and good kernels; however, it decreased by an average of 61.2% after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels as ingredients, followed by heat processing, would decrease the aflatoxin content of the products.
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
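A Wiener-style stand-in for the idea of inverting only the reliable Fourier entries of the estimated kernel is sketched below; the threshold, noise-to-signal ratio, and test image are assumptions, and the paper's actual E-M algorithm is not reproduced.

```python
import numpy as np

def partial_wiener_deconv(blurred, kernel_est, reliable_thresh=0.1, nsr=1e-2):
    """Sketch of 'partial' deconvolution: only Fourier entries where the
    estimated kernel has non-negligible magnitude are inverted; the rest
    pass through unchanged, limiting ringing from kernel estimation error."""
    H = np.fft.fft2(kernel_est, s=blurred.shape)
    B = np.fft.fft2(blurred)
    reliable = np.abs(H) > reliable_thresh * np.abs(H).max()   # partial map
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)                    # Wiener filter
    X = np.where(reliable, B * W, B)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(7)
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0              # sharp square
k = np.outer(np.hanning(9), np.hanning(9)); k /= k.sum()       # blur kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
restored = partial_wiener_deconv(blurred + rng.normal(0, 0.01, img.shape), k)
print(f"mean restoration error: {np.abs(restored - img).mean():.3f}")
```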
Witayaudom, Pimchanok; Klinkesorn, Utai
2017-11-01
A nanostructured lipid carrier (NLC) was fabricated from rambutan (Nephelium lappaceum L.) kernel fat stabilized with Tween 80 in the present work. The influence of the Tween 80 concentration (0.025, 0.05, 0.1, 0.2, 0.5 and 1.0 wt%) and solidification temperature (5 and 25°C) on the characteristics and stability of the NLC was investigated. The results showed that an increase in the Tween 80 concentration decreased the zeta-potential (ζ-potential) and particle size (Z-average), with no significant effect on the polydispersity index (PDI). Lipid particles in the NLC at all Tween 80 concentrations had a tendency to grow, and the PDI tended to increase due to Ostwald ripening upon storage over 28 days. A Tween 80 concentration of at least 0.2 wt% was needed to stabilize 1 wt% rambutan NLC. The solidification temperature affected the microstructure, melting behavior and stability of the rambutan NLC. Pre-solidification at 5°C could create a stable NLC with monodispersed spherical lipid particles. Consequently, these stable NLC particles produced from rambutan kernel fat may serve as useful carriers for the delivery of bioactive lipophilic nutraceuticals. Copyright © 2017 Elsevier Inc. All rights reserved.
Technical Note: Dose gradients and prescription isodose in orthovoltage stereotactic radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, Jessica M., E-mail: fagerstrom@wisc.edu; Bender, Edward T.; Culberson, Wesley S.
Purpose: The purpose of this work is to examine the trade-off between prescription isodose and dose gradients in orthovoltage stereotactic radiosurgery. Methods: Point energy deposition kernels (EDKs) describing photon and electron transport were calculated using Monte Carlo methods. EDKs were generated from 10 to 250 keV, in 10 keV increments. The EDKs were converted to pencil beam kernels and used to calculate dose profiles through isocenter from a 4π isotropic delivery from all angles of circularly collimated beams. Monoenergetic beams and an orthovoltage polyenergetic spectrum were analyzed. The dose gradient index (DGI) is the ratio of the 50% prescription isodose volume to the 100% prescription isodose volume and represents a metric by which dose gradients in stereotactic radiosurgery (SRS) may be evaluated. Results: Using the 4π dose profiles calculated using pencil beam kernels, the relationship between DGI and prescription isodose was examined for circular cones ranging from 4 to 18 mm in diameter and monoenergetic photon beams with energies ranging from 20 to 250 keV. Values were found to exist for prescription isodose that optimize DGI. Conclusions: The relationship between DGI and prescription isodose was found to be dependent on both field size and energy. Examining this trade-off is an important consideration for designing optimal SRS systems.
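The DGI definition translates directly into a ratio of voxel counts; the sketch below uses an artificial exponentially decaying dose so that the dependence of DGI on the choice of prescription isodose is visible. All numbers are illustrative.

```python
import numpy as np

def dose_gradient_index(dose, rx):
    """DGI = (volume receiving >= 50% of prescription)
           / (volume receiving >= 100% of prescription)."""
    v50 = np.count_nonzero(dose >= 0.5 * rx)
    v100 = np.count_nonzero(dose >= rx)
    return v50 / v100

# Toy spherically symmetric dose, ~100 units at isocenter, exponential falloff.
grid = np.mgrid[-40:41, -40:41, -40:41]
r = np.sqrt((grid.astype(float) ** 2).sum(axis=0))
dose = 100.0 * np.exp(-r / 8.0)

for rx in (80.0, 50.0, 30.0):            # candidate prescription isodoses
    print(rx, round(dose_gradient_index(dose, rx), 2))
```

For this toy falloff the DGI changes sharply with the prescription level, which is the trade-off the paper quantifies with Monte Carlo kernels.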
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas
2016-01-06
In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
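The two crossbar kernels reduce to familiar linear algebra when idealized; the sketch below ignores device noise, nonlinearity, and precision limits, which are exactly what the hardware work has to contend with.

```python
import numpy as np

# Idealized N x N resistive crossbar: conductance matrix G, input voltages v.
rng = np.random.default_rng(8)
N = 256
G = rng.uniform(0.0, 1.0, size=(N, N))     # device conductances (arbitrary units)

# Kernel 1 -- parallel read: the output currents are one vector-matrix
# multiply, i = G @ v, performed in a single analog step across the array.
v = rng.normal(size=N)
i = G @ v

# Kernel 2 -- parallel write: a rank-1 update applied to every device at
# once by driving rows and columns simultaneously.
eta = 1e-3
G += eta * np.outer(rng.normal(size=N), rng.normal(size=N))

print(i.shape, G.shape)
```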
Online Pairwise Learning Algorithms.
Ying, Yiming; Zhou, Ding-Xuan
2016-04-01
Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using its associated integral operators and probability inequalities for random variables with values in a Hilbert space.
NASA Astrophysics Data System (ADS)
Uslu, Faruk Sukru
2017-07-01
Oil spills on the ocean surface cause serious environmental, political, and economic problems. Therefore, these catastrophic threats to marine ecosystems require detection and monitoring. Hyperspectral sensors are powerful optical sensors used for oil spill detection with the help of detailed spectral information of materials. However, huge amounts of data in hyperspectral imaging (HSI) require fast and accurate computation methods for detection problems. Support vector data description (SVDD) is one of the most suitable methods for detection, especially for large data sets. Nevertheless, the selection of kernel parameters is one of the main problems in SVDD. This paper presents a method, inspired by ensemble learning, for improving performance of SVDD without tuning its kernel parameters. Additionally, a classifier selection technique is proposed to get more gain. The proposed approach also aims to solve the small sample size problem, which is very important for processing high-dimensional data in HSI. The algorithm is applied to two HSI data sets for detection problems. In the first HSI data set, various targets are detected; in the second HSI data set, oil spill detection in situ is realized. The experimental results demonstrate the feasibility and performance improvement of the proposed algorithm for oil spill detection problems.
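The ensemble idea can be sketched with scikit-learn's OneClassSVM standing in for SVDD (the two are closely related for RBF kernels); the gamma grid, voting rule, and synthetic spectra below are assumptions, not the paper's method or hyperspectral data.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(9)
# Stand-in "background" spectra (training) and a test set whose last five
# rows are anomalous "oil" spectra; not real hyperspectral measurements.
background = rng.normal(0.0, 1.0, size=(500, 30))
test = np.vstack([rng.normal(0, 1, (95, 30)), rng.normal(3, 1, (5, 30))])

# Ensemble over a grid of kernel widths instead of tuning a single gamma,
# in the spirit of the paper's ensemble-learning approach.
gammas = [0.001, 0.01, 0.1, 1.0]
votes = np.zeros(len(test))
for g in gammas:
    clf = OneClassSVM(kernel="rbf", gamma=g, nu=0.05).fit(background)
    votes += (clf.predict(test) == -1)          # -1 = flagged as outlier

detected = votes >= len(gammas) / 2             # simple majority vote
print(detected.nonzero()[0])                    # ideally indices 95..99
```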
Measurement of Flaw Size From Thermographic Data
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.
2015-01-01
Simple methods for reducing the pulsed thermographic responses of delaminations tend to overestimate the size of the delamination, since the heat diffuses in the plane parallel to the surface. The result is a temperature profile over the delamination which is larger than the delamination size. A variational approach is presented for reducing the thermographic data to produce an estimated size for a flaw that is much closer to the true size of the delamination. The method is based on an estimate for the thermal response that is a convolution of a Gaussian kernel with the shape of the flaw. The size is determined from both the temporal and spatial thermal response of the exterior surface above the delamination and constraints on the length of the contour surrounding the delamination. Examples of the application of the technique to simulation and experimental data are presented to investigate the limitations of the technique.
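The forward model, a flaw indicator convolved with a Gaussian kernel, has a closed form for a strip flaw (a difference of error functions), so the true width can be recovered by least squares even though the raw profile overestimates it. Everything in the sketch below (flaw width, blur, noise) is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def blurred_flaw(x, amp, width, sigma, x0):
    """Surface temperature over a strip flaw of the given width: the flaw's
    indicator function convolved with a Gaussian kernel of scale sigma."""
    return 0.5 * amp * (erf((x - x0 + width / 2) / (np.sqrt(2) * sigma))
                        - erf((x - x0 - width / 2) / (np.sqrt(2) * sigma)))

x = np.linspace(-20, 20, 400)                    # mm across the surface
true = blurred_flaw(x, 1.0, 6.0, 3.0, 0.0)       # 6 mm flaw, heavy blur
noisy = true + np.random.default_rng(10).normal(0, 0.01, x.size)

popt, _ = curve_fit(blurred_flaw, x, noisy, p0=[1.0, 10.0, 1.0, 0.0])
apparent = np.sum(true > 0.5 * true.max()) * (x[1] - x[0])
print(f"apparent FWHM: {apparent:.1f} mm, fitted width: {popt[1]:.1f} mm")
```

The apparent half-max width comes out well above 6 mm, illustrating the overestimation the abstract describes, while the model fit recovers the true width.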
Romero, Pascual; Navarro, Josefa Maria; García, Francisco; Botía Ordaz, Pablo
2004-03-01
We investigated the effects of regulated deficit irrigation (RDI) during the pre-harvest period (kernel-filling stage) on water relations, leaf development and crop yield in mature almond (Prunus dulcis (Mill.) D.A. Webb cv. Cartagenera) trees during a 2-year field experiment. Trees were either irrigated at full-crop evapotranspiration (ETc = 100%) (well-irrigated control treatment) or subjected to an RDI treatment that consisted of full irrigation for the full season, except from early June to early August (kernel-filling stage), when 20% ETc was applied. The severity of water stress was characterized by measurements of soil water content, predawn leaf water potential (Ψpd) and relative water content (RWC). Stomatal conductance (gs), net CO2 assimilation rate (A), transpiration rate (E), leaf abscission, leaf expansion rate and crop yield were also measured. In both years, Ψpd and RWC of well-irrigated trees were maintained above -1.0 MPa and 92%, respectively, whereas the corresponding values for trees in the RDI treatment were -2.37 MPa and 82%. Long-term water stress led to a progressive decline in gs, A and E, with significant reductions after 21 days in the RDI treatment. At the time of maximum stress (48 days after commencement of RDI), A, gs and E were 64, 67 and 56% lower than control values, respectively. High correlations between A, E and gs were observed. Plant water status recovered within 15 days after the resumption of irrigation and was associated with recovery of soil water content. A relatively rapid and complete recovery of A and gs was also observed, although the recovery was slower than for Ψpd and RWC. Severe water stress during the kernel-filling stage resulted in premature defoliation (caused by increased leaf abscission) and a reduction in leaf growth rate, which decreased tree leaf area. Although kernel yield was correlated with leaf water potential, RDI caused a nonsignificant 7% reduction in kernel yield and had no effect on kernel size. The RDI treatment also improved water-use efficiency because about 30% less irrigation water was applied in the RDI treatment than in the control treatment. We conclude that high-cropping almonds can be successfully grown in semiarid regions in an RDI regime provided that Ψpd is maintained above a threshold value of -2 MPa.
Ward, B. F. L.
2008-01-01
We show that it is possible to improve the infrared aspects of the standard treatment of the DGLAP-CS evolution theory to take into account a large class of higher-order corrections that significantly improve the precision of the theory for any given level of fixed-order calculation of its respective kernels. We illustrate the size of the effects we resum using the moments of the parton distributions.
Modelling Nonlinear Dynamic Textures using Hybrid DWT-DCT and Kernel PCA with GPU
NASA Astrophysics Data System (ADS)
Ghadekar, Premanand Pralhad; Chopade, Nilkanth Bhikaji
2016-12-01
Most real-world dynamic textures are nonlinear, non-stationary, and irregular. Nonlinear motion also has some repetition of motion, but it exhibits high variation, stochasticity, and randomness. Hybrid DWT-DCT and Kernel Principal Component Analysis (KPCA) with YCbCr/YIQ colour coding using the Dynamic Texture Unit (DTU) approach is proposed to model a nonlinear dynamic texture, which provides better results than state-of-the-art methods in terms of PSNR, compression ratio, model coefficients, and model size. The dynamic texture is decomposed into DTUs as they help to extract temporal self-similarity. Hybrid DWT-DCT is used to extract spatial redundancy. YCbCr/YIQ colour encoding is performed to capture chromatic correlation. KPCA is applied to capture nonlinear motion. Further, the proposed algorithm is implemented on a Graphics Processing Unit (GPU), which comprises hundreds of small processors, to decrease time complexity and to achieve parallelism.
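The KPCA stage can be sketched with scikit-learn; the toy trajectory below stands in for vectorized DTU coefficients, and the kernel choice and width are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(11)
# Stand-in for per-frame feature vectors (e.g., DWT-DCT coefficients of a
# Dynamic Texture Unit); here a noisy nonlinear trajectory over time.
t = np.linspace(0, 4 * np.pi, 200)
frames = np.column_stack([np.sin(t), np.sin(2 * t), np.cos(t) ** 2])
frames += rng.normal(0, 0.05, frames.shape)

# Kernel PCA captures the nonlinear motion manifold; inverse_transform
# gives the reconstruction one would use when re-synthesizing the texture.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0,
                 fit_inverse_transform=True)
codes = kpca.fit_transform(frames)
recon = kpca.inverse_transform(codes)
print(f"reconstruction RMSE: {np.sqrt(np.mean((recon - frames) ** 2)):.3f}")
```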
Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features
NASA Astrophysics Data System (ADS)
Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios
2018-04-01
We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm, operating in a reproducing kernel Hilbert space (RKHS), is the need for updating a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way to use standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the networkwise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.
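The fixed-size approximation is the classical Random Fourier Features construction; a self-contained sketch follows, with the feature dimensionality and kernel bandwidth chosen only for the demonstration.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def rff_features(X, D=2000, gamma=0.5, seed=0):
    """Random Fourier Features: z(x) = sqrt(2/D) * cos(W x + b) with
    W ~ N(0, 2*gamma*I) and b ~ U[0, 2*pi), so that
    z(x)' z(y) ~= exp(-gamma * ||x - y||^2). A learner then works on the
    fixed D-dimensional z instead of a growing kernel expansion."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(12).normal(size=(100, 10))
K_exact = rbf_kernel(X, gamma=0.5)
Z = rff_features(X, D=2000, gamma=0.5)
print(np.abs(Z @ Z.T - K_exact).max())     # small approximation error
```

Because each node only ever exchanges the fixed-length coefficient vector on z, the communication cost no longer grows with the number of processed samples, which is the point the abstract makes.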
Is there a single best estimator? Selection of home range estimators using area-under-the-curve
Walter, W. David; Onorato, Dave P.; Fischer, Justin W.
2015-01-01
Comparisons of the fit of home range contours with the locations collected suggest that VHF technology is not as accurate as GPS technology for estimating home range size in large mammals. Estimators of home range computed from GPS data performed better than those estimated with VHF technology, regardless of the estimator used. Furthermore, estimators that incorporate a temporal component (third-generation estimators) appeared to be the most reliable, regardless of whether kernel-based or Brownian bridge-based algorithms were used, and in comparison to first- and second-generation estimators. We define third-generation estimators of home range as any estimator that incorporates time, space, animal-specific parameters, and habitat. Such estimators include movement-based kernel density, Brownian bridge movement models, and dynamic Brownian bridge movement models, among others that have yet to be evaluated.
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2012 CFR
2012-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2011 CFR
2011-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2013 CFR
2013-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2010 CFR
2010-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
7 CFR 981.401 - Adjusted kernel weight.
Code of Federal Regulations, 2014 CFR
2014-01-01
... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...
7 CFR 51.1403 - Kernel color classification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2010 CFR
2010-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2011 CFR
2011-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
7 CFR 51.1450 - Serious damage.
Code of Federal Regulations, 2012 CFR
2012-01-01
...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...
NASA Astrophysics Data System (ADS)
Du, Peijun; Tan, Kun; Xing, Xiaoshi
2010-12-01
Combining Support Vector Machines (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces a bottleneck in kernel parameter selection, which leads to time-consuming training and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions; implications for semiparametric estimation are also discussed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and with an SVM classifier using a Radial Basis Function kernel, the proposed WSVM classifier using a wavelet kernel function in Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
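As an illustration of the general idea, a minimal Python sketch of a wavelet-kernel SVM follows. It uses the standard translation-invariant Morlet-type wavelet kernel rather than the paper's Coiflet kernel, and the data shapes are hypothetical:

    import numpy as np
    from sklearn.svm import SVC

    def wavelet_kernel(X, Y, a=1.0):
        # Translation-invariant wavelet kernel: K(x, y) = prod_i h((x_i - y_i)/a)
        # with mother wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2).
        D = (X[:, None, :] - Y[None, :, :]) / a
        return np.prod(np.cos(1.75 * D) * np.exp(-0.5 * D ** 2), axis=2)

    # Hypothetical hyperspectral pixels (rows) x bands (columns) and class labels.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((200, 64)), rng.integers(0, 5, 200)
    X_test = rng.random((50, 64))

    clf = SVC(kernel="precomputed").fit(wavelet_kernel(X_train, X_train), y_train)
    labels = clf.predict(wavelet_kernel(X_test, X_train))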
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels that are seen as different descriptions of the data. As MKL-DR does not involve regularization, it can be ill-posed under some conditions, which hinders its application. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a lower-dimensional space and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. Solutions for the proposed framework are found via trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark text, image, and sound datasets in supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
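The core subproblem here, maximizing tr(V'AV)/tr(V'BV) over orthonormal V, can be solved by the standard iterative eigen-decomposition scheme; a minimal sketch (this is only the trace-ratio step, not the full MKL-TR algorithm, which also updates kernel weights):

    import numpy as np

    def trace_ratio(A, B, d, iters=50):
        # Iteratively solve max_V tr(V'AV) / tr(V'BV) s.t. V'V = I,
        # for symmetric A, B: at each step take the top-d eigenvectors
        # of A - lam * B, then update the ratio lam.
        lam = 0.0
        for _ in range(iters):
            _, V = np.linalg.eigh(A - lam * B)
            Vd = V[:, -d:]                        # top-d eigenvectors
            lam = np.trace(Vd.T @ A @ Vd) / np.trace(Vd.T @ B @ Vd)
        return Vd, lam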
Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar
2017-01-01
Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Many kernel-based approaches, such as linear kernels, tree kernels, graph kernels, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) information and semantic vectors, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better f-score on all five corpora compared with other state-of-the-art systems. PMID:29099838
Hadamard Kernel SVM with applications for breast cancer outcome predictions.
Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong
2017-12-21
Breast cancer is one of the leading causes of death for women, so it is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome prediction. Kernel SVM, for its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) on a number of real-world data sets adopted to test the performance of the different methods. Hadamard Kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, in contrast to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly well suited to big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and of the SVM-pairwise method combined with Smith-Waterman (SW) scoring, at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernels, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
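A sketch of the underlying idea in Python: parse each sequence once with LZW to collect its code words, then compare code-word sets. The similarity below is a simple cosine-style overlap chosen for illustration; the published LZW-Kernel uses its own formula:

    def lzw_codewords(seq):
        # One-pass LZW parse: grow the current phrase while it is in the
        # dictionary; otherwise record the new phrase and restart from c.
        words = set(seq)                 # initialize with single symbols
        w = seq[0]
        for c in seq[1:]:
            if w + c in words:
                w += c
            else:
                words.add(w + c)
                w = c
        return words

    def lzw_similarity(x, y):
        # Illustrative overlap score: symmetric, and 1.0 for self-similarity.
        wx, wy = lzw_codewords(x), lzw_codewords(y)
        return len(wx & wy) / (len(wx) * len(wy)) ** 0.5

    print(lzw_similarity("MKVLAAGLLP", "MKVLAAGLLP"))   # 1.0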
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
Evaluating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
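For context, a standard form of the exact thin wire (cylindrical-surface) kernel whose gradient is being evaluated, in the usual notation (wire radius a, wavenumber k, e^{-jkR} convention); this is quoted as background, not as the paper's new expression:

    K(z - z') = \frac{1}{2\pi} \int_{0}^{2\pi} \frac{e^{-jkR}}{4\pi R}\, d\phi',
    \qquad R = \sqrt{(z - z')^2 + 4a^2 \sin^2(\phi'/2)}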
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
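A minimal numpy sketch of one strategy described above, forming a composite kernel from candidate kernels and computing a kernel machine score statistic; the variance components, p-value machinery, and perturbation procedure are omitted, and the data are simulated:

    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.integers(0, 3, size=(200, 30)).astype(float)   # 0/1/2 genotypes
    y = rng.normal(size=200)                               # quantitative trait

    K_lin = G @ G.T                                        # linear kernel
    p = G.shape[1]                                         # IBS kernel below
    K_ibs = (2 * p - np.abs(G[:, None, :] - G[None, :, :]).sum(axis=2)) / (2 * p)

    # Composite kernel: scale the candidates comparably, then combine.
    K_comp = K_lin / np.trace(K_lin) + K_ibs / np.trace(K_ibs)

    r = y - y.mean()                  # residuals under an intercept-only null
    Q = r @ K_comp @ r                # kernel machine score statistic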
Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki
2014-01-01
The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enable diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels above 100 Hounsfield units in both the brain and bone images took the CT values of the bone images, while all other pixels took the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images displayed identically under brain window settings. All three radiologists agreed that the improved multi-kernel images under bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Simplified head CT examinations and fewer stored images can thus be expected.
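The combination rule itself is one line of array logic; a minimal Python sketch with hypothetical reconstructed images:

    import numpy as np

    def combine_multikernel(brain, bone, threshold=100.0):
        # Pixels above 100 HU in BOTH reconstructions take the bone-kernel
        # CT value; all other pixels take the brain-kernel CT value.
        mask = (brain > threshold) & (bone > threshold)
        return np.where(mask, bone, brain)

    brain = np.random.normal(40, 10, size=(512, 512))   # low-pass (brain) image
    bone = np.random.normal(40, 200, size=(512, 512))   # high-pass (bone) image
    combined = combine_multikernel(brain, bone)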
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2013 CFR
2013-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
7 CFR 810.202 - Definition of other terms.
Code of Federal Regulations, 2012 CFR
2012-01-01
... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...
Aflatoxin variability in pistachios.
Mahoney, N E; Rodriguez, S B
1996-01-01
Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781
graphkernels: R and Python packages for graph comparison.
Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten
2018-02-01
Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
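A minimal usage sketch of the Python package (function and module names as recalled from the package documentation; verify against the PyPI page before use):

    import igraph
    import graphkernels.kernels as gk

    # Five random graphs as igraph objects.
    graphs = [igraph.Graph.Erdos_Renyi(n=10, p=0.3) for _ in range(5)]

    # Weisfeiler-Lehman kernel with 3 iterations; returns a 5 x 5 Gram matrix
    # usable for SVM classification, kernel regression, or clustering.
    K = gk.CalculateWLKernel(graphs, par=3)
    print(K.shape)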
TURBULENCE-INDUCED RELATIVE VELOCITY OF DUST PARTICLES. IV. THE COLLISION KERNEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Liubin; Padoan, Paolo, E-mail: lpan@cfa.harvard.edu, E-mail: ppadoan@icc.ub.edu
Motivated by its importance for modeling dust particle growth in protoplanetary disks, we study turbulence-induced collision statistics of inertial particles as a function of the particle friction time, τ_p. We show that turbulent clustering significantly enhances the collision rate for particles of similar sizes with τ_p corresponding to the inertial range of the flow. If the friction time, τ_p,h, of the larger particle is in the inertial range, the collision kernel per unit cross section increases with increasing friction time, τ_p,l, of the smaller particle and reaches its maximum at τ_p,l = τ_p,h, where the clustering effect peaks. This feature is not captured by the commonly used kernel formula, which neglects the effect of clustering. We argue that turbulent clustering helps alleviate the bouncing barrier problem for planetesimal formation. We also investigate the collision velocity statistics using a collision-rate weighting factor to account for the higher collision frequency of particle pairs with larger relative velocity. For τ_p,h in the inertial range, the rms relative velocity with collision-rate weighting is found to be invariant with τ_p,l and scales with τ_p,h roughly as ∝ τ_p,h^(1/2). The weighting factor favors collisions with larger relative velocity, and including it leads to more destructive and less sticking collisions. We compare two collision kernel formulations based on spherical and cylindrical geometries. The two formulations give consistent results for the collision rate and the collision-rate weighted statistics, except that the spherical formulation predicts more head-on collisions than the cylindrical formulation.
Microscopic analysis of irradiated AGR-1 coated particle fuel compacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott A. Ploger; Paul A. Demkowicz; John D. Hunn
The AGR-1 experiment involved irradiation of 72 TRISO-coated particle fuel compacts to a peak compact-average burnup of 19.5% FIMA, with no in-pile failures observed out of 3 × 10^5 total particles. Irradiated AGR-1 fuel compacts have been cross-sectioned and analyzed with optical microscopy to characterize kernel, buffer, and coating behavior. Six compacts have been examined, spanning a range of irradiation conditions (burnup, fast fluence, and irradiation temperature) and including all four TRISO coating variations irradiated in the AGR-1 experiment. The cylindrical specimens were sectioned both transversely and longitudinally, then polished to expose from 36 to 79 individual particles near the midplane on each mount. The analysis focused primarily on kernel swelling and porosity, buffer densification and fracturing, buffer–IPyC debonding, and fractures in the IPyC and SiC layers. Characteristic morphologies have been identified, 981 particles have been classified, and spatial distributions of particle types have been mapped. No significant spatial patterns were discovered in these cross sections. However, some trends were found between morphological types and certain behavioral aspects. Buffer fractures were found in 23% of the particles, and these fractures often resulted in unconstrained kernel protrusion into the open cavities. Fractured buffers and buffers that stayed bonded to IPyC layers appear related to larger pore size in kernels. Buffer–IPyC interface integrity evidently factored into the initiation of rare IPyC fractures. Fractures through part of the SiC layer were found in only four classified particles, all in conjunction with IPyC–SiC debonding. Compiled results suggest that the deliberate coating fabrication variations influenced the frequencies of IPyC fractures and IPyC–SiC debonds.
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
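A minimal numpy sketch of the weighted eigenfunction expansion, assuming the Laplace-Beltrami eigenvalues and orthonormal eigenfunctions of the surface mesh have already been computed (how they are obtained is outside this snippet):

    import numpy as np

    def heat_kernel_smooth(f, evals, evecs, t=1.0):
        # Expand the surface signal f in Laplace-Beltrami eigenfunctions
        # (columns of evecs) and damp coefficient i by the heat-kernel
        # weight exp(-lambda_i * t); larger t means heavier smoothing.
        beta = evecs.T @ f
        return evecs @ (np.exp(-evals * t) * beta)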
Lu, Jennifer Q; Yi, Sung Soo
2006-04-25
A monolayer of gold-containing surface micelles has been produced by spin-coating solution micelles formed by the self-assembly of a gold-modified polystyrene-b-poly(2-vinylpyridine) block copolymer in toluene. After oxygen plasma removed the block copolymer template, highly ordered and uniformly sized nanoparticles were generated. Unlike other published methods that require reduction treatments to form gold nanoparticles in the zero-valent state, these as-synthesized nanoparticles are in the form of metallic gold. The gold nanoparticles have been demonstrated to be an excellent catalyst system for growing small-diameter silicon nanowires, and their uniform size has enabled the controllable synthesis of silicon nanowires with a narrow diameter distribution. Because a monolayer of surface micelles can be formed with a high degree of order, evenly distributed gold nanoparticles have been produced on the surface; as a result, uniformly distributed, high-density silicon nanowires have been generated. The process described herein is fully compatible with existing semiconductor processing techniques and can be readily integrated into device fabrication.
NASA Astrophysics Data System (ADS)
Palakurthi, Nikhil Kumar; Ghia, Urmila; Comer, Ken
2013-11-01
Capillary penetration of liquid through fibrous porous media is important in many applications such as printing, drug delivery patches, sanitary wipes, and performance fabrics. Historically, capillary transport (with a distinct liquid propagating front) in porous media is modeled using capillary-bundle theory. However, it is not clear if the capillary model (Washburn equation) describes the fluid transport in porous media accurately, as it assumes uniformity of pore sizes in the porous medium. The present work investigates the limitations of the applicability of the capillary model by studying liquid penetration through virtual fibrous media with uniform and non-uniform pore-sizes. For the non-uniform-pore fibrous medium, the effective capillary radius of the fibrous medium was estimated from the pore-size distribution curve. Liquid penetration into the 3D virtual fibrous medium at micro-scale was simulated using OpenFOAM, and the numerical results were compared with the Washburn-equation capillary-model predictions. Preliminary results show that the Washburn equation over-predicts the height rise in the early stages (purely inertial and visco-inertial stages) of capillary transport.
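For reference, the capillary-bundle benchmark being tested is the classical Washburn relation for penetration length ℓ into a tube of effective radius r (surface tension γ, contact angle θ, dynamic viscosity μ), which holds only after the inertial stages the abstract mentions:

    \ell^2(t) = \frac{\gamma\, r\, \cos\theta}{2\mu}\, t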
NASA Astrophysics Data System (ADS)
Sun, Yun-Fei; Chen, Dan; Lin, Zhen-Quan; Ke, Jian-Hong
2009-06-01
We propose a solvable aggregation model to mimic the evolution of population A, asset B, and the quantifiable resource C in a society. In this system, the population and asset aggregates grow through self-exchange with the rate kernels K_1(k, j) = K_1 k j and K_2(k, j) = K_2 k j, respectively. The actions of the population and asset aggregations on the evolution of resource aggregates are described by population-catalyzed monomer death of resource aggregates and asset-catalyzed monomer birth of resource aggregates, with the rate kernels J_1(k, j) = J_1 k and J_2(k, j) = J_2 k, respectively. Meanwhile, the asset and resource aggregates jointly catalyze the monomer birth of population aggregates with the rate kernel I_1(k, i, j) = I_1 k i^μ j^η, and the population and resource aggregates jointly catalyze the monomer birth of asset aggregates with the rate kernel I_2(k, i, j) = I_2 k i^ν j^η. The kinetic behaviors of species A, B, and C are investigated by means of the mean-field rate equation approach. The effects of the population-catalyzed death and asset-catalyzed birth on the evolution of resource aggregates, based on the self-exchange of population and asset, appear in effective forms, with the coefficients of the effective population-catalyzed death and asset-catalyzed birth expressed as J_1e = J_1/K_1 and J_2e = J_2/K_2, respectively. The aggregate size distribution of species C is found to be crucially dominated by the competition between the effective death and the effective birth. It satisfies the conventional scaling form, a generalized scaling form, and a modified scaling form in the cases of J_1e < J_2e, J_1e = J_2e, and J_1e > J_2e, respectively. Meanwhile, we also find that the aggregate size distributions of populations and assets both fall into two distinct categories for different parameters μ, ν, and η: (i) when μ = ν = η = 0, or μ = ν = 0 and η = 1, the population and asset aggregates obey generalized scaling forms; and (ii) when μ = ν = 1 and η = 0, or μ = ν = η = 1, the population and asset aggregates experience gelation transitions at finite times and the scaling forms break down.
Code of Federal Regulations, 2010 CFR
2010-01-01
...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...
Code of Federal Regulations, 2013 CFR
2013-01-01
... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...
Code of Federal Regulations, 2014 CFR
2014-01-01
... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...
7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...
Coded aperture imaging with uniformly redundant arrays
Fenimore, Edward E.; Cannon, Thomas M.
1980-01-01
A system utilizing uniformly redundant arrays to image non-focusable radiation. The uniformly redundant array is used in conjunction with a balanced correlation technique to provide a system with no artifacts such that virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. Additionally, the array is mosaicked to reduce required detector size over conventional array detectors.
Coded aperture imaging with uniformly redundant arrays
Fenimore, Edward E.; Cannon, Thomas M.
1982-01-01
A system utilizing uniformly redundant arrays to image non-focusable radiation. The uniformly redundant array is used in conjunction with a balanced correlation technique to provide a system with no artifacts such that virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. Additionally, the array is mosaicked to reduce required detector size over conventional array detectors.
Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging
NASA Astrophysics Data System (ADS)
Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.
2017-03-01
A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two OTA-producing Penicillium verrucosum strains, kernels inoculated with two non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into two-dimensional form, and Principal Component Analysis (PCA) was applied to identify the key wavelengths with the greatest significance for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic, and Mahalanobis statistical discriminant models to differentiate between the sterile controls, the five concentration levels of OTA contamination in wheat kernels, and the five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA contamination and between the five infection levels of non-OTA-producing P. verrucosum inoculation with correct classification above 98%. The non-OTA-producing P. verrucosum inoculated wheat kernels and the OTA-contaminated wheat kernels showed different spectral patterns under hyperspectral imaging.
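A schematic Python pipeline of the reshape, PCA, discriminant-model chain described above. The data are simulated and the per-kernel mean spectrum stands in for the statistical and histogram features a real pipeline would extract from regions of interest:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    cube = rng.random((300, 8, 8, 120))                  # kernels x rows x cols x bands
    spectra = cube.reshape(300, -1, 120).mean(axis=1)    # mean spectrum per kernel
    labels = rng.integers(0, 6, 300)                     # control + 5 OTA levels

    scores = PCA(n_components=10).fit_transform(spectra) # key-wavelength surrogate
    lda = LinearDiscriminantAnalysis().fit(scores, labels)
    print(lda.score(scores, labels))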
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem, and anatomical guidance can make the reconstruction markedly more efficient. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. The kernel coefficient vector is thus the unknown to be reconstructed following a standard iterative reconstruction process, which converts the FMT reconstruction problem into a kernel coefficient reconstruction problem; the desired fluorophore concentration at each node is then calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
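A compact numpy sketch of the forward-model modification described: a Gaussian k-nearest-neighbor kernel built from anatomical features, the sensitivity matrix A replaced by AK, and a solve for the kernel coefficients. The least-squares call stands in for the iterative reconstruction, and all sizes are hypothetical:

    import numpy as np

    def knn_gaussian_kernel(feats, sigma=1.0, knn=10):
        # Kernel matrix from anatomical features at the mesh nodes; zero out
        # all but each node's k nearest neighbors to keep it sparse-like.
        d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * sigma ** 2))
        far = np.argsort(d2, axis=1)[:, knn:]
        np.put_along_axis(K, far, 0.0, axis=1)
        return K

    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 200))            # sensitivity matrix (hypothetical)
    feats = rng.normal(size=(200, 3))         # anatomical features per node
    y = rng.normal(size=50)                   # boundary measurements

    K = knn_gaussian_kernel(feats)
    alpha, *_ = np.linalg.lstsq(A @ K, y, rcond=None)   # kernel coefficients
    x = K @ alpha                             # fluorophore concentration per node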
Credit scoring analysis using kernel discriminant
NASA Astrophysics Data System (ADS)
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels: normal, Epanechnikov, biweight, and triweight. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. On our data, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reached 0.5556 and 0.5488, respectively.
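A minimal sketch of a kernel discriminant classifier with the Epanechnikov kernel: class-conditional product-kernel densities, with assignment to the class of largest prior-weighted density. The bandwidths and applicant data below are hypothetical:

    import numpy as np

    def epanechnikov(u):
        return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

    def kde(x, sample, h):
        # Product-kernel density estimate at rows of x from one class's sample.
        u = (x[:, None, :] - sample[None, :, :]) / h
        return np.prod(epanechnikov(u), axis=2).mean(axis=1) / np.prod(h)

    def kernel_discriminant(x, class_samples, priors, h):
        # Assign each row of x to the class maximizing prior * density.
        scores = np.stack([p * kde(x, s, h) for s, p in zip(class_samples, priors)])
        return scores.argmax(axis=0)

    rng = np.random.default_rng(0)
    good = rng.normal(0.0, 1.0, size=(80, 4))   # applicant features by class
    bad = rng.normal(1.0, 1.0, size=(40, 4))
    new = rng.normal(0.5, 1.0, size=(10, 4))
    h = np.full(4, 0.5)                          # per-feature bandwidths
    print(kernel_discriminant(new, [good, bad], [2 / 3, 1 / 3], h))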
Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.
2014-01-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
Fernández-Muñoz, J. L.; Zapata-Torrez, M.; Márquez-Herrera, A.; Sánchez-Sinencio, F.; Mendoza-Álvarez, J. G.; Meléndez-Lira, M.; Zelaya-Ángel, O.
2016-01-01
This paper focuses on changes in the particle size distribution (PSD) of nixtamalized corn kernels (NCK) as a function of steeping time (ST). The process to obtain powder or corn flour from NCK was as follows: (i) NCK with different STs were wet-milled in a stone mill, (ii) dehydrated in a flash-type dryer, and (iii) pulverized with a hammer mill and sieved with a 20 mesh. The powder was characterized by measuring the PSD percentage, calcium percentage, peak viscosity at 90°C, and crystallinity percentage. The PSD of the powder as a function of ST was determined by sieving in Ro-TAP equipment, yielding five powder fractions from meshes 30, 40, 60, 80, and 100. The final weight of the PSD obtained from the sieving process follows a Gaussian profile with the maximum corresponding to the average particle obtained with mesh 60. The calcium percentage as a function of ST follows a behavior similar to the weight of the PSD. The study of crystallinity versus mesh number shows that crystallinity decreases for smaller mesh numbers; a similar behavior is observed as steeping time increases, except around ST = 8 h, where gelatinization of starch is observed. Viscosity of the powder samples increases with increasing ST and with decreasing particle size. The ST significantly changes the crystallinity and viscosity values of the powder and, in both cases, a minimum value is observed in the region of 7-9 h. The experimental results show that the viscosity increases (decreases) as the particle size decreases (increases). PMID:27375921
Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D
2010-05-01
The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry, with potentially devastating consequences for corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system while the corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed, and regions of interest in each image were created for all kernels; the regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift was noted among groups of kernels with different aflatoxin contamination levels: the peak moved toward longer wavelengths in the blue region for the highly contaminated kernels and toward shorter wavelengths for the clean kernels. Highly contaminated kernels also had a lower fluorescence peak magnitude than the less contaminated kernels. A general negative correlation was observed between measured aflatoxin and the fluorescence image bands in the blue and green regions; the coefficient of determination, r^2, was 0.72 for the multiple linear regression model. A multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and >=100 ng g^-1 (parts per billion), were significantly different from each other at the 0.01 level of alpha. Classification accuracy under a two-class scheme ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g^-1 was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable for estimating aflatoxin content in individual corn kernels.
Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection
2008-05-01
with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M × N, is given by r = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / √[(Σ_m Σ_n (A_mn − Ā)²)(Σ_m Σ_n (B_mn − B̄)²)] ... correlation-based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force ... by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale ...
Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W
2008-08-01
We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
Habitat use and home range of the Laysan Teal on Laysan Island, Hawaii
Reynolds, M.H.
2004-01-01
The 24-hour habitat use and home range of the Laysan Teal (Anas laysanensis), an endemic dabbling duck in Hawaii, was studied using radio telemetry during 1998-2000. Radios were retained for a mean of 40 days (0-123 d; 73 adult birds radio-tagged). Comparisons of daily habitat use were made for birds in the morning, day, evening, and night. Most birds showed strong evidence of selective habitat use. Adults preferred the terrestrial vegetation (88%), and avoided the lake and wetlands during the day. At night, 63% of the birds selected the lake and wetlands. Nocturnal habitat use differed significantly between the non-breeding and breeding seasons, while the lake and wetland habitats were used more frequently during the non-breeding season. Most individuals showed strong site fidelity during the study, but habitat selection varied between individuals. Mean home range size was 9.78 ha (SE ± 2.6) using the fixed kernel estimator (95% kernel; 15 birds, each with >25 locations). The average minimum convex polygon size was 24 ha (SE ± 5.6). The mean distance traveled between tracking locations was 178 m (SE ± 30.5), with travel distances between points ranging up to 1,649 m. Tracking duration varied from 31-121 days per bird (mean tracking duration 75 days).
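A sketch of a 95% fixed-kernel home-range estimate in Python, using a Gaussian KDE on a padded grid; the original study would have made its own smoothing choices, and the sample locations below are simulated:

    import numpy as np
    from scipy.stats import gaussian_kde

    def kernel_home_range(xy, level=0.95, grid=200):
        # Area of the smallest-density region containing `level` of the
        # estimated utilization distribution (the 95% kernel contour).
        kde = gaussian_kde(xy.T)
        pad = 0.25 * (xy.max(axis=0) - xy.min(axis=0))
        xs = np.linspace(xy[:, 0].min() - pad[0], xy[:, 0].max() + pad[0], grid)
        ys = np.linspace(xy[:, 1].min() - pad[1], xy[:, 1].max() + pad[1], grid)
        X, Y = np.meshgrid(xs, ys)
        Z = kde(np.vstack([X.ravel(), Y.ravel()]))
        cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
        z = np.sort(Z)[::-1]
        cum = np.cumsum(z) * cell
        cut = z[min(np.searchsorted(cum, level), z.size - 1)]
        return (Z >= cut).sum() * cell

    locs = np.random.default_rng(0).normal(size=(60, 2)) * 100  # meters
    print(kernel_home_range(locs))   # area in square meters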
TaGW2, a Good Reflection of Wheat Polyploidization and Evolution.
Qin, Lin; Zhao, Junjie; Li, Tian; Hou, Jian; Zhang, Xueyong; Hao, Chenyang
2017-01-01
Hexaploid wheat consists of three subgenomes, namely, A, B, and D. These well-characterized ancestral genomes also exist at the diploid and tetraploid levels, rendering wheat a good model species for studying polyploidization. Here, we performed intra- and inter-species comparative analyses of wheat and its relatives to dissect polymorphism and differentiation of the TaGW2 genes. Our results showed that genetic diversity of TaGW2 decreased with progression from the diploids to the tetraploids and hexaploids. The strongest selection occurred in the promoter regions of TaGW2-6A and TaGW2-6B. Phylogenetic trees clearly indicated that Triticum urartu and Aegilops speltoides were the donors of the A and B genomes in tetraploid and hexaploid wheats. Haplotypes detected among hexaploid genotypes traced back to the tetraploid level. Fst and π values revealed that the strongest selection on TaGW2 occurred at the tetraploid level rather than in hexaploid wheat, implying that grain size enlargement, especially increased kernel width, mainly occurred in tetraploid genotypes. In addition, relative expression levels of TaGW2s significantly declined from the diploid level to the tetraploids and hexaploids, further indicating that these genes negatively regulate kernel size. Our results also revealed that the polyploidization events possibly caused much stronger differentiation than domestication and breeding.
Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach
NASA Astrophysics Data System (ADS)
Kotaru, Appala Raju; Joshi, Ramesh C.
Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a way of representing a protein that encodes the protein's evolutionary history. In this paper we propose a method for classifying phylogenetic profiles using supervised machine learning, namely support vector machine classification with a radial basis function kernel, to identify functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome: we generated phylogenetic profiles of 2465 yeast genes, using the functional annotations available in the MIPS database. Our experiments show that the radial basis kernel performs similarly to the polynomial kernel on some functional classes, that both are better than the linear and tree kernels, and that overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. These results indicate that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene function from phylogenetic profiles.
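A minimal sklearn sketch of this classification setup: binary phylogenetic profiles, an RBF-kernel SVM, and cross-validated accuracy. The profiles and labels are simulated stand-ins:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    profiles = rng.integers(0, 2, size=(500, 60))   # gene presence/absence
    classes = rng.integers(0, 4, size=500)          # functional categories

    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # RBF kernel as in the study
    print(cross_val_score(clf, profiles, classes, cv=5).mean())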
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2018-09-01
In this study, the vertical distribution of sediment particles in steady, uniform, turbulent open-channel flow over an erodible bed is investigated using a fractional advection-diffusion equation (fADE). Unlike previous fADE-based investigations of suspension distribution, this study employs the modified Atangana-Baleanu-Caputo fractional derivative with a non-singular and non-local kernel. The proposed fADE is solved and an analytical model for the vertical suspension distribution is obtained. The model is validated against experimental as well as field measurements of the Missouri River, the Mississippi River, and the Rio Grande conveyance channel, and is compared with the Rouse equation and another fractional model found in the literature. A quantitative error analysis shows that the proposed model predicts the vertical distribution of particles more accurately than previous models. The validation results show that the fractional model can be applied to all particle sizes with an appropriate choice of the order of the fractional derivative, α. It is also found that, besides particle diameter, the parameter α depends on the mass density of the particle and the shear velocity of the flow. To predict this parameter, a multivariate regression is carried out and a relation is proposed for easy application of the model. From the results for sand and plastic particles, the parameter α is found to be more sensitive to mass density than to particle diameter. The rationality of the dependence of α on particle and flow characteristics is justified physically.
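For reference, the Atangana-Baleanu fractional derivative in the Caputo sense, whose non-singular Mittag-Leffler kernel E_α distinguishes it from the power-law kernel of the classical Caputo derivative (B(α) is a normalization function with B(0) = B(1) = 1); this standard definition is quoted for context:

    {}^{ABC}_{\;\,0}D^{\alpha}_{t} f(t)
      = \frac{B(\alpha)}{1-\alpha} \int_{0}^{t} f'(s)\,
        E_{\alpha}\!\left( -\frac{\alpha (t-s)^{\alpha}}{1-\alpha} \right) ds,
      \qquad 0 < \alpha < 1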
On simplified application of multidimensional Savitzky-Golay filters and differentiators
NASA Astrophysics Data System (ADS)
Shekhar, Chandra
2016-02-01
I propose a simplified approach for multidimensional Savitzky-Golay filtering, to enable its fast and easy implementation in scientific and engineering applications. The proposed method, which is derived from a generalized framework laid out by Thornley (D. J. Thornley, "Novel anisotropic multidimensional convolution filters for derivative estimation and reconstruction" in Proceedings of International Conference on Signal Processing and Communications, November 2007), first transforms any given multidimensional problem into a unique one, by transforming coordinates of the sampled data nodes to unity-spaced, uniform data nodes, and then performs filtering and calculates partial derivatives on the unity-spaced nodes. It is followed by transporting the calculated derivatives back onto the original data nodes by using the chain rule of differentiation. The burden to performing the most cumbersome task, which is to carry out the filtering and to obtain derivatives on the unity-spaced nodes, is almost eliminated by providing convolution coefficients for a number of convolution kernel sizes and polynomial orders, up to four spatial dimensions. With the availability of the convolution coefficients, the task of filtering at a data node reduces merely to multiplication of two known matrices. Simplified strategies to adequately address near-boundary data nodes and to calculate partial derivatives there are also proposed. Finally, the proposed methodologies are applied to a three-dimensional experimentally obtained data set, which shows that multidimensional Savitzky-Golay filters and differentiators perform well in both the internal and the near-boundary regions of the domain.
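On unity-spaced nodes the Savitzky-Golay convolution coefficients follow from a single pseudo-inverse; a 1-D numpy sketch of the idea (scipy.signal.savgol_coeffs provides the same thing in library form, and the multidimensional case generalizes the design matrix):

    import math
    import numpy as np

    def sg_coeffs(window, order, deriv=0):
        # Least-squares fit of a degree-`order` polynomial over the window;
        # row `deriv` of the pseudo-inverse gives the d^deriv/dx^deriv
        # estimate at the central node (unity node spacing assumed).
        half = window // 2
        x = np.arange(-half, half + 1, dtype=float)
        A = np.vander(x, order + 1, increasing=True)
        return math.factorial(deriv) * np.linalg.pinv(A)[deriv]

    y = np.sin(np.linspace(0, 3, 101)) + 0.01 * np.random.default_rng(0).normal(size=101)
    c = sg_coeffs(window=11, order=3, deriv=0)        # smoothing coefficients
    y_smooth = np.convolve(y, c[::-1], mode="same")   # valid away from the boundaries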
Alternative beam configuration for a Canadian Ka-band satellite system
NASA Technical Reports Server (NTRS)
Hindson, Daniel J.; Caron, Mario
1995-01-01
Satellite systems operating in the Ka-band have been proposed to offer wide band personal communications services to fixed earth terminals employing small aperture antennas as well as to mobile terminals. This requirement to service a small aperture antenna leads to a satellite system utilizing small spot beams. The traditional approach is to cover the service area with uniform spot beams which have been sized to provide a given grade of service at the worst location over the service area and to place them in a honeycomb pattern. In the lower frequency bands this approach leads to a fairly uniform grade of service over the service area due to the minimal effects of rain on the signals. At Ka-band, however, the effects of rain are quite significant. Using this approach over a large service area (e.g. Canada) where the geographic distribution of rain impairment varies significantly yields an inefficient use of satellite resources to provide a uniform grade of service. An alternative approach is to cover the service area using more than one spot beam size in effect linking the spot beam size to the severity of the rain effects in a region. This paper demonstrates how for a Canadian Ka-band satellite system, that the use of two spot beam sizes can provide a more uniform grade of service across the country as well as reduce the satellite payload complexity over a design utilizing a single spot beam size.
Evidence-based Kernels: Fundamental Units of Behavioral Influence
Biglan, Anthony
2008-01-01
This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600
Integrating the Gradient of the Thin Wire Kernel
NASA Technical Reports Server (NTRS)
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form ...
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
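As an illustration of the kernel-approximation step described above, the sketch below builds explicit feature maps with the Nyström method and with random Fourier features using scikit-learn, then fits a linear model on them. This is a hedged sketch: the synthetic dataset, the hyperparameters, and the plain linear SVM (standing in for the paper's pairwise squared hinge-loss ranking objective) are illustrative assumptions, not the authors' setup.

```python
# Kernel approximation with Nystroem and random Fourier features (sklearn),
# followed by a linear model on the explicit feature map -- the same idea
# used above to avoid forming the full kernel matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Nystroem: low-rank approximation built from a subset of training points.
nystroem_svm = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.1, n_components=100, random_state=0),
    LinearSVC(),
)

# Random Fourier features: Monte Carlo approximation of the RBF kernel.
rff_svm = make_pipeline(
    RBFSampler(gamma=0.1, n_components=100, random_state=0),
    LinearSVC(),
)

for name, model in [("Nystroem", nystroem_svm), ("RFF", rff_svm)]:
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))
```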
Marjanovic, Jovana; Mulder, Han A; Khaw, Hooi L; Bijma, Piter
2016-06-10
Animal breeding programs have been very successful in improving the mean levels of traits through selection. However, in recent decades, reducing the variability of trait levels between individuals has become a highly desirable objective. Reaching this objective through genetic selection requires that there is genetic variation in the variability of trait levels, a phenomenon known as genetic heterogeneity of environmental (residual) variance. The aim of our study was to investigate the potential for genetic improvement of uniformity of harvest weight and body size traits (length, depth, and width) in the genetically improved farmed tilapia (GIFT) strain. In order to quantify the genetic variation in uniformity of traits and estimate the genetic correlations between level and variance of the traits, double hierarchical generalized linear models were applied to individual trait values. Our results showed substantial genetic variation in uniformity of all analyzed traits, with genetic coefficients of variation for residual variance ranging from 39 to 58 %. Genetic correlation between trait level and variance was strongly positive for harvest weight (0.60 ± 0.09), moderate and positive for body depth (0.37 ± 0.13), but not significantly different from 0 for body length and width. Our results on the genetic variation in uniformity of harvest weight and body size traits show good prospects for the genetic improvement of uniformity in the GIFT strain. A high and positive genetic correlation was estimated between level and variance of harvest weight, which suggests that selection for heavier fish will also result in more variation in harvest weight. Simultaneous improvement of harvest weight and its uniformity will thus require index selection.
Improved dot size uniformity and luminescence of InAs quantum dots on InP substrate
NASA Technical Reports Server (NTRS)
Qiu, Y.; Uhl, D.
2002-01-01
InAs self-organized quantum dots have been grown in an InGaAs quantum well on InP substrates by metalorganic vapor phase epitaxy. Atomic force microscopy confirmed quantum dot formation with a dot density of 3×10^10 cm^-2. Improved dot size uniformity and strong room-temperature photoluminescence up to 2 microns were observed after modifying the InGaAs well.
NASA Astrophysics Data System (ADS)
Zhang, Jing; Tian, Yu; Ling, Lu-Ting; Yin, Su-Na; Wang, Cai-Feng; Chen, Su
2014-12-01
Versatile hydrogel-based nanocrystal (NC) microreactors were designed in this work for the construction of uniform fluorescent colloidal photonic crystal (CPC) supraballs. The hydrogel-based microspheres, with sizes ranging from 150 to 300 nm, were prepared by seeded copolymerization of acrylic acid and 2-hydroxyethyl methacrylate with micrometer-sized PS seed particles. As an independent NC microreactor, the as-synthesized hydrogel microsphere can effectively capture guest cadmium ions owing to the abundant carboxyl groups inside. Upon subsequent introduction of chalcogenides, in situ generation of high-uptake NCs smaller than 5 nm was realized. Additionally, with the aid of a microfluidic device, the as-obtained NC-latex hybrids can be further self-assembled into bifunctional CPC supraballs bearing brilliant structural colors and uniform fluorescence. This research offers an alternative way to finely bind CPCs with NCs, which will facilitate progress in the fields of self-assembled functional colloids and photonic materials.
Code of Federal Regulations, 2011 CFR
2011-04-01
... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...
Code of Federal Regulations, 2013 CFR
2013-04-01
... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...
Code of Federal Regulations, 2012 CFR
2012-04-01
... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels--the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits properties in the marginals similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
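For orientation, a schematic of the construction (hedged: normalization and sign conventions vary across the literature and may differ from the paper's):

```latex
% Traditional Wigner function: a Fourier kernel with real momentum p.
W(x,p) = \int_{-\infty}^{\infty}
  \psi^{*}\!\left(x - \tfrac{y}{2}\right)\,
  \psi\!\left(x + \tfrac{y}{2}\right)\, e^{-i p y}\, dy

% Laplace-kernel variant (schematic): the Fourier kernel is replaced by a
% Laplace transform kernel, so the conjugate variable s may be complex,
% which is what admits evanescent components such as surface plasmon
% polariton fields.
W_{L}(x,s) = \int \psi^{*}\!\left(x - \tfrac{y}{2}\right)\,
  \psi\!\left(x + \tfrac{y}{2}\right)\, e^{-s y}\, dy
```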
Online learning control using adaptive critic designs with sparse kernel machines.
Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo
2013-05-01
In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
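The approximately-linear-dependence (ALD) test underlying the sparsification step can be sketched as follows (a hedged sketch after Engel et al.'s kernel RLS; the RBF kernel, the threshold nu, and the data are illustrative assumptions):

```python
# ALD-based dictionary sparsification: keep a sample only if its
# feature-space image is not well approximated by the current dictionary.
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def ald_dictionary(samples, nu=0.1, gamma=1.0):
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(u, v, gamma) for v in dictionary] for u in dictionary])
        k_x = np.array([rbf(u, x, gamma) for u in dictionary])
        a = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k_x)
        delta = rbf(x, x, gamma) - k_x @ a  # squared feature-space residual
        if delta > nu:                      # not approximately dependent: keep
            dictionary.append(x)
    return np.array(dictionary)

rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 2))
print("dictionary size:", len(ald_dictionary(samples)))
```

A sample is added to the dictionary only when its feature-space image cannot be approximated, to within nu, by the current dictionary; this is what keeps the kernel network from growing linearly with the training data.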
Home range and survival of breeding painted buntings on Sapelo Island, Georgia
Springborn, E.G.; Meyers, J.M.
2005-01-01
The southeastern United States population of the painted bunting (Passerina ciris) has decreased approximately 75% from 1966-1996 based on Breeding Bird Survey trends. Partners in Flight guidelines recommend painted bunting conservation as a high priority with a need for management by state and federal agencies. Basic information on home range and survival of breeding painted buntings will provide managers with required habitat types and estimates of land areas necessary to maintain minimum population sizes for this species. We radiotracked after-second-year male and after-hatching-year female buntings on Sapelo Island, Georgia, during the breeding seasons (late April-early August) of 1997 and 1998. We used the animal movement extension in ArcView to determine fixed-kernel home ranges in unmanaged maritime shrub and a managed 60-80-year-old pine (Pinus spp.)-oak (Quercus spp.) forest. Using the Kaplan-Meier method, we estimated an adult breeding-season survival of 1.00 for males (n = 36) and 0.94 (SE = 0.18) for females (n = 27). Painted bunting home ranges were smaller in unmanaged maritime shrub (female: kernel mean = 3.5 ha [95% CI: 2.5-4.5]; male: kernel mean = 3.1 ha [95% CI: 2.3-3.9]) than in the managed pine-oak forest (female: kernel mean = 4.7 ha [95% CI: 2.8-6.6]; male: kernel mean = 7.0 ha [95% CI: 4.9-9.1]). Buntings nesting in the managed pine-oak forest flew long distances (>= 300 m) to forage in salt marshes, freshwater wetlands, and moist forest clearings. In maritime shrub, buntings occupied a compact area and rarely moved long distances. The painted bunting population of Sapelo Island requires conservation of maritime shrub as potential optimum nesting habitat and management of nesting habitat in open-canopy pine-oak sawtimber forests by periodic prescribed fire (every 4-6 years) and timber thinning, within a landscape that contains salt marsh or freshwater wetland openings within 700 m of those forests.
NASA Astrophysics Data System (ADS)
Zheng, Yuese; Solomon, Justin; Choudhury, Kingshuk; Marin, Daniele; Samei, Ehsan
2017-03-01
Texture analysis for lung lesions is sensitive to changing imaging conditions, but these effects are not well understood, in part due to a lack of ground-truth phantoms with realistic textures. The purpose of this study was to explore the accuracy and variability of texture features across imaging conditions by comparing imaged texture features to voxel-based 3D-printed textured lesions for which the true values are known. The seven features of interest were based on the Grey Level Co-occurrence Matrix (GLCM). The lesion phantoms were designed with three shapes (spherical, lobulated, and spiculated), two textures (homogeneous and heterogeneous), and two sizes (diameter < 1.5 cm and 1.5 cm < diameter < 3 cm), resulting in 24 lesions (with a second replica of each). The lesions were inserted into an anthropomorphic thorax phantom (Multipurpose Chest Phantom N1, Kyoto Kagaku) and imaged using a commercial CT system (GE Revolution) at three CTDI levels (0.67, 1.42, and 5.80 mGy), three reconstruction algorithms (FBP, IR-2, IR-4), three reconstruction kernel types (standard, soft, edge), and two slice thicknesses (0.6 mm and 5 mm). A repeat scan was also acquired. Texture features were extracted from these images and compared to the ground-truth feature values by percent relative error. The variability across imaging conditions was calculated as the standard deviation across a given imaging condition for all heterogeneous lesions. The results indicated that the acquisition method has a significant influence on the accuracy and variability of extracted features; as such, feature quantities are highly susceptible to imaging parameter choices. The most influential parameters were slice thickness and reconstruction kernel. Thin slices and the edge reconstruction kernel overall produced more accurate and more repeatable results. Some features (e.g., Contrast) were more accurately quantified under conditions that render higher spatial frequencies (e.g., thinner slices and sharp kernels), while others (e.g., Homogeneity) were more accurately quantified under conditions that render smoother images (e.g., higher dose and smoother kernels). Care should be exercised in relating texture features between cases with varied acquisition protocols, with the need for cross-calibration depending on the feature of interest.
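The GLCM features studied above can be computed with scikit-image; a minimal sketch (the synthetic ROI and parameter choices are illustrative; scikit-image >= 0.19 spells the functions graycomatrix/graycoprops, while earlier releases use "grey"):

```python
# GLCM texture features (e.g., Contrast, Homogeneity) from an 8-bit ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in lesion ROI

# GLCM over 4 directions at distance 1, symmetric and normalized.
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

for feature in ("contrast", "homogeneity", "energy", "correlation"):
    print(feature, graycoprops(glcm, feature).mean())  # averaged over angles
```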
Microstructural Evaluation of Forging Parameters for Superalloy Disks
NASA Technical Reports Server (NTRS)
Falsey, John R.
2004-01-01
Forgings of a nickel-base superalloy were formed under several different strain rates and forging temperatures. Samples were taken from each forging condition to find the ASTM grain size and the as-large-as (ALA) grain size. The specimens were mounted in bakelite, polished, and etched, and optical microscopy was then used to determine grain size. The specimens' ASTM grain sizes from each forging condition were plotted against strain rate, forging temperature, and presoak time. Grain sizes increased with increasing forging temperature. Grain sizes also increased with decreasing strain rates and increasing forging presoak time. The ALA grain was determined for each forging condition using the ASTM standard method. Each ALA was compared with the ASTM grain size of the corresponding forging condition to determine whether the grain sizes were uniform. The forging condition with a strain rate of 0.03/s and supersolvus heat treatment produced nonuniform grains, indicated by critical grain growth. Other anomalies are noted as well.
NASA Astrophysics Data System (ADS)
Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas
2015-05-01
Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy based on a 20 ppb threshold, utilizing the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
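The classification pipeline described above (PCA compression followed by K-nearest neighbors against a 20 ppb threshold) can be sketched as follows; the synthetic spectra and hyperparameters are illustrative assumptions, not the study's data:

```python
# PCA + KNN classification of per-kernel spectra against a 20 ppb label.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
spectra = rng.normal(size=(300, 200))           # 300 kernels x 200 wavelengths
aflatoxin_ppb = rng.gamma(2.0, 15.0, size=300)  # synthetic concentrations
labels = (aflatoxin_ppb > 20).astype(int)       # contaminated vs healthy

model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
print("CV accuracy:", cross_val_score(model, spectra, labels, cv=5).mean())
```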
Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme
Warfield, Colleen Y.; Gilchrist, David G.
1999-01-01
Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin
2015-10-01
The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as high time consumption, and it can be utilized with any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
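A hedged sketch of a DEA-driven kernel-parameter search with SciPy; here a cross-validated SVM score stands in for the paper's KFKT discrimination objective, and the bounds and dataset are illustrative assumptions:

```python
# Differential evolution over a Gaussian kernel width (log scale).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def negative_cv_score(params):
    gamma = 10.0 ** params[0]          # search gamma on a log scale
    clf = SVC(kernel="rbf", gamma=gamma)
    return -cross_val_score(clf, X, y, cv=3).mean()

result = differential_evolution(negative_cv_score, bounds=[(-4, 2)],
                                seed=0, maxiter=20, tol=1e-3)
print("best gamma:", 10.0 ** result.x[0], "CV accuracy:", -result.fun)
```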
Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou
2011-06-01
As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.
Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve
2008-04-01
A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.
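A minimal sketch of the PLS calibration step with scikit-learn (synthetic spectra and oil values stand in for the single-kernel NIR data; the component count is an illustrative assumption):

```python
# PLS regression of per-kernel oil content from NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.normal(size=(150, 400))  # 150 kernels x 400 NIR bands
oil_pct = 3 + spectra[:, :5].sum(axis=1) * 0.2 + rng.normal(0, 0.1, 150)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, oil_pct, random_state=0)
pls = PLSRegression(n_components=5).fit(X_train, y_train)
print("R^2 on held-out kernels:", pls.score(X_test, y_test))
```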
Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming
2014-01-01
To evaluate the improvement offered by the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with a sharp kernel, and to perform a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, with both a medium kernel and a sharp kernel applied. Image noise and stent diameter were investigated. Image noise was measured in both the background vessel and the in-stent lumen as an objective image evaluation. An image noise score and a stent score were used as subjective image evaluations. The CTCA images reconstructed with IRIS showed significant noise reduction compared to the CTCA images reconstructed using the FBP technique in both the background vessel and the in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% in medium kernel (P
Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen
2016-07-07
Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with highly nonlinear distributions. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC and can remedy this drawback of SRC. KSRC requires the use of a predetermined kernel function, and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low-dimensional subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be found efficiently based on the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with state-of-the-art methods.
Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.
Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe
2018-02-19
Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel or its normalized form with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We then evaluated our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles, and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
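The kernels combined above can be sketched directly; the Min kernel and the standard TPPK and MLPK constructions are shown below on stand-in feature vectors (the paper's normalized variants are omitted):

```python
# Min kernel plus the standard pairwise kernel constructions.
import numpy as np

def min_kernel(x, y):
    # histogram-intersection style similarity between two feature vectors
    return np.minimum(x, y).sum()

def tppk(a, b, c, d, k=min_kernel):
    # TPPK: symmetrized product similarity between pairs (a,b) and (c,d)
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def mlpk(a, b, c, d, k=min_kernel):
    # MLPK: squared-difference form (after Vert et al.)
    return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

rng = np.random.default_rng(0)
p1, p2, p3, p4 = rng.random((4, 30))  # stand-in protein feature vectors
print("TPPK:", tppk(p1, p2, p3, p4), "MLPK:", mlpk(p1, p2, p3, p4))
```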
Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.
Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang
2017-07-01
Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions has not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two different locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole-genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variation, respectively. Increases in KW and KT and reductions in the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs should help realize these breeding goals.
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
The effect of Au amount on size uniformity of self-assembled Au nanoparticles
NASA Astrophysics Data System (ADS)
Chen, S.-H.; Wang, D.-C.; Chen, G.-Y.; Chen, K.-Y.
2008-03-01
Self-assembled fabrication of nanostructures, a long-sought approach in fabrication engineering, is the ultimate goal of this research. Previous research showed that the size of self-assembled gold nanoparticles can be controlled through the mole ratio between AuCl4- and thiol. In this study, the moles of Au were fixed and only the moles of thiol were adjusted. Five different Au/S mole ratios (1:1/16, 1:1/8, 1:1, 1:8, and 1:16) were investigated for their effect on size uniformity. The size distributions of the gold nanoparticles were analyzed with Mac-View analysis software, and HR-TEM was used to image the self-assembled gold nanoparticles. The results again showed that the higher the mole ratio between AuCl4- and thiol, the larger the self-assembled gold nanoparticles. With the moles of Au fixed, the most homogeneous size distribution was obtained at an AuCl4-:thiol mole ratio of 1:1/8. The obtained nanoparticles could be used, for example, in uniform surface nanofabrication, leading to the fabrication of ordered arrays of quantum dots.
Method to produce large, uniform hollow spherical shells
Hendricks, C.D.
1983-09-26
The invention is a method to produce large uniform hollow spherical shells by (1) forming uniform size drops of heat decomposable or vaporizable material, (2) evaporating the drops to form dried particles, (3) coating the dried particles with a layer of shell forming material and (4) heating the composite particles to melt the outer layer and to decompose or vaporize the inner particle to form an expanding inner gas bubble. The expanding gas bubble forms the molten outer layer into a shell of relatively large diameter. By cycling the temperature and pressure on the molten shell, nonuniformities in wall thickness can be reduced. The method of the invention is utilized to produce large uniform spherical shells, in the millimeter to centimeter diameter size range, from a variety of materials and of high quality, including sphericity, concentricity and surface smoothness, for use as laser fusion or other inertial confinement fusion targets as well as other applications.
Adaptive kernel function using line transect sampling
NASA Astrophysics Data System (ADS)
Albadareen, Baker; Ismail, Noriszura
2018-04-01
The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is conducted to compare the performance of the proposed estimators with the classical kernel estimators.
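A hedged sketch of the classical kernel estimate of f(0) that the proposed adaptation is compared against, using a Gaussian KDE with reflection about zero (simulated half-normal detection distances; the bandwidth is SciPy's default rather than a line-transect-specific choice):

```python
# Classical kernel estimate of f(0) for line-transect perpendicular distances.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
distances = np.abs(rng.normal(0, 10.0, size=200))  # half-normal, sigma = 10

# Reflect about 0 so the KDE does not leak probability mass to x < 0.
reflected = np.concatenate([distances, -distances])
kde = gaussian_kde(reflected)
f0_hat = 2 * kde(np.array([0.0]))[0]

# For a half-normal, the true f(0) is sqrt(2/pi)/sigma ~ 0.0798 here.
print("estimated f(0):", f0_hat)
```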
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
Radial particle-size segregation during packing of particulates into cylindrical containers
Ripple, C.D.; James, R.V.; Rubin, J.
1973-01-01
In a series of experiments, soil materials were placed in long cylindrical containers, using various packing procedures. Soil columns produced by deposition and simultaneous vibratory compaction were dense and axially uniform, but showed significant radial segregation of particle sizes. Similar results were obtained with deposition and simultaneous impact-type compaction when the impacts resulted in significant container "bouncing". The latter procedure, modified to minimize "bouncing", produced dense, uniform soil columns showing little radial particle-size segregation. Other procedures tested (deposition alone and deposition followed by compaction) did not result in radial segregation, but produced columns showing either relatively low or axially nonuniform densities. Current data suggest that radial particle-size segregation is mainly due to vibration-induced particle circulation in which particles of various sizes have different circulation rates and paths. © 1973.
7 CFR 868.254 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall be...
7 CFR 51.2090 - Serious damage.
Code of Federal Regulations, 2010 CFR
2010-01-01
... defect which makes a kernel or piece of kernel unsuitable for human consumption, and includes decay...: Shriveling when the kernel is seriously withered, shrunken, leathery, tough or only partially developed: Provided, that partially developed kernels are not considered seriously damaged if more than one-fourth of...
Anisotropic hydrodynamics with a scalar collisional kernel
NASA Astrophysics Data System (ADS)
Almaalol, Dekrayat; Strickland, Michael
2018-04-01
Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λϕ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.
Ideal regularization for learning kernels from labels.
Pan, Binbin; Lai, Jianhuang; Shen, Lixin
2014-08-01
In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
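One simple way to "incorporate the labels into a standard kernel", in the spirit of the first application above, is to blend a base kernel with the label-derived ideal kernel; a hedged sketch (the blend weight alpha and the alignment diagnostic are illustrative stand-ins, not the paper's linear regularizer):

```python
# Blending a base RBF kernel with the ideal kernel built from labels.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def alignment(K1, K2):
    # Frobenius cosine between kernel matrices (kernel-target alignment).
    return (K1 * K2).sum() / (np.linalg.norm(K1) * np.linalg.norm(K2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)

K_base = rbf_kernel(X, gamma=0.5)
K_ideal = (y[:, None] == y[None, :]).astype(float)  # 1 for same-class pairs

alpha = 0.3  # illustrative mixing weight toward the label information
K_adapted = (1 - alpha) * K_base + alpha * K_ideal  # convex mix stays PSD

print("alignment before:", alignment(K_base, K_ideal))
print("alignment after: ", alignment(K_adapted, K_ideal))
```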
NASA Astrophysics Data System (ADS)
Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.
2015-03-01
Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.
NASA Astrophysics Data System (ADS)
Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias
2017-11-01
The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and its shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects of the “fat tails” of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risks in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the “noise kernel” — an ellipsoidal cloud of points in the space of asset returns. The size of the ellipse is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of the fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of drawdown risks. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risks involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.
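A hedged sketch of the noise-kernel separation: the ellipsoidal cloud is implemented as a Mahalanobis-distance threshold, with interior returns feeding the Gaussian fluctuation risk and exterior returns the drawdown risk (the threshold and the tail summary are illustrative choices, not the paper's estimators):

```python
# Splitting returns into a Gaussian "noise kernel" core and a drawdown tail.
import numpy as np

rng = np.random.default_rng(0)
# Fat-tailed synthetic daily returns for 3 assets (Student-t, df = 3).
returns = rng.standard_t(df=3, size=(1000, 3)) * 0.01

mu = returns.mean(axis=0)
cov = np.cov(returns, rowvar=False)
inv_cov = np.linalg.inv(cov)

centered = returns - mu
mahal_sq = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)

threshold = 3.0 ** 2        # kernel size: points within 3 "sigmas"
inside = mahal_sq <= threshold

fluctuation_cov = np.cov(returns[inside], rowvar=False)  # Gaussian core risk
drawdown_tail = returns[~inside]                         # tail observations
print("kernel fraction:", inside.mean())
print("worst tail return per asset:", drawdown_tail.min(axis=0))
```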
Tildesley, Michael J.; Smith, Gary; Keeling, Matt J.
2013-01-01
In this paper, we simulate outbreaks of foot-and-mouth disease in the Commonwealth of Pennsylvania, USA – after the introduction of a state-wide movement ban – as they might unfold in the presence of mitigation strategies. We have adapted a model previously used to investigate FMD control policies in the UK to examine the potential for disease spread given an infection seeded in each county in Pennsylvania. The results are highly dependent upon the county of introduction and the spatial scale of transmission. Should the transmission kernel be identical to that for the UK, the epidemic impact is limited to fewer than 20 premises, regardless of the county of introduction. However, for wider kernels where infection can spread further, outbreaks seeded in or near the county with highest density of premises and animals result in large epidemics (>150 premises). Ring culling and vaccination reduce epidemic size, with the optimal radius of the rings being dependent upon the county of introduction. Should the kernel width exceed a given county-dependent threshold, ring culling is unable to control the epidemic. We find that a vaccinate-to-live policy is generally preferred to ring culling (in terms of reducing the overall number of premises culled), indicating that well-targeted control can dramatically reduce the risk of large scale outbreaks of foot-and-mouth disease occurring in Pennsylvania. PMID:22169708
Reduced kernel recursive least squares algorithm for aero-engine degradation prediction
NASA Astrophysics Data System (ADS)
Zhou, Haowen; Huang, Jinquan; Lu, Feng
2017-10-01
Kernel adaptive filters (KAFs) generate a linear growing radial basis function (RBF) network with the number of training samples, thereby lacking sparseness. To deal with this drawback, traditional sparsification techniques select a subset of original training data based on a certain criterion to train the network and discard the redundant data directly. Although these methods curb the growth of the network effectively, it should be noted that information conveyed by these redundant samples is omitted, which may lead to accuracy degradation. In this paper, we present a novel online sparsification method which requires much less training time without sacrificing the accuracy performance. Specifically, a reduced kernel recursive least squares (RKRLS) algorithm is developed based on the reduced technique and the linear independency. Unlike conventional methods, our novel methodology employs these redundant data to update the coefficients of the existing network. Due to the effective utilization of the redundant data, the novel algorithm achieves a better accuracy performance, although the network size is significantly reduced. Experiments on time series prediction and online regression demonstrate that RKRLS algorithm requires much less computational consumption and maintains the satisfactory accuracy performance. Finally, we propose an enhanced multi-sensor prognostic model based on RKRLS and Hidden Markov Model (HMM) for remaining useful life (RUL) estimation. A case study in a turbofan degradation dataset is performed to evaluate the performance of the novel prognostic approach.
An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation.
Hoseini, Farnaz; Shahbahrami, Asadollah; Bayat, Peyman
2018-02-27
Image segmentation is one of the most common steps in digital image processing, classifying a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors of different shapes, sizes, brightnesses and textures can appear anywhere in the brain. These complexities are the reason to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN contains two parts: the architecture and the learning algorithms. The architecture and the learning algorithms are used to design a network model and to optimize parameters for the network training phase, respectively. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels reproduces the effect of larger kernels with a smaller number of parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BRATS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84, respectively. The learning algorithm includes task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain good performance, and the experimental results show that the proposed DCNN increases segmentation accuracy compared to previous techniques.
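The small-kernel argument can be made concrete: without the intervening nonlinearities, two stacked 3 × 3 convolutions compose into a single 5 × 5 kernel, so the stack covers the same receptive field with 2 × (3 × 3) = 18 weights per channel instead of 25. A minimal numerical check (it is the nonlinearities between layers that make the stack strictly more expressive):

```python
# Composing two 3x3 kernels yields the receptive field of one 5x5 kernel.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
k1 = rng.normal(size=(3, 3))
k2 = rng.normal(size=(3, 3))

# The composition of the two 3x3 kernels is itself a 5x5 kernel.
composed = convolve2d(k1, k2, mode="full")
print("effective kernel shape:", composed.shape)  # (5, 5)

image = rng.normal(size=(32, 32))
two_step = convolve2d(convolve2d(image, k1, mode="full"), k2, mode="full")
one_step = convolve2d(image, composed, mode="full")
print("max difference:", np.abs(two_step - one_step).max())  # ~1e-15
```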
Oddou-Muratorio, Sylvie; Klein, Etienne K; Austerlitz, Frédéric
2005-12-01
Knowing the extent of gene movements from parents to offspring is essential to understand the potential of a species to adapt rapidly to a changing environment, and to design appropriate conservation strategies. In this study, we develop a nonlinear statistical model to jointly estimate the pollen dispersal kernel and the heterogeneity in fecundity among phenotypically or environmentally defined groups of males. This model uses genotype data from a sample of fruiting plants, a sample of seeds harvested from each of these plants, and all males within a circumscribed area. We apply this model to a scattered, entomophilous woody species, Sorbus torminalis (L.) Crantz, within a natural population covering more than 470 ha. We estimate a high heterogeneity in male fecundity among ecological groups, due both to phenotype (size of the trees and flowering intensity) and to landscape factors (stand density within the neighbourhood). We also show that fat-tailed kernels are the most appropriate to depict this species' considerable capacity for long-distance pollen dispersal. Finally, our results reveal that the spatial position of a male with respect to females affects its mating success as much as the ecological determinants of male fecundity do. Our study thus stresses the value of accounting for the dispersal kernel when estimating heterogeneity in male fecundity, and reciprocally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rank
1942-03-26
When the oven was disassembled after the test, small kernels of porous material were found in both the upper and lower portions of the oven to a depth of about 2 m. The kernels were of various sizes up to 4 mm. From 1,300 metric tons of dry coal, there were 330 kg, or a residue of 0.025% of the coal input. These kernels brought to mind deposits of spheroidal material termed ''caviar'', since they had rounded tops; however, they were irregularly long. After multiaxis micrography, no growth rings were found as in Leuna's lignite caviar. So it was a question of small particles consisting almost totally of ash. The majority of the composition was Al, Fe, Na, silicic acid, S and Cl. The sulfur was found to be in sulfide form and the Cl in a volatile form. The remains did not take on caviar form since the CaO content was slight. The Al, Fe, Na, silicic acid, S and Cl were concentrated relative to coal ash and apparently originate from the catalysts (FeSO4, Bayermasse, and Na2S). It was notable that the Cl content was so high. 2 graphs, 1 table
The pre-image problem in kernel methods.
Kwok, James Tin-yau; Tsang, Ivor Wai-hung
2004-11-01
In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
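The task can be illustrated with scikit-learn's kernel PCA denoising; note that its inverse_transform learns a regression-based pre-image map, not the distance-constraint method proposed above, so this sketch only frames the problem the paper solves:

```python
# Kernel PCA denoising: project to feature space, then recover pre-images.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, _ = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)
X_noisy = X + np.random.default_rng(0).normal(0, 0.1, X.shape)

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=10,
                 fit_inverse_transform=True, alpha=0.1)
kpca.fit(X_noisy)

# Project onto kernel principal components, then map back to input space.
X_denoised = kpca.inverse_transform(kpca.transform(X_noisy))
print("mean displacement after denoising:",
      np.linalg.norm(X_denoised - X_noisy, axis=1).mean())
```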
Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.
Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I
2016-03-01
The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.
Development of a kernel function for clinical data.
Daemen, Anneleen; De Moor, Bart
2009-01-01
For most diseases and examinations, clinical data such as age, gender and medical history guide clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of the relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for such data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function compared to the linear kernel function.
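A hedged sketch of the clinical kernel construction as described in this line of work: each variable contributes a similarity in [0, 1] ((range - |x - z|)/range for continuous or ordinal variables, an equality indicator for nominal ones) and the kernel averages them; the variable typing and ranges below are hypothetical, so consult the paper for the exact formulation:

```python
# Per-variable similarities averaged into a clinical kernel value.
import numpy as np

def clinical_kernel(x, z, var_types, ranges):
    sims = []
    for xi, zi, t, r in zip(x, z, var_types, ranges):
        if t == "nominal":
            sims.append(1.0 if xi == zi else 0.0)
        else:  # continuous or ordinal: linear similarity over observed range
            sims.append((r - abs(xi - zi)) / r)
    return float(np.mean(sims))

# Two hypothetical patients: (age, gender, tumor grade)
var_types = ("continuous", "nominal", "ordinal")
ranges = (80.0, None, 3.0)  # observed range per non-nominal variable
patient_a = (62, "F", 2)
patient_b = (47, "F", 3)
print("similarity:", clinical_kernel(patient_a, patient_b, var_types, ranges))
```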
Manycore Performance-Portability: Kokkos Multidimensional Array Library
Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...
2012-01-01
Large, complex scientific and engineering application codes have a significant investment in computational kernels to implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can differ between manycore devices – potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu
2017-12-15
Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter remains a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that, for suitable kernel parameters, the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimal parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
NASA Astrophysics Data System (ADS)
Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo
2018-02-01
Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application for CT kernel conversion, which converts a CT image reconstructed with a sharp kernel into one with a standard kernel, and evaluates its impact on reducing the variability of a pulmonary imaging biomarker, the emphysema index (EI). Forty low-dose chest CT exams, obtained at 120 kVp, 40 mAs, and 1 mm thickness and reconstructed with two kernels (B30f, B50f), were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to the input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting CT images from the sharp kernel to the standard kernel, with the criterion of minimizing the mean squared error between the network output and the target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared across the B50f, B30f, and converted B50f data sets. The effect of kernel conversion was evaluated with the mean and standard deviation of the pair-wise differences in EI. The population mean of RA950 was 27.65 ± 7.28% for the B50f data set, 10.82 ± 6.71% for the B30f data set, and 8.87 ± 6.20% for the converted B50f data set. The mean of the pair-wise absolute differences in RA950 between B30f and B50f was reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying deep learning to CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI even when patient CT scans were performed with different kernels.
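A minimal Keras sketch of such a fully convolutional kernel-conversion network trained with an MSE criterion; the depth, widths, and synthetic B50f/B30f stand-in data are illustrative assumptions, not the paper's exact symmetric architecture:

```python
# Fully convolutional image-to-image network: sharp-kernel in, standard out.
import numpy as np
from tensorflow.keras import layers, models

def build_kernel_converter():
    inp = layers.Input(shape=(None, None, 1))     # full-resolution CT slice
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same")(x)  # converted (B30f-like) slice
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")   # MSE against paired targets
    return model

# Synthetic stand-ins for paired B50f (input) and B30f (target) slices.
x_train = np.random.rand(8, 128, 128, 1).astype("float32")
y_train = np.random.rand(8, 128, 128, 1).astype("float32")
model = build_kernel_converter()
model.fit(x_train, y_train, epochs=1, batch_size=2, verbose=0)
```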
Metabolic network prediction through pairwise rational kernels.
Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian
2014-09-26
Metabolic networks are represented by sets of metabolic pathways. Metabolic pathways are series of biochemical reactions in which the product (output) of one reaction serves as the substrate (input) of another. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on the annotation of the genes, which propagates error accumulation when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., pairwise support vector machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been used effectively in problems involving large amounts of sequence information, such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster than other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy values improve, while maintaining lower construction and execution times. The power of using kernels is that almost any sort of data can be represented with a kernel; therefore, completely disparate types of data can be combined to add power to kernel-based machine learning methods. When we compared our proposal using PRKs with other similar kernels, the execution times decreased with no compromise of accuracy. We also showed that combining PRKs with other kernels that include evolutionary information can further improve accuracy. As our proposal can use any type of sequence data, genes do not need to be properly annotated, avoiding the accumulation of errors caused by incorrect previous annotations.
Granfeldt, Y; Wu, X; Björck, I
2006-01-01
To determine the possible differences in glycaemic index (GI) depending on (1) the analytical method used to calculate the 'available carbohydrate' load, that is, using carbohydrates by difference (total carbohydrate by difference, minus dietary fibre (DF)) vs an available starch basis (total starch minus resistant starch (RS)) for a food rich in intrinsic RS, and (2) the effect of the GI characteristics and/or the content of indigestible carbohydrates (RS and DF) of the evening meal prior to GI testing the following morning. Blood glucose and serum insulin responses were studied after subjects consumed (1) two portion sizes of barley kernels rich in intrinsic RS (15.2%, total starch basis) and (2) a standard breakfast following three different evening meals varying in GI and/or indigestible carbohydrates: pasta, barley kernels and white wheat bread, respectively. Healthy adults with normal body mass index. (1) Increasing the portion size of barley kernels from 79.6 g (50 g 'available carbohydrates') to 93.9 g (50 g available starch) to adjust for its RS content did not significantly affect the GI or insulin index (II). (2) The low-GI barley evening meal, as opposed to the white wheat bread and pasta evening meals, reduced the postprandial glycaemic and insulinaemic areas under the curve (by 23 and 29%, respectively, P < 0.05) at a standardized white bread breakfast fed the following morning. (1) Increasing portion size to compensate for the considerable portion of RS in a low-GI barley product had no significant impact on GI or II. However, for GI testing, it is recommended to base the carbohydrate load on specific analyses of the available carbohydrate content. (2) A low-GI barley evening meal containing high levels of indigestible carbohydrates (RS and DF) substantially reduced the GI and II of white wheat bread determined at a subsequent breakfast meal.
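As a worked illustration of the two load definitions, the portion arithmetic can be reproduced from the figures in the abstract; the per-gram starch composition below is back-calculated from those figures and is therefore approximate.

```python
def portion_for_load(load_g, available_frac):
    """Grams of food needed to deliver load_g of 'available' carbohydrate,
    for whichever definition of availability available_frac encodes."""
    return load_g / available_frac

total_starch = 0.628   # g starch per g barley kernels (back-calculated, approximate)
rs_share = 0.152       # resistant starch as a share of total starch

print(portion_for_load(50, total_starch))                    # ~79.6 g, 'available carbohydrate' basis
print(portion_for_load(50, total_starch * (1 - rs_share)))   # ~93.9 g, available starch basis
```

The roughly 14 g difference between the two portions is the extra barley needed to replace starch that resists digestion.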
Differential metabolome analysis of field-grown maize kernels in response to drought stress
USDA-ARS?s Scientific Manuscript database
Drought stress constrains maize kernel development and can exacerbate aflatoxin contamination. In order to identify drought responsive metabolites and explore pathways involved in kernel responses, a metabolomics analysis was conducted on kernels from a drought tolerant line, Lo964, and a sensitive ...
7 CFR 868.203 - Basis of determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...
7 CFR 868.203 - Basis of determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...
7 CFR 868.304 - Broken kernels determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...
Image correlation microscopy for uniform illumination.
Gaborski, T R; Sealander, M N; Ehrenberg, M; Waugh, R E; McGrath, J L
2010-01-01
Image cross-correlation microscopy is a technique that quantifies the motion of fluorescent features in an image by measuring the temporal autocorrelation function decay in a time-lapse image sequence. Image cross-correlation microscopy has traditionally employed laser-scanning microscopes because the technique emerged as an extension of laser-based fluorescence correlation spectroscopy. In this work, we show that image correlation can also be used to measure fluorescence dynamics in uniform illumination or wide-field imaging systems and we call our new approach uniform illumination image correlation microscopy. Wide-field microscopy is not only a simpler, less expensive imaging modality, but it offers the capability of greater temporal resolution over laser-scanning systems. In traditional laser-scanning image cross-correlation microscopy, lateral mobility is calculated from the temporal de-correlation of an image, where the characteristic length is the illuminating laser beam width. In wide-field microscopy, the diffusion length is defined by the feature size using the spatial autocorrelation function. Correlation function decay in time occurs as an object diffuses from its original position. We show that theoretical and simulated comparisons between Gaussian and uniform features indicate the temporal autocorrelation function depends strongly on particle size and not particle shape. In this report, we establish the relationships between the spatial autocorrelation function feature size, temporal autocorrelation function characteristic time and the diffusion coefficient for uniform illumination image correlation microscopy using analytical, Monte Carlo and experimental validation with particle tracking algorithms. Additionally, we demonstrate uniform illumination image correlation microscopy analysis of adhesion molecule domain aggregation and diffusion on the surface of human neutrophils.
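A minimal numerical sketch of the quantities the method relates (the function names, the 1/e read-off, and the loader are illustrative assumptions): the spatial autocorrelation of a single frame gives the feature size w, the temporal autocorrelation across frames gives the characteristic decay time, and for two-dimensional diffusion D is approximately w^2/(4*tau).

```python
import numpy as np

def spatial_acf(frame):
    """Normalized 2-D spatial autocorrelation of one frame via FFT
    (Wiener-Khinchin); the central peak width gives the feature size."""
    d = frame - frame.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(d)) ** 2).real
    return np.fft.fftshift(acf) / acf.flat[0]   # normalize by the zero-lag value

def temporal_acf(stack):
    """Normalized temporal autocorrelation of a (T, H, W) time-lapse stack."""
    d = stack - stack.mean(axis=0)
    T = stack.shape[0]
    g = np.array([(d[:T - tau] * d[tau:]).mean() for tau in range(T // 2)])
    return g / g[0]

# stack = load_timelapse(...)            # hypothetical loader
# w     = lag at which spatial_acf falls to 1/e (in physical units)
# tau_c = lag at which temporal_acf falls to 1/e (in seconds)
# D     = w**2 / (4 * tau_c)             # 2-D diffusion coefficient estimate
```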
Workshop II On Unsteady Separated Flow Proceedings
1988-07-28
Flow visualization was achieved by injecting diluted food coloring at the apex through a 1.5 mm diameter tube; the static stall angle was 12 degrees. Finite differences with uniform step size, including three-point differences, were used in the computations. The nonlinearity of the flow properties for slender 3D wings and the "Kutta condition" are addressed, and the paper emphasizes recent progress in the study of unsteady separated flow.
Adding muscle where you need it: non-uniform hypertrophy patterns in elite sprinters.
Handsfield, G G; Knaus, K R; Fiorentino, N M; Meyer, C H; Hart, J M; Blemker, S S
2017-10-01
Sprint runners achieve much higher gait velocities and accelerations than average humans, due in part to large forces generated by their lower limb muscles. Various factors have been explored in the past to understand sprint biomechanics, but the distribution of muscle volumes in the lower limb has not been investigated in elite sprinters. In this study, we used non-Cartesian MRI to determine muscle sizes in vivo in a group of 15 NCAA Division I sprinters. Normalizing muscle sizes by body size, we compared sprinter muscles to non-sprinter muscles, calculated Z-scores to determine non-uniformly large muscles in sprinters, assessed bilateral symmetry, and assessed gender differences in sprinters' muscles. While limb musculature per height-mass was 22% greater in sprinters than in non-sprinters, individual muscles were not all uniformly larger. Hip- and knee-crossing muscles were significantly larger among sprinters (mean difference: 30%, range: 19-54%) but only one ankle-crossing muscle was significantly larger (tibialis posterior, 28%). Population-wide asymmetry was not significant in the sprint population but individual muscle asymmetries exceeded 15%. Gender differences in normalized muscle sizes were not significant. The results of this study suggest that non-uniform hypertrophy patterns, particularly large hip and knee flexors and extensors, are advantageous for fast sprinting. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
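A sketch of the normalization and Z-score computation described above (the array layout and the height-mass normalization form are assumptions based on the abstract):

```python
import numpy as np

def muscle_z_scores(sprinter_vol, control_vol, sprinter_hm, control_hm):
    """Z-scores of height-mass-normalized muscle volumes.
    *_vol: (subjects, muscles) arrays of volumes in cm^3;
    *_hm:  per-subject height * mass products used for normalization."""
    s = sprinter_vol / sprinter_hm[:, None]
    c = control_vol / control_hm[:, None]
    return (s.mean(axis=0) - c.mean(axis=0)) / c.std(axis=0, ddof=1)

# Muscles whose z-scores stand well above the average group difference
# flag non-uniform hypertrophy, e.g., the hip and knee flexors/extensors.
```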
Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, L.L.; Hendricks, J.S.
1983-01-01
The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
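A minimal sketch of the distance-to-collision part of the exponential transform follows; the stretching parameter p and the direction cosine mu are standard in this technique, while the biased scattering kernel of Dwivedi's scheme is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def transformed_flight(sigma_t, mu, p):
    """Sample a flight distance from the exponentially transformed kernel
    and return it with the weight that keeps the estimator unbiased.
    sigma_t: total macroscopic cross section; mu: cosine of the angle to
    the preferred (deep-penetration) direction; 0 <= p < 1."""
    sigma_star = sigma_t * (1.0 - p * mu)          # stretched cross section
    s = rng.exponential(1.0 / sigma_star)          # biased flight distance
    # weight = true pdf / biased pdf, evaluated at the sampled distance
    weight = (sigma_t / sigma_star) * np.exp(-(sigma_t - sigma_star) * s)
    return s, weight
```

Particles moving toward the shield exit (mu near 1) see a reduced cross section and fly longer distances, carrying correspondingly reduced weights.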
Performance Characteristics of a Kernel-Space Packet Capture Module
2010-03-01
The proof of concept for this research is the design, development, and comparative performance analysis of a kernel-level N2d capture prototype (AFIT/GCO/ENG/10-03), which can be used for both user-space and kernel-space capture applications in order to control the comparative performance analysis.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two), producing features with higher expressive power. The method is based on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it strongly affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. Both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
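A minimal sketch of the entropy-based component ranking at the core of KECA, where the Renyi entropy contribution of the i-th eigenpair is lam_i * (e_i^T 1)^2; OKECA's extra rotation and its gradient-ascent optimization are not shown, and all names are illustrative.

```python
import numpy as np

def keca_transform(K, n_components):
    """Kernel entropy component analysis on an (uncentered) kernel matrix K.
    Components are ranked by Renyi entropy contribution, not by variance."""
    lam, E = np.linalg.eigh(K)                  # eigenvalues in ascending order
    ones = np.ones(K.shape[0])
    contrib = lam * (E.T @ ones) ** 2           # entropy contribution per eigenpair
    top = np.argsort(contrib)[::-1][:n_components]
    return E[:, top] * np.sqrt(np.clip(lam[top], 0.0, None))

# Usage with a Gaussian kernel; choosing sigma is exactly the length-scale
# selection problem the brief analyzes (maximum likelihood worked best there):
# sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
# K = np.exp(-sq / (2 * sigma ** 2))
# Z = keca_transform(K, 2)
```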
Experimental and numerical modeling research of rubber material during microwave heating process
NASA Astrophysics Data System (ADS)
Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling
2018-05-01
This paper investigates the heating behavior of block rubber by experiment and simulation; the COMSOL Multiphysics 5.0 software was used for the numerical work. The effects of microwave frequency, power, and sample size on the temperature distribution were examined. The effect of frequency is pronounced: the maximum and minimum temperatures of the block rubber first increase and then decrease with increasing frequency, and heating efficiency peaks at 2450 MHz, although other frequencies produce a more uniform temperature distribution. The influence of power is also marked: the lower the power, the more uniform the temperature distribution, while the effect of power on heating efficiency is small. Sample size matters as well: the smaller the sample, the more uniform the temperature distribution, but the lower the heating efficiency. The results can serve as references for research on heating rubber materials with microwave technology.
Brain tumor image segmentation using kernel dictionary learning.
Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H
2015-08-01
Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested on real brain magnetic resonance (MR) images of patients with high-grade glioma. The preliminary results are competitive with the state of the art. The discriminative kernel DL approach is seen to reduce the computational burden without much sacrifice in performance.
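Thanks to the kernel trick, the sparse-coding step of kernel DL can be written entirely in terms of kernel evaluations. Below is a minimal kernel orthogonal matching pursuit sketch under that assumption; the K-SVD-type dictionary update and the discriminative classifier term of the paper are omitted, and all names are illustrative.

```python
import numpy as np

def kernel_omp(k_Dx, K_DD, sparsity):
    """Sparse-code a signal x against implicit atoms Phi(d_i).
    k_Dx: vector of k(d_i, x); K_DD: kernel matrix k(d_i, d_j)."""
    S = []
    resid_corr = k_Dx.astype(float).copy()
    coef = np.array([])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(resid_corr)))   # atom most correlated with residual
        S.append(j)
        K_SS = K_DD[np.ix_(S, S)]
        # least squares in feature space: solve K_SS c = k_S(x)
        coef = np.linalg.solve(K_SS + 1e-8 * np.eye(len(S)), k_Dx[S])
        # correlation of every atom with the residual Phi(x) - Phi(D_S) c
        resid_corr = k_Dx - K_DD[:, S] @ coef
    return S, coef
```

After each least-squares fit, the residual correlations on the selected atoms are numerically zero, so the same atom is not chosen twice.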
SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL
Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan
2013-01-01
Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often rely on standard kernels. In this paper, we propose a positive definite kernel called the "structure kernel", which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) a global term that measures the global visual similarity of the two objects; 2) a part term that measures the visual similarity of corresponding parts; 3) a spatial term that measures the similarity of the geometric configuration of the parts. The contribution of this paper is to generalize the discriminative capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108
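A hedged sketch of the three-term construction follows; a weighted sum of positive definite kernels is itself positive definite, and for simplicity the part correspondence is assumed to be given, whereas the paper's kernel operates on learned part-based models. All names and weights are illustrative.

```python
import numpy as np

def rbf(u, v, gamma=0.5):
    """Gaussian RBF kernel between two descriptor vectors."""
    return np.exp(-gamma * np.sum((np.asarray(u) - np.asarray(v)) ** 2))

def structure_kernel(obj_a, obj_b, w=(1.0, 1.0, 1.0), gamma=0.5):
    """obj = (global_desc, part_descs, part_locs), with the parts of the two
    objects already placed in correspondence (an assumption of this sketch)."""
    g_a, parts_a, locs_a = obj_a
    g_b, parts_b, locs_b = obj_b
    k_global = rbf(g_a, g_b, gamma)                                          # global term
    k_part = np.mean([rbf(p, q, gamma) for p, q in zip(parts_a, parts_b)])   # part term
    k_spatial = np.mean([rbf(p, q, gamma) for p, q in zip(locs_a, locs_b)])  # spatial term
    return w[0] * k_global + w[1] * k_part + w[2] * k_spatial
```

The resulting Gram matrix can be plugged into any kernel machine (e.g., an SVM) in place of a standard image kernel.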
Chapin, Jay W; Thomas, James S
2003-08-01
Pitfall traps placed in South Carolina peanut, Arachis hypogaea (L.), fields collected three species of burrower bugs (Cydnidae): Cyrtomenus ciliatus (Palisot de Beauvois), Sehirus cinctus cinctus (Palisot de Beauvois), and Pangaeus bilineatus (Say). Cyrtomenus ciliatus was rarely collected. Sehirus cinctus produced a nymphal cohort in peanut during May and June, probably because of abundant henbit seeds, Lamium amplexicaule L., in strip-till production systems. No S. cinctus were present during peanut pod formation. Pangaeus bilineatus was the most abundant species collected and the only species associated with peanut kernel feeding injury. Overwintering P. bilineatus adults were present in a conservation-tillage peanut field before planting, and two to three subsequent generations were observed. Few nymphs were collected until the R6 (full seed) growth stage. Tillage and choice of cover crop affected P. bilineatus populations: peanuts strip-tilled into corn or wheat residue had greater P. bilineatus populations and more kernel feeding than peanuts under conventional tillage or strip-tillage into rye residue. Fall tillage before planting a wheat cover crop also reduced burrower bug feeding on peanut. At-pegging (early July) granular chlorpyrifos treatments were the most consistent in suppressing kernel feeding. Kernels fed on by P. bilineatus were on average 10% lighter than kernels that had not been fed on. Pangaeus bilineatus feeding reduced peanut grade by reducing individual kernel weight and increasing the percentage of damaged kernels. Each 10% increase in kernels fed on by P. bilineatus was associated with a 1.7% decrease in total sound mature kernels, and feeding levels above 30% increased the risk of damaged-kernel grade penalties.